Course Name: Artificial Neural Networks and Deep Learning
Course Code: AI625
Course Type: Elective
Level: Master's (second cycle)
Year / Semester: Year 1 / Semester 2, or Year 2 / Semester 1
Teacher's Name: TBA
ECTS: 7
Lectures / week: Up to 6 teleconferences per week
Laboratories / week: None

Course Purpose and Objectives:
This course has two purposes: first, it introduces the fundamentals of artificial neural networks (ANNs); second, it covers more advanced topics such as deep learning (DL) networks. After a brief review of traditional neural networks and learning processes, the course presents the modern practice of deep networks, including training, optimization, convolutional networks, and recurrent and recursive networks. In addition, the course focuses on practical methods for the design, data preprocessing, hyperparameter selection, implementation, and performance evaluation of deep learning systems, as well as the application of deep learning techniques to real-world problems such as big-data mining, image processing, and natural language processing.

Learning Outcomes:
Upon successful completion of this course, students should be able to:
- Review the basic algorithms and methods of artificial neural network fundamentals and their training algorithms.
- Discuss, explain, and report on various deep learning algorithms for specific problems.
- Select and justify appropriate algorithms/methods to meet a problem's specification.
- Analyze and combine known techniques to address real-world problems.
- Analyze and preprocess given data so that it suits ANN and DL algorithms/methods.
- Assemble the different parts of a system (data preprocessing, implementation, algorithms, user interface) into a new, working learning system.
- Evaluate the performance of the developed learning system.
- Discuss the basic concepts behind several types of deep neural networks.
- Apply deep learning methods to a variety of tasks.
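As a flavor of the "traditional neural networks and learning processes" the course reviews first, the classic single-layer perceptron learning rule can be sketched in a few lines. This is an illustrative sketch only, not course material; the function names and the OR-gate training data are our own.

```python
# Minimal single-layer perceptron sketch: learns the linearly separable
# OR function with the classic perceptron update rule.

def perceptron_train(samples, epochs=20, lr=0.1):
    """Train weights w and bias b; each sample is ((x1, x2), target)."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - pred          # +1, 0, or -1
            w[0] += lr * err * x[0]      # move the decision boundary
            w[1] += lr * err * x[1]      # toward misclassified points
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

or_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = perceptron_train(or_data)
print([predict(w, b, x) for x, _ in or_data])  # [0, 1, 1, 1]
```

The update only fires on misclassified samples; for linearly separable data such as OR, the rule converges in a handful of epochs.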
Prerequisites: Basic knowledge

Course Content:

- Fundamentals of machine learning: Introduces selected aspects of traditional machine learning, such as neural networks (NNs), that have significantly influenced the development of deep learning algorithms. After a brief introduction to neural networks, the models of a neuron and network architectures are discussed. Different types of learning processes are then presented. Finally, several aspects of single-layer perceptron training are discussed.

- Deep feedforward networks: Presents deep learning neural network models for function approximation. A simple learning example and gradient-based learning are discussed, along with other aspects such as hidden units and architecture design. The back-propagation algorithm for deep learning and the foundations of its variants are then introduced. The related algorithms are analyzed extensively, and implementation aspects are discussed.

- Regularization for deep learning: Introduces selected advanced techniques for the regularization and optimization of deep network models, such as norm penalties on parameters, norm penalties as constrained optimization, and dataset augmentation. In addition, the semi-supervised learning paradigm, feature extraction techniques, and bagging and other ensemble methods are discussed.

- Optimization for training deep models: Discusses several challenges of optimizing training, such as parameter optimization and adaptive learning rates. The related algorithms are presented and analyzed, along with optimization strategies and meta-algorithms.

- Convolutional neural networks (CNNs): Introduces convolutional networks for scaling to large datasets, along with the main building blocks of CNNs, such as convolution filters and their characteristics (stride, depth, width), activation functions, and pooling operators. Several aspects of the convolution operation are discussed, efficient algorithms with random or unsupervised features are presented, and the neuroscientific basis of convolutional networks is outlined.

- Sequence modeling with recurrent and recursive neural networks (RNNs): Presents deep recurrent and recursive networks for time-series processing. The challenge of long-term dependencies, long short-term memory (LSTM) and other gating mechanisms, and optimization aspects are discussed.

- Practical methodology: Discusses general guidelines of a practical methodology for designing, building, and configuring applications that involve deep learning. These aspects include performance metrics, baseline models, gathering more data, hyperparameter selection, and debugging strategies. An example shows how to address these aspects in practice.
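The convolution filter and pooling operator named under the CNN topic, and the effect of the stride characteristic, can be illustrated with a tiny 1-D example. This is a plain-Python sketch under our own naming, not course material; as is conventional in deep learning, the "convolution" here is actually cross-correlation (the kernel is not flipped).

```python
# Tiny 1-D convolution (cross-correlation) and max pooling, illustrating
# the stride, filter, and pooling building blocks of a CNN layer.

def conv1d(signal, kernel, stride=1):
    """Slide the kernel over the signal, advancing `stride` steps at a time."""
    k = len(kernel)
    return [
        sum(signal[i + j] * kernel[j] for j in range(k))
        for i in range(0, len(signal) - k + 1, stride)
    ]

def max_pool(x, size=2):
    """Non-overlapping max pooling: keep the largest value in each window."""
    return [max(x[i:i + size]) for i in range(0, len(x), size)]

signal = [1, 2, 3, 4, 5, 6]
edge_kernel = [1, 0, -1]   # simple difference (edge-like) filter

print(conv1d(signal, edge_kernel, stride=1))  # [-2, -2, -2, -2]
print(conv1d(signal, edge_kernel, stride=2))  # stride 2 halves the output: [-2, -2]
print(max_pool([3, 1, 4, 1, 5, 9], size=2))   # [3, 4, 9]
```

Doubling the stride halves the number of output positions, which is exactly how stride trades spatial resolution for computation when convolutional networks are scaled to large datasets.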