Stanford University Open Course: Machine Learning

Course Description

This course provides a broad introduction to machine learning and statistical pattern recognition. 
Topics include: supervised learning (generative/discriminative learning, parametric/non-parametric learning, neural networks, support vector machines); unsupervised learning (clustering, dimensionality reduction, kernel methods); learning theory (bias/variance tradeoffs; VC theory; large margins); reinforcement learning and adaptive control. 

The course will also discuss recent applications of machine learning, such as to robotic control, data mining, autonomous navigation, bioinformatics, speech recognition, and text and web data processing.

Students are expected to have the following background:

Prerequisites: – Knowledge of basic computer science principles and skills, at a level sufficient to write a reasonably non-trivial computer program.

– Familiarity with basic probability theory. (Stat 116 is sufficient but not necessary.)
– Familiarity with basic linear algebra. (Any one of Math 51, Math 103, Math 113, or CS 205 would be much more than necessary.)

https://see.stanford.edu/Course/CS229

Course Outline

Lecture 1: The Motivation and Applications of Machine Learning
Topics: The Motivation & Applications of Machine Learning, The Logistics of the Class, The Definition of Machine Learning, The Overview of Supervised Learning, The Overview of Learning Theory, The Overview of Unsupervised Learning, The Overview of Reinforcement Learning


Lecture 2: An Application of Supervised Learning; Gradient Descent
Topics: An Application of Supervised Learning – Autonomous Driving, ALVINN, Linear Regression, Gradient Descent, Batch Gradient Descent, Stochastic Gradient Descent (Incremental Descent), Matrix Derivative Notation for Deriving Normal Equations, Derivation of Normal Equations

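Since gradient descent and the normal equations are the core algorithms of this lecture, here is a minimal NumPy sketch (not from the course materials; the synthetic data, learning rates, and iteration counts are arbitrary assumptions):

```python
import numpy as np

# Synthetic 1-D linear-regression problem: y = 2x + 1 + noise (assumed example data).
rng = np.random.default_rng(0)
X = np.c_[np.ones(100), rng.uniform(0, 5, 100)]   # design matrix with intercept column
y = X @ np.array([1.0, 2.0]) + 0.1 * rng.standard_normal(100)

def batch_gradient_descent(X, y, lr=0.05, n_iters=2000):
    """Batch update: theta := theta - lr * mean gradient over the whole training set."""
    theta = np.zeros(X.shape[1])
    for _ in range(n_iters):
        theta -= lr * X.T @ (X @ theta - y) / len(y)
    return theta

def stochastic_gradient_descent(X, y, lr=0.01, n_epochs=50):
    """Incremental (stochastic) update: one training example at a time."""
    theta = np.zeros(X.shape[1])
    for _ in range(n_epochs):
        for i in rng.permutation(len(y)):
            theta -= lr * (X[i] @ theta - y[i]) * X[i]
    return theta

# Closed-form solution from the normal equations: theta = (X^T X)^{-1} X^T y.
theta_normal = np.linalg.solve(X.T @ X, X.T @ y)

print(batch_gradient_descent(X, y))        # each should be approximately [1, 2]
print(stochastic_gradient_descent(X, y))
print(theta_normal)
```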

Lecture 3: The Concepts of Underfitting and Overfitting
Topics: The Concept of Underfitting and Overfitting, The Concept of Parametric Algorithms and Non-parametric Algorithms, Locally Weighted Regression, The Probabilistic Interpretation of Linear Regression, The Motivation of Logistic Regression, Logistic Regression, Perceptron

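As a rough illustration of locally weighted regression, the sketch below solves a weighted least-squares problem around each query point; the Gaussian bandwidth `tau` and the toy sine data are assumptions for demonstration only:

```python
import numpy as np

def locally_weighted_regression(x_query, X, y, tau=0.5):
    """Predict at x_query by weighted least squares, where training points
    near x_query receive exponentially larger weights."""
    Xd = np.c_[np.ones(len(X)), X]                      # add intercept column
    w = np.exp(-(X - x_query) ** 2 / (2 * tau ** 2))    # Gaussian weights
    W = np.diag(w)
    # theta = (X^T W X)^{-1} X^T W y  (weighted normal equations)
    theta = np.linalg.solve(Xd.T @ W @ Xd, Xd.T @ W @ y)
    return np.array([1.0, x_query]) @ theta

# Toy non-linear data: y = sin(x) + noise (assumed example).
rng = np.random.default_rng(1)
X = np.linspace(0, 6, 80)
y = np.sin(X) + 0.1 * rng.standard_normal(80)

print(locally_weighted_regression(1.5, X, y))  # close to sin(1.5) ~ 0.997
```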

Lecture 4: Newton's Method
Topics: Newton's Method, Exponential Family, Bernoulli Example, Gaussian Example, Generalized Linear Models (GLMs), Multinomial Example, Softmax Regression

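A possible sketch of Newton's method applied to logistic regression (my illustration, not course code; the synthetic labels and fixed iteration count are assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def newtons_method_logistic(X, y, n_iters=10):
    """Newton update: theta := theta - H^{-1} grad, using the gradient and
    Hessian of the logistic-regression negative log-likelihood."""
    theta = np.zeros(X.shape[1])
    for _ in range(n_iters):
        h = sigmoid(X @ theta)
        grad = X.T @ (h - y)                 # gradient
        H = X.T @ np.diag(h * (1 - h)) @ X   # Hessian
        theta -= np.linalg.solve(H, grad)
    return theta

# Toy data (assumed): label is 1 when x1 + x2 plus some noise is positive.
rng = np.random.default_rng(2)
X = np.c_[np.ones(200), rng.standard_normal((200, 2))]
y = (X[:, 1] + X[:, 2] + 0.3 * rng.standard_normal(200) > 0).astype(float)

theta = newtons_method_logistic(X, y)
print(theta)  # decision boundary roughly along x1 + x2 = 0
```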

Lecture 5: Generative Learning Algorithms
Topics: Discriminative Algorithms, Generative Algorithms, Gaussian Discriminant Analysis (GDA), GDA and Logistic Regression, Naive Bayes, Laplace Smoothing

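To make Naive Bayes with Laplace smoothing concrete, here is a small sketch over a made-up binary word-occurrence matrix (the vocabulary size and example documents are invented for illustration):

```python
import numpy as np

def train_naive_bayes(X, y):
    """Bernoulli Naive Bayes with Laplace (add-one) smoothing.
    X is a binary document-by-word matrix, y holds 0/1 class labels."""
    phi_y = y.mean()                                        # P(y = 1)
    phi_j_1 = (X[y == 1].sum(axis=0) + 1) / (np.sum(y == 1) + 2)  # P(word_j | y = 1)
    phi_j_0 = (X[y == 0].sum(axis=0) + 1) / (np.sum(y == 0) + 2)  # P(word_j | y = 0)
    return phi_y, phi_j_0, phi_j_1

def predict(x, phi_y, phi_j_0, phi_j_1):
    """Compare log P(x | y = 1) P(y = 1) against log P(x | y = 0) P(y = 0)."""
    log_p1 = np.log(phi_y) + np.sum(x * np.log(phi_j_1) + (1 - x) * np.log(1 - phi_j_1))
    log_p0 = np.log(1 - phi_y) + np.sum(x * np.log(phi_j_0) + (1 - x) * np.log(1 - phi_j_0))
    return int(log_p1 > log_p0)

# Tiny made-up "spam" data set: columns mark the presence of 4 vocabulary words.
X = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 0, 1, 1],
              [0, 1, 0, 1]])
y = np.array([1, 1, 0, 0])

params = train_naive_bayes(X, y)
print(predict(np.array([1, 1, 1, 0]), *params))  # expected: 1 (spam-like)
```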

Lecture 6: The Naive Bayes Algorithm
Topics: Multinomial Event Model, Non-linear Classifiers, Neural Network, Applications of Neural Network, Intuitions about Support Vector Machine (SVM), Notation for SVM, Functional and Geometric Margins


Lecture 7: The Optimal Margin Classifier
Topics: Optimal Margin Classifier, Lagrange Duality, Karush-Kuhn-Tucker (KKT) Conditions, SVM Dual, The Concept of Kernels


Lecture 8: The Sequential Minimal Optimization Algorithm
Topics: Kernels, Mercer's Theorem, Non-linear Decision Boundaries and Soft Margin SVM, Coordinate Ascent Algorithm, The Sequential Minimal Optimization (SMO) Algorithm, Applications of SVM

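One way to see the kernel idea and Mercer's theorem in action is to build a Gaussian (RBF) Gram matrix and check that it is symmetric positive semi-definite; the bandwidth `gamma` and the random points below are assumptions:

```python
import numpy as np

def rbf_kernel(X, gamma=0.5):
    """Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2) for the Gaussian kernel."""
    sq_norms = np.sum(X ** 2, axis=1)
    sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2 * X @ X.T
    return np.exp(-gamma * sq_dists)

rng = np.random.default_rng(3)
X = rng.standard_normal((5, 2))
K = rbf_kernel(X)

# Mercer's theorem: a valid kernel's Gram matrix is symmetric positive semi-definite.
print(np.allclose(K, K.T))
print(np.all(np.linalg.eigvalsh(K) >= -1e-10))
```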

Lecture 9: Empirical Risk Minimization
Topics: Bias/variance Tradeoff, Empirical Risk Minimization (ERM), The Union Bound, Hoeffding Inequality, Uniform Convergence – The Case of Finite H, Sample Complexity Bound, Error Bound, Uniform Convergence Theorem & Corollary

Lecture 10: Feature Selection
Topics: Uniform Convergence – The Case of Infinite H, The Concept of ‘Shatter’ and VC Dimension, SVM Example, Model Selection, Cross Validation, Feature Selection
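
A hedged sketch of k-fold cross validation used for model selection; the polynomial-degree candidates and the noisy quadratic data are assumptions chosen only to show the mechanics:

```python
import numpy as np

def k_fold_cv_error(X, y, degree, k=5):
    """Average squared validation error of a degree-`degree` polynomial fit,
    estimated by k-fold cross validation."""
    idx = np.arange(len(y))
    folds = np.array_split(idx, k)
    errors = []
    for fold in folds:
        train = np.setdiff1d(idx, fold)
        coeffs = np.polyfit(X[train], y[train], degree)   # fit on k-1 folds
        pred = np.polyval(coeffs, X[fold])                 # validate on the held-out fold
        errors.append(np.mean((pred - y[fold]) ** 2))
    return np.mean(errors)

# Model selection over polynomial degree on noisy quadratic data (assumed example).
rng = np.random.default_rng(4)
X = rng.uniform(-3, 3, 60)
y = 1.0 - 2.0 * X + 0.5 * X ** 2 + 0.3 * rng.standard_normal(60)

for d in (1, 2, 6):
    print(d, k_fold_cv_error(X, y, d))  # degree 2 should have near-minimal CV error
```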

Lecture 11: Bayesian Statistics and Regularization
Topics: Bayesian Statistics and Regularization, Online Learning, Advice for Applying Machine Learning Algorithms, Debugging/fixing Learning Algorithms, Diagnostics for Bias & Variance, Optimization Algorithm Diagnostics, Diagnostic Example – Autonomous Helicopter, Error Analysis, Getting Started on a Learning Problem

Lecture 12: The K-means Algorithm
Topics: The Concept of Unsupervised Learning, K-means Clustering Algorithm, K-means Algorithm, Mixtures of Gaussians and the EM Algorithm, Jensen’s Inequality, The EM Algorithm, Summary
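
The K-means clustering loop named above can be sketched in a few lines; the two synthetic blobs and the choice of k = 2 are assumptions for illustration:

```python
import numpy as np

def k_means(X, k, n_iters=100, seed=0):
    """Standard K-means: alternate assigning points to the nearest centroid
    and moving each centroid to the mean of its assigned points."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        new_centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return centroids, labels

# Two well-separated blobs (assumed toy data).
rng = np.random.default_rng(5)
X = np.vstack([rng.standard_normal((50, 2)),
               rng.standard_normal((50, 2)) + [6, 6]])
centroids, labels = k_means(X, k=2)
print(centroids)  # roughly [0, 0] and [6, 6]
```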

Lecture 13: Gaussian Mixture Models
Topics: Mixtures of Gaussians, Mixture of Naive Bayes – Text Clustering (EM Application), Factor Analysis, Restrictions on a Covariance Matrix, The Factor Analysis Model, EM for Factor Analysis

Lecture 14: Principal Component Analysis
Topics: The Factor Analysis Model, EM for Factor Analysis, Principal Component Analysis (PCA), PCA as a Dimensionality Reduction Algorithm, Applications of PCA, Face Recognition by Using PCA
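
A minimal sketch of PCA as a dimensionality-reduction step via the eigendecomposition of the empirical covariance matrix (the toy data with one dominant direction is an assumption):

```python
import numpy as np

def pca(X, n_components):
    """PCA: center the data, form the empirical covariance matrix, and
    project onto its top eigenvectors."""
    X_centered = X - X.mean(axis=0)
    cov = X_centered.T @ X_centered / len(X)
    eigvals, eigvecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    top = eigvecs[:, ::-1][:, :n_components]      # top principal directions
    return X_centered @ top, top

# Toy data that varies mostly along one direction (assumed example).
rng = np.random.default_rng(6)
t = rng.standard_normal(200)
X = np.c_[t, 0.5 * t + 0.05 * rng.standard_normal(200), 0.05 * rng.standard_normal(200)]

Z, components = pca(X, n_components=1)
print(components.ravel())  # roughly proportional to [1, 0.5, 0], up to sign
```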

Lecture 15: Singular Value Decomposition
Topics: Latent Semantic Indexing (LSI), Singular Value Decomposition (SVD) Implementation, Independent Component Analysis (ICA), The Application of ICA, Cumulative Distribution Function (CDF), ICA Algorithm, The Applications of ICA

Lecture 16: Markov Decision Processes
Topics: Applications of Reinforcement Learning, Markov Decision Process (MDP), Defining Value & Policy Functions, Value Function, Optimal Value Function, Value Iteration, Policy Iteration
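
The value-iteration update for a finite MDP can be sketched as below; the three-state, two-action transition and reward numbers are made up purely for illustration:

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Value iteration for a finite MDP.
    P[a, s, s'] are transition probabilities, R[s] is the reward of state s.
    Bellman update: V(s) := R(s) + gamma * max_a sum_{s'} P(s'|s,a) V(s')."""
    V = np.zeros(R.shape[0])
    while True:
        V_new = R + gamma * np.max(P @ V, axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            break
        V = V_new
    policy = np.argmax(P @ V, axis=0)   # greedy policy w.r.t. the optimal values
    return V, policy

# Tiny 3-state, 2-action MDP (assumed numbers); only state 2 is rewarding.
P = np.array([
    [[0.8, 0.2, 0.0], [0.1, 0.9, 0.0], [0.0, 0.1, 0.9]],   # action 0
    [[0.0, 0.9, 0.1], [0.0, 0.1, 0.9], [0.0, 0.0, 1.0]],   # action 1
])
R = np.array([0.0, 0.0, 1.0])

V, policy = value_iteration(P, R)
print(V, policy)
```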

Lecture 17: Discretization and the Curse of Dimensionality
Topics: Generalization to Continuous States, Discretization & Curse of Dimensionality, Models/Simulators, Fitted Value Iteration, Finding Optimal Policy

Lecture 18: Linear Quadratic Regulation
Topics: State-action Rewards, Finite Horizon MDPs, The Concept of Dynamical Systems, Examples of Dynamical Models, Linear Quadratic Regulation (LQR), Linearizing a Non-Linear Model, Computing Rewards, Riccati Equation
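
A possible sketch of finite-horizon LQR computed by the backward Riccati recursion, applied to an assumed double-integrator model (the time step, cost weights, and horizon are arbitrary choices, not values from the lecture):

```python
import numpy as np

def finite_horizon_lqr(A, B, Q, R, T):
    """Finite-horizon discrete-time LQR via the backward Riccati recursion.
    Dynamics x_{t+1} = A x_t + B u_t, cost sum of x^T Q x + u^T R u.
    Returns time-varying gains K_t with u_t = -K_t x_t."""
    P = Q.copy()
    gains = []
    for _ in range(T):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]   # ordered from t = 0 to T-1

# Double-integrator example: state = (position, velocity), input = acceleration.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.eye(2)
R_cost = 0.1 * np.eye(1)

gains = finite_horizon_lqr(A, B, Q, R_cost, T=50)
x = np.array([1.0, 0.0])
for K in gains:                 # closed-loop rollout drives the state toward zero
    u = -K @ x
    x = A @ x + B @ u
print(x)  # should end up close to the origin
```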

Lecture 19: Differential Dynamic Programming
Topics: Advice for Applying Machine Learning, Debugging Reinforcement Learning (RL) Algorithm, Linear Quadratic Regulation (LQR), Differential Dynamic Programming (DDP), Kalman Filter & Linear Quadratic Gaussian (LQG), Predict/update Steps of Kalman Filter, Linear Quadratic Gaussian (LQG)

Lecture 20: Policy Search
Topics: Partially Observable MDPs (POMDPs), Policy Search, REINFORCE Algorithm, Pegasus Algorithm, Pegasus Policy Search, Applications of Reinforcement Learning

    Original author: 机器学习
    Original source: https://www.cnblogs.com/2008nmj/p/8215653.html