[Overview] UC Berkeley CS189 "Introduction to Machine Learning" is a machine learning course aimed at beginners. In this guide, we have put together a comprehensive course companion to share our knowledge with students and the general public, and we hope it draws the interest of students at other universities to Berkeley's machine learning course.
Lecture notes:
Note 1: Introduction
Note 2: Linear Regression
Note 3: Features, Hyperparameters, Validation
Note 4: MLE and MAP for Regression (Part I)
Note 5: Bias-Variance Tradeoff
Note 6: Multivariate Gaussians
Note 7: MLE and MAP for Regression (Part II)
Note 8: Kernels, Kernel Ridge Regression
Note 9: Total Least Squares
Note 10: Principal Component Analysis (PCA)
Note 11: Canonical Correlation Analysis (CCA)
Note 12: Nonlinear Least Squares, Optimization
Note 13: Gradient Descent Extensions
Note 14: Neural Networks
Note 15: Training Neural Networks
Note 16: Discriminative vs. Generative Classification, LS-SVM
Note 17: Logistic Regression
Note 18: Gaussian Discriminant Analysis
Note 19: Expectation-Maximization (EM) Algorithm, k-means Clustering
Note 20: Support Vector Machines (SVM)
Note 21: Generalization and Stability
Note 22: Duality
Note 23: Nearest Neighbor Classification
Note 24: Sparsity
Note 25: Decision Trees and Random Forests
Note 26: Boosting
Note 27: Convolutional Neural Networks (CNN)
Discussion sections:
Discussion 0: Vector Calculus, Linear Algebra (solution)
Discussion 1: Optimization, Least Squares, and Convexity (solution)
Discussion 2: Ridge Regression and Multivariate Gaussians (solution)
Discussion 3: Multivariate Gaussians and Kernels (solution)
Discussion 4: Principal Component Analysis (solution)
Discussion 5: Least Squares and Kernels (solution)
Discussion 6: Optimization and Reviewing Linear Methods (solution)
Discussion 7: Backpropagation and Computation Graphs (solution)
Discussion 8: QDA and Logistic Regression (solution)
Discussion 9: EM (solution)
Discussion 10: SVMs and KNN (solution)
Discussion 11: Decision Trees (solution)
Discussion 12: LASSO, Sparsity, Feature Selection, Auto-ML (solution)
Lecture notes download link: //pan.baidu.com/s/19Zmws53BUzjSvaDMEiUhqQ  Password: u2xs
Machine learning uses tools from a variety of mathematical fields. This document is an attempt to provide a summary of the mathematical background needed for an introductory class in machine learning, which at UC Berkeley is known as CS 189/289A.
//people.eecs.berkeley.edu/~jrs/189/
Our assumption is that the reader is already familiar with the basic concepts of multivariable calculus and linear algebra (at the level of UCB Math 53/54). We emphasize that this document is not a substitute for the prerequisite classes. Most subjects presented here are covered rather minimally; we intend to give an overview and point the interested reader to more comprehensive treatments for further details.
Note that this document concerns the mathematical background for machine learning, not machine learning itself. We will not discuss specific machine learning models or algorithms, except possibly in passing to highlight the relevance of a mathematical concept.
Earlier versions of this document did not include proofs. We have begun adding in some proofs that are relatively short and help build intuition. These proofs are not necessary background for CS 189, but they can be used to deepen the reader's understanding.
Contents: - General notation
Machine learning is the study of algorithms that learn from data and experience. It is applied in a vast variety of application areas, from medicine to advertising, from military to pedestrian. Any area in which you need to make sense of data is a potential customer of machine learning. "A Course in Machine Learning" is an introductory resource that covers most of the major aspects of modern machine learning (supervised learning, unsupervised learning, large margin methods, probabilistic modeling, learning theory, etc.). Its focus is on broad applications with a rigorous backbone.
Machine learning is a broad and fascinating field. Even today, machine learning technology runs a substantial part of your life, often without you knowing it. In some sense, any plausible approach to artificial intelligence must involve learning, if for no other reason than that it is hard to call a system intelligent if it cannot learn. Machine learning is also fascinating in its own right for the philosophical questions it raises about what it means to learn, and to succeed at a task.
At the same time, machine learning is a very broad field, and attempting to cover everything would be a pedagogical disaster. It is also moving so quickly that any book that tries to cover the latest developments will be outdated before it even goes online. The book therefore has two goals. First, to be a gentle introduction to what is a very deep field. Second, to give readers the skills they need to pick up new techniques as they are developed.
The latest 417-page PDF of "Mathematics for Machine Learning" by Marc Peter Deisenroth, A Aldo Faisal, and Cheng Soon Ong has been released. The authors say they wrote the book to motivate people to learn mathematical concepts. It is not intended to cover cutting-edge machine learning techniques, since plenty of books already do that; instead, the authors aim to provide the mathematical foundations needed to read those other books. The book is split into two parts: mathematical foundations, and example machine learning algorithms that use those foundations. Well worth saving and studying for beginners!
*《Connections between Support Vector Machines, Wasserstein distance and gradient-penalty GANs》A Jolicoeur-Martineau, I Mitliagkas [Mila] (2019)
The tutorial is written for those who would like an introduction to reinforcement learning (RL). The aim is to provide an intuitive presentation of the ideas rather than concentrate on the deeper mathematics underlying the topic. RL is generally used to solve the so-called Markov decision problem (MDP). In other words, the problem that you are attempting to solve with RL should be an MDP or its variant. The theory of RL relies on dynamic programming (DP) and artificial intelligence (AI). We will begin with a quick description of MDPs. We will discuss what we mean by “complex” and “large-scale” MDPs. Then we will explain why RL is needed to solve complex and large-scale MDPs. The semi-Markov decision problem (SMDP) will also be covered.
The tutorial is meant to serve as an introduction to these topics and is based mostly on the book: "Simulation-based optimization: Parametric Optimization techniques and reinforcement learning" [4]. The book discusses this topic in greater detail in the context of simulators. There are at least two other textbooks that I would recommend you read: (i) Neuro-dynamic programming [2] (lots of details on convergence analysis) and (ii) Reinforcement Learning: An Introduction [11] (lots of details on underlying AI concepts). A more recent tutorial on this topic is [8]. This tutorial has 2 sections: Section 2 discusses MDPs and SMDPs, and Section 3 discusses RL. By the end of this tutorial, you should be able to (i) identify problem structures that can be set up as MDPs / SMDPs, and (ii) use some RL algorithms.
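The tutorial's framing is that RL is used to solve MDPs, with dynamic programming supplying the underlying update rule. As a rough illustration of that idea (not taken from the tutorial itself), below is a minimal tabular Q-learning sketch on a made-up two-state MDP; the transition table, rewards, and hyperparameters are all hypothetical and chosen only for demonstration.

```python
import random

# A tiny hypothetical MDP: 2 states, 2 actions.
# transitions[state][action] = (next_state, reward); values are illustrative only.
transitions = {
    0: {0: (0, 0.0), 1: (1, 1.0)},
    1: {0: (0, 2.0), 1: (1, 0.0)},
}

alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount factor, exploration rate
Q = {s: {a: 0.0 for a in (0, 1)} for s in (0, 1)}  # tabular action-value estimates

state = 0
for step in range(10_000):
    # epsilon-greedy action selection
    if random.random() < epsilon:
        action = random.choice((0, 1))
    else:
        action = max(Q[state], key=Q[state].get)

    next_state, reward = transitions[state][action]

    # Q-learning update: move Q(s, a) toward r + gamma * max_a' Q(s', a')
    best_next = max(Q[next_state].values())
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

    state = next_state

print(Q)  # learned action values for each state
```

On this toy problem the estimates should end up favoring action 1 in state 0 and action 0 in state 1 (the rewarding cycle), which is the DP-style bootstrapping the tutorial alludes to.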