Very Brief Introduction to Machine Learning for AI

The topics summarized here are covered in these slides.
Intelligence
The notion of intelligence can be defined in many ways. Here we define it as the ability to take the right decisions, according to some criterion (e.g. survival and reproduction, for most animals). Taking better decisions requires knowledge, in a form that is operational, i.e., knowledge that can be used to interpret sensory data and use that information to take decisions.
Artificial Intelligence
Computers already possess some intelligence thanks to all the programs that humans have crafted, which allow them to “do things” that we consider useful (and that is basically what we mean for a computer to take the right decisions). But there are many tasks which animals and humans are able to do rather easily, yet which remain out of reach of computers at the beginning of the 21st century. Many of these tasks fall under the label of Artificial Intelligence, and include many perception and control tasks. Why is it that we have failed to write programs for these tasks? I believe that it is mostly because we do not know explicitly (formally) how to do these tasks, even though our brain (coupled with a body) can do them. Doing those tasks involves knowledge that is currently implicit, but we do have information about those tasks through data and examples (e.g. observations of what a human would do given a particular request or input). How do we get machines to acquire that kind of intelligence? Using data and examples to build operational knowledge is what learning is about.
Machine Learning
Machine learning has a long history and numerous textbooks have been written that do a good job of covering its main principles. Among the recent ones I suggest:
- Chris Bishop, “Pattern Recognition and Machine Learning”, 2007
- Simon Haykin, “Neural Networks: a Comprehensive Foundation”, 2009 (3rd edition)
- Richard O. Duda, Peter E. Hart and David G. Stork, “Pattern Classification”, 2001 (2nd edition)
Here we focus on a few concepts that are most relevant to this course.
Formalization of Learning
First, let us formalize the most common mathematical framework for learning. We are given training examples

$D = \{z_1, z_2, \ldots, z_n\}$

with the $z_i$ being examples sampled from an unknown process $P(Z)$. We are also given a loss functional $L$ which takes as argument a decision function $f$ and an example $z$, and returns a real-valued scalar. We want to minimize the expected value of $L(f, Z)$ under the unknown generating process $P(Z)$.
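To make this concrete, here is a minimal sketch of the idea of empirical risk minimization, in Python with NumPy; the dataset, the candidate function family, and the choice of squared-error loss are all invented for illustration. Since $P(Z)$ is unknown, we approximate the expected loss by its average over the training examples and pick the $f$ that minimizes that average.

```python
import numpy as np

# Hypothetical training set D = {z_1, ..., z_n}, each z_i drawn from an
# unknown process P(Z); here we simply fabricate z_i = (x_i, y_i) pairs.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x + rng.normal(scale=0.1, size=100)  # the "unknown" process

def loss(f, z):
    """Loss functional L(f, z): takes a decision function and one example,
    returns a real-valued scalar (squared error, for this sketch)."""
    xi, yi = z
    return (f(xi) - yi) ** 2

def empirical_risk(f, data):
    """Average of L(f, z) over the training set -- our stand-in for the
    expectation of L(f, Z) under the unknown generating process P(Z)."""
    return np.mean([loss(f, z) for z in data])

data = list(zip(x, y))
# Pick the best f from a tiny (made-up) family of candidate functions.
candidates = [lambda t, w=w: w * t for w in np.linspace(-5, 5, 101)]
best = min(candidates, key=lambda f: empirical_risk(f, data))
print(empirical_risk(best, data))
```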
Supervised Learning
In supervised learning, each example is an (input, target) pair: $Z = (X, Y)$, and $f$ takes an $X$ as argument. The most common examples are
- regression: $Y$ is a real-valued scalar or vector, the output of $f$ is in the same set of values as $Y$, and we often take as loss functional the squared error

  $L(f, (X, Y)) = \|f(X) - Y\|^2$

- classification: $Y$ is a finite integer (e.g. a symbol) corresponding to a class index, and we often take as loss function the negative conditional log-likelihood, with the interpretation that $f_i(X)$ estimates $P(Y = i \mid X)$:

  $L(f, (X, Y)) = -\log f_Y(X)$

  where we have the constraints

  $f_Y(X) \geq 0, \quad \sum_i f_i(X) = 1$
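Both loss functionals above can be written out directly. The following is a hedged illustration in plain NumPy, with fabricated predictions, targets, and class index:

```python
import numpy as np

def squared_error(f_x, y):
    """Regression loss L(f,(X,Y)) = ||f(X) - Y||^2."""
    return np.sum((np.asarray(f_x) - np.asarray(y)) ** 2)

def neg_log_likelihood(f_x, y):
    """Classification loss L(f,(X,Y)) = -log f_Y(X), where f(X) is a
    vector of class probabilities: non-negative and summing to 1."""
    f_x = np.asarray(f_x)
    assert np.all(f_x >= 0) and np.isclose(f_x.sum(), 1.0)  # the constraints
    return -np.log(f_x[y])

print(squared_error([0.9, 2.1], [1.0, 2.0]))   # regression example
print(neg_log_likelihood([0.1, 0.7, 0.2], 1))  # class index Y = 1
```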
Unsupervised Learning
In unsupervised learning we are learning a function $f$ which helps to characterize the unknown distribution $P(Z)$. Sometimes $f$ is directly an estimator of $P(Z)$ itself (this is called density estimation). In many other cases, $f$ is an attempt to characterize where the density concentrates. Clustering algorithms divide up the input space into regions (often centered around a prototype example or centroid). Some clustering algorithms create a hard partition (e.g. the k-means algorithm) while others construct a soft partition (e.g. a Gaussian mixture model), which assigns to each $Z$ a probability of belonging to each cluster. Another kind of unsupervised learning algorithm constructs a new representation for $Z$. Many deep learning algorithms fall in this category, and so does Principal Components Analysis (PCA).
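A small sketch of the hard-versus-soft distinction, in NumPy with invented centroids and input: a k-means-style partition assigns each $Z$ to exactly one region, while a Gaussian-mixture-style partition assigns it a probability of belonging to each cluster.

```python
import numpy as np

centroids = np.array([[0.0, 0.0], [4.0, 4.0], [0.0, 4.0]])  # made-up prototypes
z = np.array([1.0, 3.0])                                    # one input example

# Hard partition (k-means style): index of the nearest centroid.
d2 = np.sum((centroids - z) ** 2, axis=1)
hard = np.argmin(d2)

# Soft partition (Gaussian-mixture style, unit variance, equal priors):
# responsibilities proportional to exp(-||z - mu_k||^2 / 2).
resp = np.exp(-d2 / 2.0)
soft = resp / resp.sum()

print(hard)  # a single cluster index
print(soft)  # one probability per cluster, summing to 1
```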
Local Generalization
The vast majority of learning algorithms exploit a single principle for achieving generalization: local generalization. It assumes that if input example $x_i$ is close to input example $x_j$, then the corresponding outputs $f(x_i)$ and $f(x_j)$ should also be close. This is basically the principle used to perform local interpolation. This principle is very powerful, but it has limitations: what if we have to extrapolate? Or, equivalently, what if the target unknown function has many more variations than the number of training examples? In that case there is no way that local generalization will work, because we need at least as many examples as there are ups and downs of the target function, in order to cover those variations and be able to generalize by this principle. This issue is deeply connected to the so-called curse of dimensionality, for the following reason. When the input space is high-dimensional, it is easy for it to have a number of variations of interest that is exponential in the number of input dimensions. For example, imagine that we want to distinguish between 10 different values of each input variable (each element of the input vector), and that we care about all of the $10^n$ configurations of these $n$ variables. Using only local generalization, we need to see at least one example of each of these $10^n$ configurations in order to be able to generalize to all of them.
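A minimal sketch of this failure mode, in NumPy; the target function and sample sizes are made up. A nearest-neighbor predictor is the purest form of local generalization, and it breaks down when the target varies more often than there are training examples:

```python
import numpy as np

rng = np.random.default_rng(1)

def target(x):
    return np.sin(20 * x)  # many ups and downs on [0, 1]

x_train = rng.uniform(0, 1, size=10)  # far fewer examples than variations
y_train = target(x_train)

def nearest_neighbor(x):
    """Predict with the output of the closest training input --
    'close inputs should give close outputs'."""
    return y_train[np.argmin(np.abs(x_train - x))]

x_test = np.linspace(0, 1, 200)
err = np.mean((np.array([nearest_neighbor(x) for x in x_test])
               - target(x_test)) ** 2)
print(err)  # large: 10 examples cannot cover sin(20x)'s ups and downs
```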
Distributed versus Local Representation and Non-Local Generalization
A simple-minded binary local representation of integer $N$ is a sequence of $B$ bits such that $N < B$, where all bits are 0 except the $N$-th one. A simple-minded binary distributed representation of integer $N$ is a sequence of $\log_2 B$ bits with the usual binary encoding for $N$. In this example we see that distributed representations can be exponentially more efficient than local ones. In general, for learning algorithms, distributed representations have the potential to capture exponentially more variations than local ones for the same number of free parameters. They hence offer the potential for better generalization, because learning theory shows that the number of examples needed (to achieve a desired degree of generalization performance) to tune $O(B)$ effective degrees of freedom is $O(B)$.
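A quick sketch of the two encodings, in plain Python with an arbitrarily chosen B: the local (one-hot) code needs $B$ bits to distinguish $B$ integers, while the distributed (binary) code needs only $\log_2 B$.

```python
import math

B = 8  # arbitrary number of distinct integers to represent

def local_code(n, B):
    """One-hot: B bits, all 0 except the n-th."""
    return [1 if i == n else 0 for i in range(B)]

def distributed_code(n, B):
    """Usual binary encoding: only log2(B) bits."""
    width = math.ceil(math.log2(B))
    return [(n >> i) & 1 for i in reversed(range(width))]

print(local_code(5, B))        # [0, 0, 0, 0, 0, 1, 0, 0] -- 8 bits
print(distributed_code(5, B))  # [1, 0, 1]                -- 3 bits
```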
Another illustration of the difference between distributed and local representations (and the corresponding non-local and local generalization) is the contrast between (traditional) clustering and Principal Components Analysis (PCA) or Restricted Boltzmann Machines (RBMs). The former is local while the latter two are distributed. With k-means clustering we maintain a vector of parameters for each prototype, i.e., one for each of the regions distinguishable by the learner. With PCA we represent the distribution by keeping track of its major directions of variation. Now imagine a simplified interpretation of PCA in which we care mostly, for each direction of variation, whether the projection of the data in that direction is above or below a threshold. With $k$ directions, we can thus distinguish between $2^k$ regions. RBMs are similar in that they define $k$ hyper-planes and associate a bit indicating on which side of each hyper-plane the input lies. An RBM therefore associates one input region to each configuration of the representation bits (these bits are called the hidden units, in neural network parlance). The number of parameters of the RBM is roughly equal to the number of these bits times the input dimension. Again, we see that the number of regions representable by an RBM or by PCA (distributed representation) can grow exponentially in the number of parameters, whereas the number of regions representable by traditional clustering (e.g. k-means or a Gaussian mixture, local representation) grows only linearly with the number of parameters. Another way to look at this is to realize that an RBM can generalize to a new region corresponding to a configuration of its hidden unit bits for which no example was seen, something not possible for clustering algorithms (except in the trivial sense of locally generalizing to that new region what has been learned for the nearby regions for which examples have been seen).
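The following sketch, in NumPy with invented directions and data, illustrates the simplified thresholded-projection view described above: the sign of the projection of $x$ onto each of $k$ directions yields a $k$-bit code, so $k$ directions can carve the input space into up to $2^k$ distinguishable regions, whereas k-means with $k$ prototypes only ever distinguishes $k$ regions.

```python
import numpy as np

rng = np.random.default_rng(2)
k, d = 4, 10                    # k directions in a d-dimensional input space
W = rng.normal(size=(k, d))     # made-up "directions of variation"
X = rng.normal(size=(1000, d))  # made-up input examples

# Each example gets a k-bit code: which side of each hyper-plane it falls on.
codes = (X @ W.T > 0).astype(int)

# Distinct codes observed = regions actually used, out of 2^k possible.
n_regions = len({tuple(c) for c in codes})
print(n_regions, "regions observed, out of", 2 ** k, "possible with k =", k)
```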