2017-11-07
The History of Artificial Intelligence
Before 1955
- 1799–1805 The method of least squares (see the sketch after this list)
- 1936 The Turing machine
- 1950 The Turing test
- 1952 Machine learning (Arthur Samuel's first checkers-playing programs)
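Since the notes open with least squares, here is a minimal ordinary-least-squares line fit in Python with NumPy, in the spirit of the "six snippets of Python code" article cited in the references; the toy data points are an illustrative assumption:

```python
import numpy as np

# Ordinary least squares: find the line y = a*x + b that minimizes
# the sum of squared residuals, via NumPy's least-squares solver.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])        # toy inputs
y = np.array([1.1, 1.9, 3.2, 3.8, 5.1])        # noisy targets near y = x + 1
A = np.column_stack([x, np.ones_like(x)])       # design matrix [x, 1]
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)  # least-squares solution
print(f"fitted slope a = {a:.3f}, intercept b = {b:.3f}")
```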
1955 and after
- 1950–1960 Linear regression
- 1957 John von Neumann dies
- 1956 The Dartmouth Conference, an academic meeting held at Dartmouth College in the United States, where the term "artificial intelligence" was coined
- 1957 Frank Rosenblatt, drawing on work in neural perception, proposes the perceptron, a model very similar to today's machine learning models (see the sketch after this list)
- 1960 The method of least squares
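A minimal sketch of Rosenblatt's perceptron learning rule; the AND-gate data, learning rate, and epoch count are illustrative assumptions, not taken from the 1957 work:

```python
import numpy as np

# Perceptron learning rule: nudge the weights whenever a training
# example is misclassified. AND is linearly separable, so this converges.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])   # AND-gate labels
w = np.zeros(2)              # weights
b = 0.0                      # bias
lr = 0.1                     # learning rate

for epoch in range(20):
    for xi, target in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        w += lr * (target - pred) * xi   # update only on mistakes
        b += lr * (target - pred)

print([1 if xi @ w + b > 0 else 0 for xi in X])  # -> [0, 0, 0, 1]
```

AND is linearly separable, so the rule converges here; the 1969 XOR entry below is exactly the case where no such single-layer solution exists.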
1965: "symbolist" learning techniques based on logical representations
- Many-valued logic
- Computers able to prove theorems in logic
- 1967 kNN, the nearest neighbor algorithm (see the sketch after this list)
- 1969 The XOR problem: Minsky and Papert show that single-layer perceptrons cannot solve it
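A minimal 1-nearest-neighbor classifier in the spirit of the 1967 kNN entry above; the toy points and labels are illustrative assumptions:

```python
import numpy as np

# 1-nearest-neighbor: classify a query point with the label of the
# closest training point under Euclidean distance.
train_X = np.array([[1.0, 1.0], [1.5, 2.0], [5.0, 5.0], [6.0, 5.5]])
train_y = np.array(["A", "A", "B", "B"])

def predict(query):
    dists = np.linalg.norm(train_X - query, axis=1)  # distances to all points
    return train_y[np.argmin(dists)]                 # label of the nearest

print(predict(np.array([1.2, 1.4])))  # -> "A"
print(predict(np.array([5.5, 5.0])))  # -> "B"
```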
Mid-1960s to late 1970s: a quiet period of stagnation
- 1980 The first international machine learning workshop is held at Carnegie Mellon University (CMU) in the United States
1980s: symbolist learning
Representative researchers include Ryszard Michalski, Tom Mitchell, and Jaime Carbonell.
Representative techniques include decision trees and logic-based learning (a toy split-selection sketch follows below). However, the hypothesis space such learners face is vast and its complexity extremely high, so learning becomes ineffective once the problem size grows even moderately; after the mid-1990s, research in this direction went into relative decline.
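To make the decision-tree idea concrete, here is a sketch of its core operation, picking the split threshold that minimizes Gini impurity, on made-up one-dimensional data; this is a generic depth-1 "stump", not a reconstruction of any historical system:

```python
import numpy as np

def gini(labels):
    """Gini impurity: 1 minus the sum of squared class fractions."""
    if len(labels) == 0:
        return 0.0
    _, counts = np.unique(labels, return_counts=True)
    p = counts / len(labels)
    return 1.0 - np.sum(p ** 2)

def best_split(x, y):
    """Scan candidate thresholds; return the one minimizing weighted impurity."""
    best_t, best_score = None, float("inf")
    for t in np.unique(x):
        left, right = y[x <= t], y[x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if score < best_score:
            best_t, best_score = t, score
    return best_t, best_score

x = np.array([1.0, 2.0, 3.0, 7.0, 8.0, 9.0])   # toy feature
y = np.array([0, 0, 0, 1, 1, 1])               # toy labels
print(best_split(x, y))  # -> (3.0, 0.0): a perfect split at x <= 3
```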
1985
- Fuzzy logic is applied in industry and transportation
- 1986 Neural networks make a strong comeback
- Genetic algorithms
Before the mid-1990s: connectionist learning based on neural networks
The underlying neuron model dates back to 1943 (McCulloch and Pitts).
Connectionism advanced rapidly in the 1950s but then hit serious obstacles: as Turing Award winner M. Minsky and S. Papert pointed out in 1969, the neural networks of that time could only handle linearly separable classification and could not cope with even a problem as simple as XOR. In 1983, J. J. Hopfield made major progress by applying neural networks to the famous NP-hard "travelling salesman problem", which brought connectionism back into the spotlight. In 1986, D. E. Rumelhart and others reinvented the celebrated BP (backpropagation) algorithm, which has had a profound influence ever since. Training a neural network involves a great many parameters, and their settings lack theoretical guidance, relying mainly on manual "tuning"; to exaggerate only slightly, a hair's-breadth error in parameter tuning can send the learning result a thousand miles off course.
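The two turning points in this paragraph, the XOR limitation and the BP algorithm, can be demonstrated together: a small two-layer network trained with backpropagation learns XOR. The architecture, learning rate, iteration count, and random seed below are illustrative choices, not taken from the original papers:

```python
import numpy as np

# A two-layer network trained with backpropagation learns XOR,
# the very problem a single-layer perceptron cannot represent.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0.0, 1.0, (2, 4))  # input -> hidden weights
b1 = np.zeros((1, 4))
W2 = rng.normal(0.0, 1.0, (4, 1))  # hidden -> output weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of squared error through the sigmoids
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

pred = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
print(pred.round(2))  # close to [[0], [1], [1], [0]]
```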
Mid-1990s: statistical machine learning
The representative techniques are support vector machines (SVMs) and, more generally, kernel methods. Research in this direction had in fact begun as early as the 1960s and 1970s, and statistical learning is closely related to connectionist learning. Once support vector machines were widely accepted, the kernel trick was applied to almost every corner of machine learning, and kernel methods gradually became one of the field's basic building blocks.
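A minimal illustration of the kernel trick mentioned above: a polynomial kernel evaluated directly in input space returns exactly the inner product of an explicit higher-dimensional feature map, without ever constructing that map. The kernel degree and test vectors are illustrative assumptions:

```python
import numpy as np

def phi(v):
    """Explicit degree-2 feature map for 2-D input (x1, x2)."""
    x1, x2 = v
    return np.array([x1 * x1, x2 * x2,
                     np.sqrt(2) * x1 * x2,
                     np.sqrt(2) * x1, np.sqrt(2) * x2, 1.0])

def poly_kernel(u, v):
    """k(u, v) = (u . v + 1)^2, computed directly in input space."""
    return (np.dot(u, v) + 1.0) ** 2

u, v = np.array([1.0, 2.0]), np.array([3.0, -1.0])
print(np.dot(phi(u), phi(v)))  # 4.0 -- inner product in feature space
print(poly_kernel(u, v))       # 4.0 -- same value, no explicit map
```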
Early 21st century: deep learning
Deep learning, narrowly defined, simply means neural networks with "many layers". Although it lacks a rigorous theoretical foundation, deep learning has markedly lowered the barrier to entry for machine learning users and made it much easier to bring the technology into engineering practice. Its rise has two basic causes: data became big and computing became powerful. Deep models are extremely complex, so when training samples are few they "overfit" very easily; and with models this complex and datasets this large, training is simply infeasible without powerful computing hardware.
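A toy numerical illustration of the overfitting point, assuming NumPy and made-up data: with only a few samples, a high-capacity polynomial fit drives training error toward zero while test error grows:

```python
import numpy as np

# Overfitting in miniature: few samples plus high model capacity.
rng = np.random.default_rng(1)
x_train = np.linspace(0, 1, 8)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, 8)
x_test = np.linspace(0, 1, 100)
y_test = np.sin(2 * np.pi * x_test)   # the true underlying curve

for degree in (3, 7):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_err:.4f}, test MSE {test_err:.4f}")
# The degree-7 fit interpolates the 8 noisy points (train MSE near 0)
# while its test MSE typically blows up: more data or less capacity.
```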
The following content is from Wikipedia.
Overview
| Decade | Summary |
|---|---|
| <1950s | Statistical methods are discovered and refined. |
| 1950s | Pioneering machine learning research is conducted using simple algorithms. |
| 1960s | Bayesian methods are introduced for probabilistic inference in machine learning.[1] |
| 1970s | 'AI winter' caused by pessimism about machine learning effectiveness. |
| 1980s | Rediscovery of backpropagation causes a resurgence in machine learning research. |
| 1990s | Work on machine learning shifts from a knowledge-driven approach to a data-driven approach. Scientists begin creating programs for computers to analyze large amounts of data and draw conclusions – or "learn" – from the results.[2] Support vector machines (SVMs) and recurrent neural networks (RNNs)[3] become popular. The fields of computational complexity via neural networks[4] and super-Turing computation started. |
| 2000s | Support vector clustering[5] and other kernel methods[6] and unsupervised machine learning methods become widespread.[7] |
| 2010s | Deep learning becomes feasible, which leads to machine learning becoming integral to many widely used software services and applications. |
Timeline
| Year | Event type | Caption | Event |
|---|---|---|---|
| 1763 | Discovery | The Underpinnings of Bayes' Theorem | Thomas Bayes's work An Essay towards solving a Problem in the Doctrine of Chances is published two years after his death, having been amended and edited by a friend of Bayes, Richard Price.[8] The essay presents work which underpins Bayes' theorem. |
| 1805 | Discovery | Least Squares | Adrien-Marie Legendre describes the "méthode des moindres carrés", known in English as the least squares method.[9] The least squares method is used widely in data fitting. |
| 1812 | | Bayes' Theorem | Pierre-Simon Laplace publishes Théorie Analytique des Probabilités, in which he expands upon the work of Bayes and defines what is now known as Bayes' theorem.[10] |
| 1913 | Discovery | Markov Chains | Andrey Markov first describes techniques he used to analyse a poem. The techniques later become known as Markov chains.[11] |
| 1950 | | Turing's Learning Machine | Alan Turing proposes a 'learning machine' that could learn and become artificially intelligent. Turing's specific proposal foreshadows genetic algorithms.[12] |
| 1951 | | First Neural Network Machine | Marvin Minsky and Dean Edmonds build the first neural network machine, able to learn, the SNARC.[13] |
| 1952 | | Machines Playing Checkers | Arthur Samuel joins IBM's Poughkeepsie Laboratory and begins working on some of the very first machine learning programs, first creating programs that play checkers.[14] |
| 1957 | Discovery | Perceptron | Frank Rosenblatt invents the perceptron while working at the Cornell Aeronautical Laboratory.[15] The invention of the perceptron generated a great deal of excitement and was widely covered in the media.[16] |
| 1963 | Achievement | Machines Playing Tic-Tac-Toe | Donald Michie creates a 'machine' consisting of 304 match boxes and beads, which uses reinforcement learning to play Tic-tac-toe (also known as noughts and crosses).[17] |
| 1967 | | Nearest Neighbor | The nearest neighbor algorithm is created, marking the start of basic pattern recognition. The algorithm was used to map routes.[2] |
| 1969 | | Limitations of Neural Networks | Marvin Minsky and Seymour Papert publish their book Perceptrons, describing some of the limitations of perceptrons and neural networks. The interpretation that the book shows that neural networks are fundamentally limited is seen as a hindrance for research into neural networks.[18][19] |
| 1970 | | Automatic Differentiation (Backpropagation) | Seppo Linnainmaa publishes the general method for automatic differentiation (AD) of discrete connected networks of nested differentiable functions.[20][21] This corresponds to the modern version of backpropagation, but is not yet named as such.[22][23][24][25] |
| 1972 | Discovery | Term Frequency–Inverse Document Frequency (TF-IDF) | Karen Spärck Jones publishes the concept of TF-IDF, a numerical statistic that is intended to reflect how important a word is to a document in a collection or corpus.[26] 83% of text-based recommender systems in the domain of digital libraries use TF-IDF.[27] |
| 1979 | | Stanford Cart | Students at Stanford University develop a cart that can navigate and avoid obstacles in a room.[2] |
| 1980 | Discovery | Neocognitron | Kunihiko Fukushima first publishes his work on the neocognitron, a type of artificial neural network (ANN).[28] The neocognitron later inspires convolutional neural networks (CNNs).[29] |
| 1981 | | Explanation Based Learning | Gerald Dejong introduces Explanation Based Learning, where a computer algorithm analyses data, creates a general rule it can follow, and discards unimportant data.[2] |
| 1982 | Discovery | Recurrent Neural Network | John Hopfield popularizes Hopfield networks, a type of recurrent neural network that can serve as content-addressable memory systems.[30] |
| 1985 | | NetTalk | A program that learns to pronounce words the same way a baby does is developed by Terry Sejnowski.[2] |
| 1986 | Discovery | Backpropagation | The process of backpropagation is described by David Rumelhart, Geoff Hinton and Ronald J. Williams.[31] |
| 1989 | Discovery | Reinforcement Learning | Christopher Watkins develops Q-learning, which greatly improves the practicality and feasibility of reinforcement learning.[32] |
| 1989 | Commercialization | Commercialization of Machine Learning on Personal Computers | Axcelis, Inc. releases Evolver, the first software package to commercialize the use of genetic algorithms on personal computers.[33] |
| 1992 | Achievement | Machines Playing Backgammon | Gerald Tesauro develops TD-Gammon, a computer backgammon program that uses an artificial neural network trained using temporal-difference learning (hence the 'TD' in the name). TD-Gammon is able to rival, but not consistently surpass, the abilities of top human backgammon players.[34] |
| 1995 | Discovery | Random Forest Algorithm | Tin Kam Ho publishes a paper describing random decision forests.[35] |
| 1995 | Discovery | Support Vector Machines | Corinna Cortes and Vladimir Vapnik publish their work on support vector machines.[36][37] |
| 1997 | Achievement | IBM Deep Blue Beats Kasparov | IBM's Deep Blue beats the world champion at chess.[2] |
| 1997 | Discovery | LSTM | Sepp Hochreiter and Jürgen Schmidhuber invent long short-term memory (LSTM) recurrent neural networks,[38] greatly improving the efficiency and practicality of recurrent neural networks. |
| 1998 | | MNIST database | A team led by Yann LeCun releases the MNIST database, a dataset comprising a mix of handwritten digits from American Census Bureau employees and American high school students.[39] The MNIST database has since become a benchmark for evaluating handwriting recognition. |
| 2002 | | Torch Machine Learning Library | Torch, a software library for machine learning, is first released.[40] |
| 2006 | | The Netflix Prize | The Netflix Prize competition is launched by Netflix. The aim of the competition was to use machine learning to beat Netflix's own recommendation software's accuracy in predicting a user's rating for a film given their ratings for previous films by at least 10%.[41] The prize was won in 2009. |
| 2009 | Achievement | ImageNet | ImageNet is created. ImageNet is a large visual database envisioned by Fei-Fei Li from Stanford University, who realized that the best machine learning algorithms wouldn't work well if the data didn't reflect the real world.[42] For many, ImageNet was the catalyst for the AI boom[43] of the 21st century. |
| 2010 | | Kaggle Competition | Kaggle, a website that serves as a platform for machine learning competitions, is launched.[44] |
| 2011 | Achievement | Beating Humans in Jeopardy | Using a combination of machine learning, natural language processing and information retrieval techniques, IBM's Watson beats two human champions in a Jeopardy! competition.[45] |
| 2012 | Achievement | Recognizing Cats on YouTube | The Google Brain team, led by Andrew Ng and Jeff Dean, creates a neural network that learns to recognize cats by watching unlabeled images taken from frames of YouTube videos.[46][47] |
| 2014 | | Leap in Face Recognition | Facebook researchers publish their work on DeepFace, a system that uses neural networks to identify faces with 97.35% accuracy. The results are an improvement of more than 27% over previous systems and rival human performance.[48] |
| 2014 | | Sibyl | Researchers from Google detail their work on Sibyl,[49] a proprietary platform for massively parallel machine learning used internally by Google to make predictions about user behavior and provide recommendations.[50] |
| 2016 | Achievement | Beating Humans in Go | Google's AlphaGo program becomes the first Computer Go program to beat an unhandicapped professional human player[51] using a combination of machine learning and tree search techniques.[52] Later improved as AlphaGo Zero and then in 2017 generalized to chess and more two-player games with AlphaZero. |
References
- https://en.wikipedia.org/wiki/History_of_artificial_intelligence
- https://en.wikipedia.org/wiki/Timeline_of_machine_learning
- "Six Snippets of Python Code that Trace the History of Deep Learning: From Least Squares to Deep Neural Networks" (6段Python代码刻画深度学习历史): https://zhuanlan.zhihu.com/p/33089702
- https://www.leiphone.com/news/201609/jzfVzJ49LIWJegku.html
dzzxjl