-
The research groups of Microsoft, Google, IBM, and Hinton's lab show results from their work on neural nets.
-
By Krizhevsky, Sutskever, and Hinton. They create an entry for the ILSVRC (ImageNet Large Scale Visual Recognition Challenge). It was the climax of deep learning's ascent.
-
Navdeep Jaitly works on Google's speech recognition system; this work improves Android's speech recognition algorithm.
-
Andrew Ng and Jeff Dean create Google Brain to experiment with neural nets using very large numbers of CPU cores.
-
By Raina, Madhavan, and Ng. It suggests that unsupervised learning for speech recognition is 70 times faster using GPUs.
-
By Bengio et al. It argues that deep machine learning methods are more efficient for difficult problems than shallow methods.
-
By Hinton, Osindero, and Teh. A breakthrough significant enough to rekindle interest in neural nets.
-
By Hinton. It showed that Restricted Boltzmann Machines can be trained efficiently.
-
Introduced by Hochreiter and Schmidhuber.
-
This type of machine was born in "The wake-sleep algorithm for unsupervised neural networks".
-
By Sebastian Thrun. It demonstrated problems with the TD-Gammon (reinforcement learning) approach.
-
By the modern giant of deep learning, Yoshua Bengio.
-
An AI winter began when Support Vector Machines appeared.
-
It was treated in the PhD thesis "Reinforcement learning for robots using neural networks".
-
By Bengio. It explains the general failure of Recurrent Neural Nets (RNNs) to learn long-term dependencies.
-
Thanks to Radford M. Neal in "Connectionist learning of belief networks". These nets are like Boltzmann Machines but with layers.
-
LeCun's CNN system is used to read 10 to 20% of all the checks in the U.S.
-
It mathematically proved that multiple layers allow neural nets to implement, in theory, any function.
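For reference, here is a hedged, informal statement of that result in its single-hidden-layer form (the exact conditions on the activation sigma vary between the Cybenko and Hornik versions): for any continuous f on a compact set K and any epsilon > 0,

    \exists\, N,\; w_i, b_i \in \mathbb{R},\; v_i \in \mathbb{R}^n :\quad
    \sup_{x \in K} \left| f(x) - \sum_{i=1}^{N} w_i \, \sigma\!\left( v_i^{\top} x + b_i \right) \right| < \varepsilon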
-
Yann LeCun et al. at AT&T Bell Labs.
-
At CMU's Navlab project, "ALVINN: An autonomous land vehicle in a neural network" is created.
-
By Waibel, Hanazawa, Hinton, Shikano, and Lang. A closer look at speech recognition begins with this article.
-
-
David Rumelhart, Geoffrey Hinton, and Ronald Williams publish this paper, which addresses the problems with Perceptrons raised by Minsky.
-
It was discussed in the analysis of backpropagation by Rumelhart, Hinton, and Williams.
-
The idea of Autoencoders is discussed in the analysis of backpropagation
-
Boltzmann Machines are networks much like neural nets, with units very similar to Perceptrons; these units are stochastic, meaning they behave according to a probability distribution.
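To make "stochastic unit" concrete, here is a minimal Python sketch of a single Boltzmann-machine-style unit; the weights and inputs are made up for illustration, and a real Boltzmann machine repeatedly resamples many such interconnected units:

    import math
    import random

    def unit_state(inputs, weights, bias):
        # The unit turns on with probability given by the logistic
        # function of its total weighted input (illustrative form).
        total_input = sum(w * x for w, x in zip(weights, inputs)) + bias
        p_on = 1.0 / (1.0 + math.exp(-total_input))
        return 1 if random.random() < p_on else 0

    # Unlike a Perceptron, repeated calls with the same input can differ.
    print([unit_state([1, 0], [0.5, -0.3], 0.1) for _ in range(10)])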
-
Paul Werbos publishes on using backpropagation in neural nets.
-
Canadian Institute For Advanced Research
-
Paul Werbos proposes using backpropagation in neural networks.
-
Seppo Linnainmaa implements backpropagation (as reverse-mode automatic differentiation) on a computer for the first time.
-
No funding for research.
-
Marvin Minsky and Seymour Papert write "Perceptrons", explaining their criticisms of Perceptrons in neural nets.
-
Bernard Widrow and Ted Hoff demonstrate that Adaptive Linear Neurons can be implemented in electrical circuits using chemical memistors.
-
Frank Rosenblatt's Perceptron artificial neuron model is conceived as a simplified mathematical model of how neurons work.
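Since the entry describes the Perceptron as a simplified mathematical model, here is a minimal Python sketch of that model; the weights, bias, and the AND example are made up for illustration:

    def perceptron(inputs, weights, bias):
        # Output 1 when the weighted sum of the inputs crosses the
        # threshold (here folded into the bias), otherwise output 0.
        activation = sum(w * x for w, x in zip(weights, inputs)) + bias
        return 1 if activation > 0 else 0

    # Example: hand-picked weights that make the unit compute logical AND.
    for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        print(x, perceptron(x, [1.0, 1.0], -1.5))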
-
Marvin Minsky implements the first hardware neural net, SNARC (Stochastic Neural Analog Reinforcement Calculator).
-