The history of deep learning is often traced to the 1943 paper "A Logical Calculus of the Ideas Immanent in Nervous Activity" by Warren McCulloch and Walter Pitts.

This paper is considered a seminal work in the history of deep learning: it introduced a simple mathematical model of the neuron, the threshold logic unit, which is the conceptual ancestor of modern architectures such as feedforward, autoencoder, and recurrent neural networks.

Although the brain is a complicated, computationally complex system, deep learning researchers and practitioners today have benefited most directly from advances in machine learning and image recognition.

The history of deep learning runs through image classification problems in many different fields, from early unsupervised image classification, through preprocessing steps such as resizing and rescaling, to classical methods like the support vector machine.

A Brief Introduction to Deep Learning History

Deep Learning History (1996)

This period opens with a PhD thesis by Geoffrey Hinton, published in 1996 (Hinton 1998).

The central idea was that neural networks could discover representations of data, much as the brain does, by learning functions from examples during training.

This was in the spirit of artificial neuron models used in computational neuroscience, in which neurons are trained to encode various features, such as the orientation of edges in the visual field.
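
To make "learning a representation" concrete, the sketch below illustrates the general idea; it is not code from any work cited here, and the data and dimensions are invented for the example. A tiny one-hidden-layer autoencoder in numpy is taught to reproduce its own input through a two-unit bottleneck, so the hidden layer is forced to encode the features along which the data varies.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data: 200 eight-dimensional points lying near a two-dimensional subspace.
    basis = rng.normal(size=(2, 8))
    X = rng.normal(size=(200, 2)) @ basis + 0.01 * rng.normal(size=(200, 8))

    n_in, n_hidden = 8, 2                      # bottleneck narrower than the input
    W_enc = rng.normal(scale=0.1, size=(n_in, n_hidden))
    W_dec = rng.normal(scale=0.1, size=(n_hidden, n_in))
    lr = 0.05

    for step in range(5000):
        H = np.tanh(X @ W_enc)                 # encode: the learned representation
        X_hat = H @ W_dec                      # decode: reconstruct the input
        err = X_hat - X
        # Gradient descent on the mean squared reconstruction error.
        grad_dec = H.T @ err / len(X)
        grad_enc = X.T @ ((err @ W_dec.T) * (1.0 - H**2)) / len(X)
        W_dec -= lr * grad_dec
        W_enc -= lr * grad_enc

    print("reconstruction MSE:", float(np.mean(err**2)))

Because two hidden units cannot memorize eight-dimensional inputs, gradient descent drives them to capture the underlying directions of variation, which is exactly the kind of learned representation described above.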

Another mathematical tradition explored the architecture of neural networks with the goal of exploiting structure in the graph (cf. Dutt 1997). Andrew Ng and Yann LeCun (2012) developed a bold, structured early approach in which each hidden layer was represented by an automaton, in the spirit of models trained on the MNIST digits.

In 1998, Yann LeCun and colleagues at AT&T Labs implemented a deep convolutional network, LeNet, that recognized handwritten characters with unprecedented accuracy.

Deep learning has since exploded into a vibrant field of artificial intelligence.

A typical search engine now involves dozens of giant deep learning systems serving more than a billion searches each day, alongside machine learning projects in other domains such as speech recognition, machine translation, and robotics.

However, it hasn’t stopped there. In the past few years, neural networks have grown very large, even in the laboratory.

A few thousand machines can now process and answer a million questions a day, and some systems handle several million per day.

The challenge has been to somehow scale these beasts.

Deep Learning History (2000s)

The history continues with a presentation by Matthew Wilson of Microsoft Research, in which participants learn about the approaches that exist in deep learning, their history, and their contributions.

They learn not only about the deep learning field itself, but also about what else it makes possible.

The History of Deep Learning focuses on deep learning in an enterprise environment and on how different training methods, algorithms, and approaches have contributed to its success over the last few decades.

The committee speaks with Peter Chen, Paul Werbos and Jaap Verheij of NVIDIA, with Anders Sjöström and Tom Demeter of Google Cloud, and with Johan Cybulski and Arnaud Gabriel of Adobe.