1969
Perceptrons
Seymour Papert (1928–2016), Marvin Minsky (1927–2016)
By the late 1940s, some computer scientists thought that the way to achieve human-level problem solving would be to create artificial neurons, borrowing a simplified model of how the human brain works, and wire them together in some kind of network. An early demonstration was the Stochastic Neural Analog Reinforcement Calculator (SNARC), a network of 40 artificial neurons that learned how to solve a maze, created in 1951 by Marvin Minsky, then a first-year graduate student at Princeton University.
Following SNARC, researchers throughout the world took up the idea of artificial neural networks. The most significant effort was at the Cornell Aeronautical Laboratory, where in 1958 Frank Rosenblatt (1928–1971) built a massive machine, which he called a perceptron, that “learned” how to recognize images.
Minsky, though, gave up on neural networks in the 1950s and instead pursued symbolic artificial intelligence, an approach that aims to mirror higher-level human thought by representing knowledge with symbols and rules. He moved to MIT and was joined in 1963 by Seymour Papert, an expert in the field of learning.
Annoyed by the attention (and perhaps funding) that neural networks continued to attract throughout the 1960s, Papert and Minsky wrote the book Perceptrons: An Introduction to Computational Geometry, published in 1969, in which they mathematically proved that there were fundamental limits to what artificial neural networks could compute. Perceptrons was so persuasive that researchers around the world (and at many funding agencies) simply gave up on neural networks and moved on to other ideas; the book was credited with single-handedly destroying the field of artificial neural networks.
Papert and Minsky, however, had only proved limits for a very specific kind of artificial neural network, one that had just a single layer of neurons. A classic example of those limits is the exclusive-or (XOR) function, which no single-layer perceptron can compute because its outputs are not linearly separable. A few researchers who stayed with the idea eventually figured out how to efficiently train multilayer neural networks using the backpropagation algorithm, and by the 1990s, computers were finally fast enough that neural networks with multiple hidden layers were solving complex problems that could not be solved symbolically. Today, neural networks are the dominant approach used in AI.
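The single-layer limit is easy to see in code. The following is a minimal sketch, not material from the book, and the hidden-layer size, learning rate, and iteration counts are arbitrary illustrative choices: a single-layer perceptron trained with Rosenblatt’s learning rule can never get XOR right, while a small two-layer network trained with backpropagation typically does.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)  # XOR truth table

# Single-layer perceptron trained with Rosenblatt's learning rule.
w, b = np.zeros(2), 0.0
for _ in range(1000):
    for xi, yi in zip(X, y):
        pred = 1.0 if xi @ w + b > 0 else 0.0
        w += (yi - pred) * xi  # nudge the separating line toward the example
        b += (yi - pred)
print("single layer:", (X @ w + b > 0).astype(int))
# No weights can yield [0 1 1 0]: no single line separates XOR's classes.

# The same problem with one hidden layer, trained by backpropagation
# (plain gradient descent on squared error; sizes and rate are arbitrary).
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
Y, lr = y.reshape(-1, 1), 0.5
for _ in range(10000):
    h = sigmoid(X @ W1 + b1)             # hidden layer
    out = sigmoid(h @ W2 + b2)           # output layer
    d_out = (out - Y) * out * (1 - out)  # error signal at the output
    d_h = (d_out @ W2.T) * h * (1 - h)   # error propagated back to hidden units
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)
print("two layers:  ", (out > 0.5).astype(int).ravel())  # typically [0 1 1 0]
```

The hidden layer lets the network carve the input space with two lines instead of one, which is exactly the capability Papert and Minsky showed a single layer lacks.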
SEE ALSO Watson Wins Jeopardy! (2011), Google Releases TensorFlow (2015), Computer Beats Master at Go (2016)
The cover of the book Perceptrons, designed by Muriel Cooper, shows a problem that is difficult for a perceptron to solve: determining whether a figure is connected, that is, whether there is an unobstructed path between any two given points.