Artificial Intelligence Coined
John McCarthy (1927–2011), Marvin Minsky (1927–2016), Nathaniel Rochester (1919–2001), Claude E. Shannon (1916–2001)
Artificial intelligence (AI) is the science of computers doing things that normally require human intelligence to accomplish. The term was coined in 1955 by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon in their proposal for the “Dartmouth Summer Research Project on Artificial Intelligence,” a two-month, 10-person institute that was held at Dartmouth College during the summer of 1956.
Today we consider the authors of the proposal to be the “founding fathers” of AI. Their primary interest was to lay the groundwork for a future generation of machines that would use abstraction to mirror the way humans think. The founding fathers set off a myriad of research projects, including attempts to understand written language, solve logic problems, describe visual scenes, and, more generally, replicate anything that a human brain could do.
The term artificial intelligence has gone in and out of vogue over the years, with people interpreting the concept in different ways. Computer scientists have used the term to describe academic pursuits such as computer vision, robotics, and planning, whereas the public—and popular culture—has tended to focus on science-fiction applications such as machine cognition and self-awareness. On Star Trek (“The Ultimate Computer,” 1968), the AI-based M5 computer could run a starship without a human crew—and then quickly went berserk and started destroying other starships during a training exercise. The Terminator movies presented Skynet as a global AI network bent on destroying all of humanity.
Only recently has AI come to be accepted in the public lexicon as a legitimate technology with practical applications. The reason is the success of narrowly focused AI systems that have outperformed humans at tasks requiring exceptional human intelligence. Today AI is divided into many subfields, including machine learning, natural language processing, neural networks, deep learning, and others. For their work on AI, Minsky was awarded the A.M. Turing Award in 1969, and McCarthy in 1971.
SEE ALSO Rossum’s Universal Robots (1920), Metropolis (1927), Isaac Asimov’s Three Laws of Robotics (1942), HAL 9000 Computer (1968), Japan’s Fifth Generation Computer Systems (1981), Home-Cleaning Robot (2002), Artificial General Intelligence (AGI) (~2050), The Limits of Computation? (~9999)
“Artificial intelligence allows computers to do things that normally require human intelligence, such as recognizing patterns, classifying objects, and learning.”