Artificial Intelligence History Software Engineering

Artificial Intelligence (AI) Coined – 1955 AD

Return to Timeline of the History of Computers


Artificial Intelligence Coined

John McCarthy (1927–2011), Marvin Minsky (1927–2016), Nathaniel Rochester (1919–2001), Claude E. Shannon (1916–2001)

“Artificial intelligence (AI) is the science of computers doing things that normally require human intelligence to accomplish. The term was coined in 1955 by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon in their proposal for the ‘Dartmouth Summer Research Project on Artificial Intelligence,’ a two-month, 10-person institute that was held at Dartmouth College during the summer of 1956.

Today we consider the authors of the proposal to be the “founding fathers” of AI. Their primary interest was to lay the groundwork for a future generation of machines that would use abstraction to let them mirror the way humans think. So the founding fathers set off a myriad of research projects, including attempts to understand written language, solve logic problems, describe visual scenes, and pretty much replicate anything that a human brain could do.

The term artificial intelligence has gone in and out of vogue over the years, with people interpreting the concept in different ways. Computer scientists defined the term as describing academic pursuits such as computer vision, robotics, and planning, whereas the public—and popular culture—has tended to focus on science-fiction applications such as machine cognition and self-awareness. On Star Trek (“The Ultimate Computer,” 1968), the AI-based M5 computer could run a starship without a human crew—and then quickly went berserk and started destroying other starships during a training exercise. The Terminator movies presented Skynet as a global AI network bent on destroying all of humanity.

Only recently has AI come to be accepted in the public lexicon as a legitimate technology with practical applications. The reason is the success of narrowly focused AI systems that have outperformed humans at tasks that require exceptional human intelligence. Today AI is divided into many subfields, including machine learning, natural language processing, neural networks, deep learning, and others. For their work on AI, Minsky was awarded the A.M. Turing Award in 1969, and McCarthy in 1971.”

SEE ALSO Rossum’s Universal Robots (1920), Metropolis (1927), Isaac Asimov’s Three Laws of Robotics (1942), HAL 9000 Computer (1968), Japan’s Fifth Generation Computer Systems (1981), Home-Cleaning Robot (2002), Artificial General Intelligence (AGI) (~2050), The Limits of Computation? (~9999)

“Artificial intelligence allows computers to do things that normally require human intelligence, such as recognizing patterns, classifying objects, and learning.”

Fair Use Source: B07C2NQSPV

Artificial Intelligence History Software Engineering

Computer Speech Recognition – 1952 AD



Computer Speech Recognition

“The automatic digit recognition system, also known as Audrey, was developed by Bell Labs in 1952. Audrey was a milestone in the quest to enable computers to recognize and respond to human speech.

Audrey was designed to recognize the spoken digits 0 through 9 and provide feedback with a series of flashing lights associated with a specific digit. Audrey’s accuracy was speaker dependent, because to work, it first had to “learn” the unique sounds emitted by an individual person for reference material. Audrey’s accuracy was around 80 percent with one designer’s voice. Speaker-independent recognition would not arrive for many more years; modern examples include Amazon Echo with Alexa and Apple Siri.

To create the reference material, the speaker would slowly recite the digits 0 through 9 into an everyday telephone, pausing at least 350 milliseconds between each number. The sounds were then sorted into electrical classes and stored in analog memory. The pauses were needed because at the time, speech-recognition systems had not solved coarticulation—the phenomenon of speakers phonetically linking words as they naturally morph from one to another. That is, it was easier for the system to isolate and recognize individual words than words said together.

Once trained, Audrey could match new spoken digits with the sounds stored in its memory: the computer would flash a light corresponding to a particular digit when it found a match.
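In modern terms, Audrey was doing speaker-dependent template matching: each digit's training utterance becomes a stored reference, and a new utterance is assigned to the closest reference. The sketch below illustrates the idea with made-up two-dimensional "feature" vectors standing in for Audrey's analog sound classes; the numbers and feature choice are hypothetical, not Audrey's actual circuitry.

```python
import math

def nearest_template(features, templates):
    """Classify an utterance by comparing its feature vector against
    stored per-digit templates, returning the closest digit
    (speaker-dependent template matching, in miniature)."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(templates, key=lambda digit: distance(features, templates[digit]))

# Hypothetical per-digit feature vectors "learned" from one speaker.
templates = {0: [1.0, 0.2], 1: [0.3, 0.9], 2: [0.7, 0.7]}

print(nearest_template([0.35, 0.85], templates))  # → 1 (closest to digit 1's template)
```

A light flashing for the winning digit corresponds here to the returned key; a real system would extract features from audio rather than take them as given.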

While various economic and technical practicalities prevented Audrey from going into production (including its specialized hardwired circuitry and high power consumption), Audrey was nevertheless an important building block in advancing speech recognition. Audrey showed that, in theory, the technique could be used to automate speaker input for things such as account numbers, Social Security numbers, and other kinds of numerical information.

Ten years later, IBM demonstrated the “Shoebox,” a machine capable of recognizing 16 spoken words, at the 1962 World’s Fair in Seattle, Washington.”

SEE ALSO Electronic Speech Synthesis (1928)

“The automatic digit recognition system was the forerunner of many popular applications today, including smartphones that can recognize voice commands.”

Fair Use Source: B07C2NQSPV

History Software Engineering

Turing Test of Artificial Intelligence (AI) – 1951 AD



The Turing Test

Alan Turing (1912–1954)

““Can machines think?” That’s the question Alan Turing asked in his 1950 paper, “Computing Machinery and Intelligence.” Turing envisioned a day when computers would have as much storage and complexity as a human brain. With that much capacity, he reasoned, it should be possible to program such a wide range of facts and responses that a machine might appear to be intelligent. How, then, Turing asked, could a person know whether a machine was truly intelligent, or merely presenting as such?

Turing’s solution was to devise a test of machine intelligence. The mark of intelligence, Turing argued, was not the ability to multiply large numbers or play chess, but to engage in a natural conversation with another intelligent being.

In Turing’s test, a human, playing the role of an interrogator, is able to communicate (in what we would now call a chat room) with two other entities: another human and a computer. The interrogator’s job is to distinguish the human from the computer; the computer’s goal is to convince the interrogator that it is a person, and that the other person is merely a simulation of intelligence. If a computer could pass such a test, Turing wrote, then there would be as much reason to assume that it was conscious as there would be to assume that any human was conscious. According to Turing, the easiest way to create a computer that could pass his test would be to build one that could learn and then teach it from “birth” as if it were a child.

In the years that followed, programs called chatbots, capable of conducting conversations, appeared to pass the test by fooling unsuspecting humans into thinking they were intelligent. The first of these, ELIZA, was invented in 1966 by MIT professor Joseph Weizenbaum (1923–2008). In one case, ELIZA was left running on a teletype, and a visitor to Weizenbaum’s office thought he was text-chatting with Weizenbaum at his home office, rather than with an artificial intelligence (AI) program. According to experts, however, ELIZA didn’t pass the Turing test because the visitor wasn’t told in advance that the “person” at the other end of the teleprinter might be a computer.”
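ELIZA-style chatbots work by matching keyword patterns in the user's sentence and echoing fragments back inside canned templates. The rules below are illustrative inventions, not Weizenbaum's original DOCTOR script, but they show why such a program can feel conversational without understanding anything.

```python
import re

# A minimal ELIZA-style responder: keyword rules that echo back part of
# the user's sentence. These rules are hypothetical examples.
RULES = [
    (re.compile(r"\bI am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.I), "What makes you feel {0}?"),
    (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {0}."),
]

def respond(sentence):
    """Return a reply from the first matching rule, or a stock prompt."""
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(match.group(1).rstrip("."))
    return "Please go on."  # stock reply when nothing matches

print(respond("I am worried about the Turing test"))
# → Why do you say you are worried about the Turing test?
```

The default "Please go on." is what keeps the illusion alive when no keyword fires, which is much of why unsuspecting users read intelligence into the exchange.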

SEE ALSO ELIZA (1965), Computer Is World Chess Champion (1997), Computer Beats Master at Game of Go (2016)

“In the movie Blade Runner, starring Harrison Ford, the fictional Voight-Kampff test can distinguish a human from a “replicant” by measuring eye dilation during a stressful conversation.”

Fair Use Source: B07C2NQSPV

Artificial Intelligence History

Electronic Speech Synthesis – 1928 A.D.



Electronic Speech Synthesis

Homer Dudley (1896–1980)

“Long before Siri®, Alexa, Cortana, and other synthetic voices were reading emails, telling people the time, and giving driving directions, research scientists were exploring approaches to make a person’s voice take up less bandwidth as it moved through the phone system.

In 1928, Homer Dudley, an engineer at Bell Telephone Labs, developed the vocoder, a process to compress human speech into intelligible electronic transmissions and create synthetic speech from scratch at the other end by imitating the sounds of the human vocal cords. The vocoder analyzes real speech and reassembles it as a simplified electronic impression of the original waveform. To recreate the sound of human speech, it uses sound from an oscillator, a gas discharge tube (for the hissing sounds), filters, and other components.
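The analysis/synthesis split Dudley described can be sketched in a toy form: the analyzer reduces speech to a handful of coarse per-band energy levels (the compact "impression" that travels over the wire), and the synthesizer imposes those levels on locally generated carrier bands (buzz and hiss sources). The band values below are hypothetical numbers, not real audio processing.

```python
def analyze(band_energies):
    """Quantize each frequency band's energy coarsely; a channel vocoder
    transmits this compact envelope instead of the full waveform."""
    return [round(e, 1) for e in band_energies]

def synthesize(envelope, carrier_bands):
    """Recreate a speech-like signal by scaling each locally generated
    carrier band (oscillator buzz or noise hiss) by the envelope level."""
    return [level * carrier for level, carrier in zip(envelope, carrier_bands)]

envelope = analyze([0.82, 0.14, 0.47])  # measured at the talker's end
carrier = [1.0, 1.0, 1.0]               # oscillator/noise bands at the receiver
print(synthesize(envelope, carrier))    # → [0.8, 0.1, 0.5]
```

The bandwidth saving comes from transmitting only the slowly varying envelope per band, which is also why swapping the carrier yields the robot-voice effects the vocoder later became famous for in music.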

In 1939, the renamed Bell Labs unveiled the speech synthesizer at the New York World’s Fair. Called the Voder, it was manually operated by a human, who used a series of keys and foot pedals to generate the hisses, tones, and buzzes, forming vowels, consonants, and ultimately recognizable speech.

The vocoder followed a different path of technology development than the Voder. In 1939, with war having already broken out in Europe, Bell Labs and the US government became increasingly interested in developing some kind of secure voice communication. After additional research, the vocoder was modified and used in World War II as the encoder component of a highly sensitive secure voice system called SIGSALY that Winston Churchill used to speak with Franklin Roosevelt.

Then, taking a sharp turn in the 1960s, the vocoder made the leap into music and pop culture. It was and continues to be used for a variety of sounds, including electronic melodies and talking robots, as well as voice-distortion effects in traditional music. In 1961, the first computer to sing was the International Business Machines Corporation (IBM®) 7094, using a vocoder to warble the tune “Daisy Bell.” (This was the same tune that would be used seven years later by the HAL 9000 computer in Stanley Kubrick’s 2001: A Space Odyssey.) In 1995, 2Pac, Dr. Dre, and Roger Troutman used a vocoder to distort their voices in the song “California Love,” and in 1998 the Beastie Boys used a vocoded vocal in their song “Intergalactic.””

SEE ALSO “As We May Think” (1945), HAL 9000 Computer (1968)

“The Voder, exhibited by Bell Telephone at the New York World’s Fair.”

Fair Use Source: B07C2NQSPV