Artificial Intelligence History

Computer Is World Chess Champion – 1997 AD

Return to Timeline of the History of Computers


Computer Is World Chess Champion

Garry Kasparov (b. 1963)

“Ever since Alan Turing wrote the first computer chess program in 1950, computer scientists (and the general public) had viewed proficiency at chess as a litmus test for machines’ intelligence. Machines, the thinking went, would be truly intelligent if they could beat a human at chess. When that happened, the challenge then subtly changed: would computers ever be able to beat every human at chess, even a grand master?

That happened in 1997, nearly 50 years later, when IBM’s Deep Blue computer beat world chess champion Garry Kasparov.

Kasparov and Deep Blue played two matches. The first took place in February 1996 in Philadelphia; Kasparov lost the opening game to Deep Blue but still won the match, 4–2. The rematch occurred a year later, in May 1997, when Kasparov lost to Deep Blue with a final score of 3.5 to 2.5 (three games were draws). In an unusual twist, Deep Blue made an unexpected play during game two of the second match, rattling Kasparov and throwing him off his strategy. Kasparov did not know what to make of the move and considered it a sign of superior intelligence. While counterintuitive, Kasparov’s interpretation of Deep Blue’s capabilities highlights the power and weakness of relying on human intuition when playing games of skill.

In fact, Deep Blue’s advantage was brute force, pure and simple. Deep Blue was really a massively parallel program coded in C, running on a UNIX cluster, and capable of computing 200 million possible board positions each second. Deep Blue’s “evaluation function,” which decided which board positions were better, weighed four human-programmed factors: material, the value of each piece; position, the number of squares that buffer a player’s piece from attack; king safety, a number representing how safe the king is, given his location on the board and the positions of the other pieces; and tempo, a player’s success in advancing his or her position over time. Given these factors and the relatively constrained size of the board, chess became a “quantifiable” equation for Deep Blue. As such, the computer could win simply by seeking the best board positions—something it could do faster, and better, than any human.”
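The four factors can be illustrated with a toy evaluation function. This is a simplified sketch in Python, not Deep Blue’s actual code; the weights, inputs, and example numbers are invented for illustration (Deep Blue’s real evaluation combined thousands of hand-tuned features in hardware and C):

```python
# Toy chess position evaluation in the spirit of Deep Blue's four factors.
# All weights and scoring inputs below are illustrative inventions.

PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}  # in pawn units

def evaluate(pieces_white, pieces_black,
             position_white, position_black,
             king_safety_white, king_safety_black,
             tempo_white, tempo_black,
             w_material=1.0, w_position=0.1, w_safety=0.2, w_tempo=0.05):
    """Combine the four factors into one score; positive favors White."""
    material = (sum(PIECE_VALUES[p] for p in pieces_white)
                - sum(PIECE_VALUES[p] for p in pieces_black))
    position = position_white - position_black        # buffered squares
    safety = king_safety_white - king_safety_black    # king-safety scores
    tempo = tempo_white - tempo_black                 # development lead
    return (w_material * material + w_position * position
            + w_safety * safety + w_tempo * tempo)

# Example: White is up a knight for a pawn but slightly behind in tempo.
score = evaluate(["Q", "R", "N", "P", "P"], ["Q", "R", "P", "P", "P"],
                 position_white=12, position_black=10,
                 king_safety_white=3, king_safety_black=3,
                 tempo_white=1, tempo_black=2)
```

A search program then simply prefers moves leading to positions with the highest such score, which is what made raw speed (200 million positions per second) decisive.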

SEE ALSO Computer Beats Master at Go (2016)

Viewers watch world chess champion Garry Kasparov on a television monitor at the start of the sixth and final game of the match against IBM’s Deep Blue computer in New York.

Fair Use Source: B07C2NQSPV


DENDRAL Artificial Intelligence (AI) Research Project – 1965 AD

Joshua Lederberg (1925–2008), Bruce G. Buchanan (dates unavailable), Edward Feigenbaum (b. 1936), Carl Djerassi (1923–2015)

DENDRAL was an early and influential research project in the development of modern AI systems. It helped shift the focus of AI research from developing general intelligence to creating systems tailored to specific domains. It did this by representing experts’ knowledge of chemistry in a form a computer could use, allowing the system of code and data to solve narrowly defined chemistry problems and draw conclusions the way a human expert might, thus earning it the name “expert system.”

DENDRAL started in 1965, when geneticist Joshua Lederberg was looking for a computer-based research platform to further his understanding of organic compounds in support of his exobiology research—the branch of astrobiology that studies the possibility and evolution of life beyond Earth. Lederberg enlisted Stanford assistant professor Edward Feigenbaum, one of the founders of the school’s computer science department; Stanford chemist Carl Djerassi; and virtuoso AI programmer Bruce Buchanan to develop a system that could infer candidate chemical structures from mass-spectrometry data. The project unfolded over roughly 15 years, evolving from a program designed to model scientific reasoning and explain experimental chemistry into a system that chemists could use to generate hypotheses and, eventually, to learn new things about chemistry.

By the end, the project had produced two main components: Heuristic DENDRAL and Meta-DENDRAL. Heuristic DENDRAL combined data from different sources (such as the experts’ core knowledge base of chemistry) to generate candidate chemical structures and the mass spectra that might correspond to them. Meta-DENDRAL was the learning side of the house: it took the output of Heuristic DENDRAL and produced sets of hypotheses that could explain the correlations between chemical structures and their associated mass spectra. For his work on DENDRAL and expert systems, Edward Feigenbaum was awarded the 1994 A.M. Turing Award (shared with Raj Reddy).
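Heuristic DENDRAL is an early instance of the generate-and-test pattern: enumerate candidate structures, then keep only those whose predicted spectra are consistent with the observed data. Below is a minimal sketch in Python; the candidate structures, peak values, and matching rule are invented toy data, not real chemistry:

```python
# Generate-and-test, the pattern behind Heuristic DENDRAL: propose
# candidates, then filter them against observed data using expert rules.

# Candidate structures with the mass-spectrum peaks a (toy) expert rule
# base predicts for each. Names and numbers are illustrative only.
CANDIDATES = {
    "structure_A": {43, 58, 71},
    "structure_B": {29, 43, 58},
    "structure_C": {15, 29, 57},
}

def consistent(predicted_peaks, observed_peaks):
    """Expert 'test' step: every predicted peak must appear in the data."""
    return predicted_peaks <= observed_peaks  # subset check

def generate_and_test(observed_peaks):
    """Return candidate structures whose predictions fit the spectrum."""
    return sorted(name for name, peaks in CANDIDATES.items()
                  if consistent(peaks, observed_peaks))

hypotheses = generate_and_test({29, 43, 58, 71, 86})
```

Meta-DENDRAL’s role, in this analogy, would be to learn the prediction rules themselves from known structure/spectrum pairs, rather than having chemists hand-code them.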

SEE ALSO AI Medical Diagnosis (1975)

Joshua Lederberg in front of exobiology equipment at Stanford.

Fair Use Source: B07C2NQSPV


Turing Test of Artificial Intelligence (AI) – 1950 AD

The Turing Test

Alan Turing (1912–1954)

“Can machines think?” That’s the question Alan Turing asked in his 1950 paper, “Computing Machinery and Intelligence.” Turing envisioned a day when computers would have as much storage and complexity as a human brain. Once computers had that much storage, he reasoned, it should be possible to program such a wide range of facts and responses that a machine might appear to be intelligent. How, then, Turing asked, could a person know whether a machine was truly intelligent, or merely presenting as such?

Turing’s solution was to devise a test of machine intelligence. The mark of intelligence, Turing argued, was not the ability to multiply large numbers or play chess, but to engage in a natural conversation with another intelligent being.

In Turing’s test, a human, playing the role of an interrogator, is able to communicate (in what we would now call a chat room) with two other entities: another human and a computer. The interrogator’s job is to distinguish the human from the computer; the computer’s goal is to convince the interrogator that it is a person, and that the other person is merely a simulation of intelligence. If a computer could pass such a test, Turing wrote, then there would be as much reason to assume that it was conscious as there would be to assume that any human was conscious. According to Turing, the easiest way to create a computer that could pass his test would be to build one that could learn and then teach it from “birth” as if it were a child.
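The protocol can be sketched as a blind trial. The toy harness below in Python is an invented illustration: the canned respondents, the judge’s rule, and the scoring are all hypothetical stand-ins for real conversation.

```python
import random

# Toy imitation-game harness: a judge reads replies from two unlabeled
# respondents and must say which one is the machine. All respondents
# and the judging rule are invented stand-ins for real dialogue.

def human(prompt):
    return "Hmm, let me think about that for a moment."

def machine(prompt):
    return "QUERY RECEIVED. RESPONSE: AFFIRMATIVE."

def run_trial(judge, rng):
    """Return True if the judge correctly identifies the machine."""
    respondents = [("human", human), ("machine", machine)]
    rng.shuffle(respondents)  # the judge cannot see the labels
    replies = [fn("Describe a summer day.") for _, fn in respondents]
    guess = judge(replies)    # index the judge believes is the machine
    return respondents[guess][0] == "machine"

# A naive judge: pick the reply that looks more mechanical (all caps).
def naive_judge(replies):
    return max(range(2), key=lambda i: replies[i].isupper())

accuracy = sum(run_trial(naive_judge, random.Random(s))
               for s in range(100)) / 100
```

A machine “passes” when judges can do no better than chance (accuracy near 0.5); the crude machine above is identified every time, which is exactly why convincing conversation, not computation, was Turing’s benchmark.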

In the years that followed, programs called chatbots, capable of conducting conversations, appeared to pass the test by fooling unsuspecting humans into thinking they were intelligent. The first of these, ELIZA, was invented in 1966 by MIT professor Joseph Weizenbaum (1923–2008). In one case, ELIZA was left running on a teletype, and a visitor to Weizenbaum’s office thought he was text-chatting with Weizenbaum at his home office, rather than with an artificial intelligence (AI) program. According to experts, however, ELIZA didn’t pass the Turing test because the visitor wasn’t told in advance that the “person” at the other end of the teleprinter might be a computer.

SEE ALSO ELIZA (1965), Computer Is World Chess Champion (1997), Computer Beats Master at Game of Go (2016)

“In the movie Blade Runner, starring Harrison Ford, the fictional Voight-Kampff test can distinguish a human from a “replicant” by measuring eye dilation during a stressful conversation.”

Fair Use Source: B07C2NQSPV


AIOps (Artificial Intelligence for IT Operations)

AIOps (artificial intelligence for IT operations) – “AIOps is an umbrella term for the use of big data analytics, machine learning and other AI technologies to automate the identification and resolution of common IT issues.”

Fair Use Source: 809137