
Artificial Intelligence (AI) Coined – 1955 AD



Artificial Intelligence Coined

John McCarthy (1927–2011), Marvin Minsky (1927–2016), Nathaniel Rochester (1919–2001), Claude E. Shannon (1916–2001)

“Artificial intelligence (AI) is the science of computers doing things that normally require human intelligence to accomplish. The term was coined in 1955 by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon in their proposal for the “Dartmouth Summer Research Project on Artificial Intelligence,” a two-month, 10-person institute that was held at Dartmouth College during the summer of 1956.

Today we consider the authors of the proposal to be the “founding fathers” of AI. Their primary interest was to lay the groundwork for a future generation of machines that would use abstraction to let them mirror the way humans think. So the founding fathers set off a myriad of different research projects, including attempts to understand written language, solve logic problems, describe visual scenes, and pretty much replicate anything that a human brain could do.

The term artificial intelligence has gone in and out of vogue over the years, with people interpreting the concept in different ways. Computer scientists defined the term as describing academic pursuits such as computer vision, robotics, and planning, whereas the public—and popular culture—has tended to focus on science-fiction applications such as machine cognition and self-awareness. On Star Trek (“The Ultimate Computer,” 1968), the AI-based M5 computer could run a starship without a human crew—and then quickly went berserk and started destroying other starships during a training exercise. The Terminator movies presented Skynet as a global AI network bent on destroying all of humanity.

Only recently has AI come to be accepted in the public lexicon as a legitimate technology with practical applications. The reason is the success of narrowly focused AI systems that have outperformed humans at tasks that require exceptional human intelligence. Today AI is divided into many subfields, including machine learning, natural language processing, neural networks, deep learning, and others. For their work on AI, Minsky was awarded the A.M. Turing Award in 1969, and McCarthy in 1971.”

SEE ALSO Rossum’s Universal Robots (1920), Metropolis (1927), Isaac Asimov’s Three Laws of Robotics (1942), HAL 9000 Computer (1968), Japan’s Fifth Generation Computer Systems (1981), Home-Cleaning Robot (2002), Artificial General Intelligence (AGI) (~2050), The Limits of Computation? (~9999)

“Artificial intelligence allows computers to do things that normally require human intelligence, such as recognizing patterns, classifying objects, and learning.”

Fair Use Source: B07C2NQSPV


Error-Correcting Codes (ECC) – 1950 AD



Error-Correcting Codes

Richard Hamming (1915–1998)

“After he got his PhD in mathematics and worked on the mathematical modeling for the atomic bomb, Richard Hamming took a job at Bell Telephone Labs, where he worked with Claude Shannon and John Tukey and wrote programs for the laboratory’s computers. Hamming noticed that these digital machines had to perform their calculations perfectly. But they didn’t. According to Hamming, a relay computer that Bell had built for the Aberdeen Proving Ground, a US Army facility in Maryland, had 8,900 relays and typically experienced two or three failures per day. When such a failure occurred, the entire computation would be ruined and need to be restarted from the beginning.

At the time, it was becoming popular for computer designers to devote an extra bit, called a parity bit, to detect errors when data was transmitted or stored. Hamming reasoned that if it was possible to automatically detect errors, it must also be possible to automatically correct them. He figured out how to do this and published his seminal article, “Error Detecting and Error Correcting Codes,” in the April 1950 issue of the Bell System Technical Journal.

Error-correcting codes (ECCs) play a critical role in increasing the reliability of modern computer systems. Without ECCs, whenever there is a minor error on the receipt of data, the sender must retransmit. So modern cellular data systems use ECCs to let the receiver fix those minor errors, without requesting that the sender retransmit a clean copy. Today ECCs are also used to correct errors in stored data. For example, cosmic rays can scramble the bits of a dynamic random access memory (DRAM) chip, so it’s common for internet servers to be protected with ECC memory, allowing them to automatically correct most errors resulting from stray background radiation. Compact discs (CDs) and digital video discs (DVDs) use ECC to make their playback unaffected by surface scratches. And increasingly, ECCs are being incorporated into high-performance wireless communications protocols to reduce the need for data to be resent in the event of noise.

Hamming was awarded the 1968 A.M. Turing Award “for his work on numerical methods, automatic coding systems, and error-detecting and error-correcting codes.””
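The construction Hamming published can be illustrated with the Hamming(7,4) code, which protects 4 data bits with 3 parity bits so that any single flipped bit can be located and fixed. The Python sketch below is an illustration added here, not from the source; it encodes a word, simulates a one-bit transmission error, and corrects it using the parity-check syndrome:

```python
# Hamming(7,4): positions 1..7 of the codeword; the power-of-two
# positions (1, 2, 4) hold parity bits, the rest hold data bits.

def encode(d):
    """d: list of 4 data bits -> 7-bit codeword (positions 1..7)."""
    c = [0] * 8                      # index 0 unused, for 1-based math
    c[3], c[5], c[6], c[7] = d       # data in non-power-of-two slots
    c[1] = c[3] ^ c[5] ^ c[7]        # parity over positions with bit 1 set
    c[2] = c[3] ^ c[6] ^ c[7]        # parity over positions with bit 2 set
    c[4] = c[5] ^ c[6] ^ c[7]        # parity over positions with bit 4 set
    return c[1:]

def correct(word):
    """word: 7-bit list (positions 1..7). Fixes any single flipped bit."""
    c = [0] + list(word)
    syndrome = (c[1] ^ c[3] ^ c[5] ^ c[7]) * 1 \
             + (c[2] ^ c[3] ^ c[6] ^ c[7]) * 2 \
             + (c[4] ^ c[5] ^ c[6] ^ c[7]) * 4
    if syndrome:                     # the syndrome IS the error position
        c[syndrome] ^= 1
    return c[1:]

data = [1, 0, 1, 1]
sent = encode(data)
received = sent.copy()
received[4] ^= 1                     # simulate a one-bit error in transit
fixed = correct(received)
print("error corrected:", fixed == sent)   # error corrected: True
```

This is exactly the advantage over a lone parity bit described above: a parity bit only reports that *some* bit is wrong, while the three overlapping parity checks pinpoint *which* bit, so the receiver can repair it without a retransmission.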

SEE ALSO The Bit (1948)

“The 4-bit Hamming codes (on right) for binary numbers.”

Fair Use Source: B07C2NQSPV


The Bit – Binary Digit 0 or 1 – 1948 AD



The Bit

Claude E. Shannon (1916–2001), John W. Tukey (1915–2000)

“It was the German mathematician Gottfried Wilhelm Leibniz (1646–1716) who first established the rules for performing arithmetic with binary numbers. Nearly 250 years later, Claude E. Shannon realized that a binary digit—a 0 or a 1—was the fundamental, indivisible unit of information.

Shannon earned his PhD from MIT in 1940 and then took a position at the Institute for Advanced Study in Princeton, New Jersey, where he met and collaborated with the institute’s leading mathematicians working at the intersection of computing, cryptography, and nuclear weapons, including John von Neumann, Albert Einstein, Kurt Gödel, and, for two months, Alan Turing.

In 1948, Shannon published “A Mathematical Theory of Communication” in the Bell System Technical Journal. The article was inspired in part by classified work that Shannon had done on cryptography during the war. In it, he created a mathematical definition of a generalized communications system, consisting of a message to be sent, a transmitter to convert the message into a signal, a channel through which the signal is sent, a receiver, and a destination, such as a person or a machine “for whom the message is intended.”

Shannon’s paper introduced the word bit, a binary digit, as the basic unit of information. While Shannon attributed the word to American statistician John W. Tukey, and the word had been used previously by other computing pioneers, Shannon provided a mathematical definition of a bit: rather than just a 1 or a 0, it is information that allows the receiver to limit possible decisions in the face of uncertainty. One of the implications of Shannon’s work is that every communications channel has a theoretical upper bound—a maximum number of bits that it can carry per second. As such, Shannon’s theory has been used to analyze practically every communications system ever developed—from handheld radios to satellite communications—as well as data-compression systems and even the stock market.

Shannon’s work illuminates a relationship between information and entropy, thus establishing a connection between computation and physics. Indeed, noted physicist Stephen Hawking framed much of his analysis of black holes in terms of the ability to destroy information and the problems created as a result.”
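Shannon's measure of information, and the channel-capacity upper bound mentioned above, can be sketched in a few lines of Python (an illustrative example added here, not from the source):

```python
import math

def entropy(probs):
    """Shannon entropy in bits: the average information per symbol
    for a source emitting symbols with the given probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin resolves exactly 1 bit of uncertainty per toss; a biased
# coin resolves less, because its outcome is partly predictable.
print(entropy([0.5, 0.5]))   # 1.0
print(entropy([0.9, 0.1]))   # ~0.47

# The Shannon-Hartley bound on a noisy channel, in bits per second:
# C = bandwidth * log2(1 + signal/noise). For a 3,000 Hz line with a
# signal-to-noise ratio of 1,000, no coding scheme can exceed ~30,000 b/s.
capacity = 3000 * math.log2(1 + 1000)
print(round(capacity))       # 29902
```

The second computation is the "theoretical upper bound" the article refers to: it depends only on the channel's bandwidth and noise, not on any particular encoding.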

SEE ALSO Vernam Cipher (1917), Error-Correcting Codes (1950)

Mathematician and computer scientist Claude E. Shannon.

Fair Use Source: B07C2NQSPV


Boolean Algebra – 1854 AD



Boolean Algebra

George Boole (1815–1864), Claude Shannon (1916–2001)

“George Boole was born into a shoemaker’s family in Lincolnshire, England, and schooled at home, where he learned Latin, mathematics, and science. But Boole’s family landed on hard times, and at age 16 he was forced to support his family by becoming a school teacher—a profession he would continue for the rest of his life. In 1838, he wrote his first of many papers on mathematics, and in 1849 he was appointed as the first professor of mathematics at Queen’s College in Cork, Ireland.

Today Boole is best known for his invention of mathematics for describing and reasoning about logical propositions, what we now call Boolean logic. Boole introduced his ideas in his 1847 monograph, “The Mathematical Analysis of Logic,” and perfected them in his 1854 monograph, “An Investigation of the Laws of Thought.”

Boole’s monographs presented a general set of rules for reasoning with symbols, which today we call Boolean algebra. He created a way—and a notation—for reasoning about what is true and what is false, and how these notions combine when reasoning about complex logical systems. He is also credited with formalizing the mathematical concepts of AND, OR, and NOT, from which all logical operations on binary numbers can be derived. Today many computer languages refer to such numbers as Booleans or simply Bools in recognition of his contribution.

Boole died at the age of 49 from pneumonia. His work was carried on by other logicians but didn’t receive notice in the broader community until 1936, when Claude Shannon, then a graduate student at the Massachusetts Institute of Technology (MIT), realized that the Boolean algebra he had learned in an undergraduate philosophy class at the University of Michigan could be used to describe electrical circuits built from relays. This was a huge breakthrough, because it meant that complex relay circuits could be described and reasoned about symbolically, rather than through trial and error. Shannon’s wedding of Boolean algebra and relays let engineers discover bugs in their diagrams without having to first build the circuits, and it allowed many complex systems to be refactored, replacing them with relay systems that were functionally equivalent but had fewer components.”
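Shannon's insight, that a relay circuit can be checked symbolically rather than by trial and error, can be imitated in a few lines of Python (an illustrative sketch added here, not from the source): build an operation out of AND, OR, and NOT, then verify a claimed equivalence exhaustively over every possible input:

```python
from itertools import product

# XOR built only from AND, OR, and NOT, as Boole's algebra allows:
def xor(a, b):
    return (a and not b) or (not a and b)

# Check De Morgan's law, NOT(a AND b) == (NOT a) OR (NOT b), and the
# XOR construction, over all inputs. Exhaustively testing the truth
# table replaces building and probing the physical relay circuit.
for a, b in product([False, True], repeat=2):
    assert (not (a and b)) == ((not a) or (not b))
    assert xor(a, b) == (a != b)

print("equivalences hold for all inputs")
```

Because a circuit with n relay inputs has only 2^n input combinations, such a check is complete: if the two expressions agree on every row of the truth table, the corresponding circuits are functionally identical, which is what let engineers replace complex relay networks with equivalent but smaller ones.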

SEE ALSO Binary Arithmetic (1703), Manchester SSEM (1948)

“A circuit diagram analyzed using George Boole’s “laws of thought”—what today is called Boolean algebra. Boole’s laws were used to analyze complicated telephone switching systems.”

Fair Use Source: B07C2NQSPV