Categories
Artificial Intelligence, Cloud, History, Software Engineering

Artificial General Intelligence (AGI)

~2050

Artificial General Intelligence (AGI)

“The definition and metric that determine whether computers have achieved human intelligence are controversial among the AI community. Gone is the reliance on the Turing test — programs can pass the test today, and they are clearly not intelligent.

So how can we determine the presence of true intelligence? Some measure it against the ability to perform complex intellectual tasks, such as carrying out surgery or writing a best-selling novel. These tasks require an extraordinary command of natural language and, in some cases, manual dexterity. But none of these tasks require that computers be sentient or have sapience—the capacity to experience wisdom. Put another way, would human intelligence be met only if a computer could perform a task such as carrying out a conversation with a distraught individual and communicating warmth, empathy, and loving behavior—and then in turn receive feedback from the individual that stimulates those feelings within the computer as well? Is it necessary to experience emotions, rather than simulate the experience of emotions? There is no correct answer to this, nor is there a fixed definition of what constitutes “intelligence.”

The year chosen for this entry is based upon broad consensus among experts that, by 2050, many complex human tasks that do not require cognition and self-awareness in the traditional biochemical sense will have been achieved by AI. Artificial general intelligence (AGI) comes next. AGI is the term often ascribed to the state in which computers can reason and solve problems like humans do, adapting and reflecting upon decisions and potential decisions in navigating the world—kind of like how humans rely on common sense and intuition. “Narrow AI,” or “weak AI,” which we have today, is understood as computers meeting or exceeding human performance in speed, scale, and optimization in specific tasks, such as high-volume investing, traffic coordination, diagnosing disease, and playing chess, but without the cognition and emotional intelligence.

The year 2050 is based upon the expected realization of certain advances in hardware and software capacity necessary to perform computationally intense tasks as the measure of AGI. Limitations in progress thus far are also a result of limited knowledge about how the human brain functions, where thought comes from, and the role that the physical body and chemical feedback loops play in the output of what the human brain can do.”

SEE ALSO The “Mechanical Turk” (1770), The Turing Test (1951)

Artificial general intelligence refers to the ability of computers to reason and solve problems like humans do, in a way that’s similar to how humans rely on common sense and intuition.

Fair Use Source: B07C2NQSPV

Categories
Artificial Intelligence, History

CAPTCHA – Completely Automated Public Turing test to tell Computers and Humans Apart – 2003 AD

2003

CAPTCHA

“CAPTCHAs are tests administered by a computer to distinguish a human from a bot, or a piece of software that is pretending to be a person. They were created to prevent programs (more correctly, people using programs) from abusing online services that were created to be used by people. For example, companies that provide free email services to consumers sometimes use a CAPTCHA to prevent scammers from registering thousands of email addresses within a few minutes. CAPTCHAs have also been used to limit spam and restrict editing to internet social media pages.

CAPTCHA stands for Completely Automated Public Turing test to tell Computers and Humans Apart. The term was coined in 2003 by computer scientists at Carnegie Mellon; however, the technique itself dates to patents filed in 1997 and 1998 by two separate teams (one at Sanctum, an application security company later acquired by IBM, and one at AltaVista) that describe the technique in detail.

One clever application of CAPTCHAs is to improve and speed up the digitization of old books and other paper-based text material. The reCAPTCHA program takes words that are illegible to OCR (Optical Character Recognition) technology when scanned and uses them as the puzzles to be retyped. Licensed to Google, this approach helps improve the accuracy of Google’s book-digitizing project by having humans provide “correct” recognition of words too fuzzy for current OCR technology. Google can then use the images and human-provided recognition as training data for further improving its automated systems.

As AI has improved, so has the ability of machines to solve CAPTCHA puzzles, creating a sort of arms race between the designers of the puzzles and the designers of programs that solve them. Different approaches have evolved over the years to create puzzles that are hard for computers but easy for people. For example, one of Google’s CAPTCHAs simply asks users to click a box that says “I am not a robot”—meanwhile, Google’s servers analyze the user’s mouse movements, examine the cookies, and even review the user’s browsing history to make sure the user is legitimate. Techniques to break or get around CAPTCHA puzzles also drive the improvement and evolution of CAPTCHA. One manual example of this is the use of “digital sweatshop workers” who type CAPTCHA solutions for human spammers, reducing the effectiveness of CAPTCHAs at limiting the abuse of computer resources.”
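
As a rough illustration of the challenge-and-response flow described above, here is a minimal Python sketch of a text CAPTCHA service. The function and field names are invented for this example rather than taken from any real CAPTCHA library, and a production system would render the answer as a distorted image and keep challenge state on the server.

import hashlib
import os
import random
import string
import time

CHALLENGE_TTL_SECONDS = 120  # assumed expiry window for an unanswered challenge
CHARSET = string.ascii_uppercase + string.digits

def new_challenge(length: int = 6) -> dict:
    # Create a random answer and keep only its hash plus an opaque token.
    answer = "".join(random.choice(CHARSET) for _ in range(length))
    return {
        "token": os.urandom(16).hex(),  # opaque ID the client echoes back
        "answer_hash": hashlib.sha256(answer.encode()).hexdigest(),
        "issued_at": time.time(),
        "answer_text": answer,  # in practice rendered as a distorted image, never sent as text
    }

def verify(challenge: dict, user_reply: str) -> bool:
    # Accept only a correct reply submitted before the challenge expires.
    if time.time() - challenge["issued_at"] > CHALLENGE_TTL_SECONDS:
        return False
    reply_hash = hashlib.sha256(user_reply.strip().upper().encode()).hexdigest()
    return reply_hash == challenge["answer_hash"]

challenge = new_challenge()
print(verify(challenge, challenge["answer_text"]))  # True: the characters were retyped correctly
print(verify(challenge, "WRONG1"))                  # almost certainly False: an incorrect guess is rejected

The security of such a scheme rests entirely on the rendering step producing puzzles that people can read but software cannot, which is exactly the arms race described above.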

SEE ALSO The Turing Test (1951), First Internet Spam Message (1978)

CAPTCHAs require human users to enter a series of characters or take specific actions to prove they are not robots.

Fair Use Source: B07C2NQSPV

Categories
Artificial Intelligence, History, Software Engineering

Artificial Intelligence (AI) Coined – 1955 AD

1955

Artificial Intelligence Coined

John McCarthy (1927–2011), Marvin Minsky (1927–2016), Nathaniel Rochester (1919–2001), Claude E. Shannon (1916–2001)

“Artificial intelligence (AI) is the science of computers doing things that normally require human intelligence to accomplish. The term was coined in 1955 by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon in their proposal for the “Dartmouth Summer Research Project on Artificial Intelligence,” a two-month, 10-person institute that was held at Dartmouth College during the summer of 1956.

Today we consider the authors of the proposal to be the “founding fathers” of AI. Their primary interest was to lay the groundwork for a future generation of machines that would use abstraction to let them mirror the way humans think. So the founding fathers set off a myriad of different research projects, including attempts to understand written language, solve logic problems, describe visual scenes, and pretty much replicate anything that a human brain could do.

The term artificial intelligence has gone in and out of vogue over the years, with people interpreting the concept in different ways. Computer scientists defined the term as describing academic pursuits such as computer vision, robotics, and planning, whereas the public—and popular culture—has tended to focus on science-fiction applications such as machine cognition and self-awareness. On Star Trek (“The Ultimate Computer,” 1968), the AI-based M5 computer could run a starship without a human crew—and then quickly went berserk and started destroying other starships during a training exercise. The Terminator movies presented Skynet as a global AI network bent on destroying all of humanity.

Only recently has AI come to be accepted in the public lexicon as a legitimate technology with practical applications. The reason is the success of narrowly focused AI systems that have outperformed humans at tasks that require exceptional human intelligence. Today AI is divided into many subfields, including machine learning, natural language processing, neural networks, deep learning, and others. For their work on AI, Minsky was awarded the A.M. Turing Award in 1969, and McCarthy in 1971.”

SEE ALSO Rossum’s Universal Robots (1920), Metropolis (1927), Isaac Asimov’s Three Laws of Robotics (1942), HAL 9000 Computer (1968), Japan’s Fifth Generation Computer Systems (1981), Home-Cleaning Robot (2002), Artificial General Intelligence (AGI) (~2050), The Limits of Computation? (~9999)

“Artificial intelligence allows computers to do things that normally require human intelligence, such as recognizing patterns, classifying objects, and learning.”

Fair Use Source: B07C2NQSPV

Categories
History, Software Engineering

Turing Test of Artificial Intelligence (AI) – 1951 AD

1951

The Turing Test

Alan Turing (1912–1954)

““Can machines think?” That’s the question Alan Turing asked in his 1950 paper, “Computing Machinery and Intelligence.” Turing envisioned a day when computers would have as much storage and complexity as a human brain. When computers had so much storage, he reasoned, it should be possible to program such a wide range of facts and responses that a machine might appear to be intelligent. How, then, Turing asked, could a person know if a machine was truly intelligent, or merely presenting as such?

Turing’s solution was to devise a test of machine intelligence. The mark of intelligence, Turing argued, was not the ability to multiply large numbers or play chess, but to engage in a natural conversation with another intelligent being.

In Turing’s test, a human, playing the role of an interrogator, is able to communicate (in what we would now call a chat room) with two other entities: another human and a computer. The interrogator’s job is to distinguish the human from the computer; the computer’s goal is to convince the interrogator that it is a person, and that the other person is merely a simulation of intelligence. If a computer could pass such a test, Turing wrote, then there would be as much reason to assume that it was conscious as there would be to assume that any human was conscious. According to Turing, the easiest way to create a computer that could pass his test would be to build one that could learn and then teach it from “birth” as if it were a child.

In the years that followed, programs called chatbots, capable of conducting conversations, appeared to pass the test by fooling unsuspecting humans into thinking they were intelligent. The first of these, ELIZA, was invented in 1966 by MIT professor Joseph Weizenbaum (1923–2008). In one case, ELIZA was left running on a teletype, and a visitor to Weizenbaum’s office thought he was text-chatting with Weizenbaum at his home office, rather than with an artificial intelligence (AI) program. According to experts, however, ELIZA didn’t pass the Turing test because the visitor wasn’t told in advance that the “person” at the other end of the teleprinter might be a computer.”
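
The blinded, text-only protocol Turing described can be made concrete with a short sketch. The Python program below is purely illustrative: the responder functions and labels are invented for this example, the “machine” is a canned one-line chatbot, and the interrogator is whoever is sitting at the keyboard.

import random

def human_respond(question: str) -> str:
    # Stand-in for the hidden human contestant typing at another terminal.
    return input(f"(hidden human, please answer) {question}\n> ")

def machine_respond(question: str) -> str:
    # Stand-in for the chatbot contestant; a real entrant would generate text here.
    return "That is an interesting question. What do you think?"

def imitation_game(rounds: int = 3) -> None:
    # Randomly assign the anonymous labels so the interrogator cannot know
    # in advance which of "A" or "B" is the computer.
    responders = [human_respond, machine_respond]
    random.shuffle(responders)
    contestants = dict(zip(["A", "B"], responders))
    machine_label = "A" if contestants["A"] is machine_respond else "B"

    for _ in range(rounds):
        question = input("Interrogator, ask a question: ")
        for label in ("A", "B"):
            print(f"{label}: {contestants[label](question)}")

    guess = input("Which label is the computer (A/B)? ").strip().upper()
    if guess == machine_label:
        print("Correct identification.")
    else:
        print("The machine fooled the interrogator this round.")

imitation_game()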

SEE ALSO ELIZA (1965), Computer Is World Chess Champion (1997), Computer Beats Master at Game of Go (2016)

“In the movie Blade Runner, starring Harrison Ford, the fictional Voight-Kampff test can distinguish a human from a “replicant” by measuring eye dilation during a stressful conversation.”

Fair Use Source: B07C2NQSPV

Categories
History, Software Engineering

The Bit – Binary Digit 0 or 1 – 1948 AD

1948

The Bit

Claude E. Shannon (1916–2001), John W. Tukey (1915–2000)

“It was the German mathematician Gottfried Wilhelm Leibniz (1646–1716) who first established the rules for performing arithmetic with binary numbers. Nearly 250 years later, Claude E. Shannon realized that a binary digit—a 0 or a 1—was the fundamental, indivisible unit of information.

Shannon earned his PhD from MIT in 1940 and then took a position at the Institute for Advanced Study in Princeton, New Jersey, where he met and collaborated with the institute’s leading mathematicians working at the intersection of computing, cryptography, and nuclear weapons, including John von Neumann, Albert Einstein, Kurt Gödel, and, for two months, Alan Turing.

In 1948, Shannon published “A Mathematical Theory of Communication” in the Bell System Technical Journal. The article was inspired in part by classified work that Shannon had done on cryptography during the war. In it, he created a mathematical definition of a generalized communications system, consisting of a message to be sent, a transmitter to convert the message into a signal, a channel through which the signal is sent, a receiver, and a destination, such as a person or a machine “for whom the message is intended.”

Shannon’s paper introduced the word bit, a binary digit, as the basic unit of information. While Shannon attributed the word to American statistician John W. Tukey, and the word had been used previously by other computing pioneers, Shannon provided a mathematical definition of a bit: rather than just a 1 or a 0, it is information that allows the receiver to limit possible decisions in the face of uncertainty. One of the implications of Shannon’s work is that every communications channel has a theoretical upper bound—a maximum number of bits that it can carry per second. As such, Shannon’s theory has been used to analyze practically every communications system ever developed—from handheld radios to satellite communications—as well as data-compression systems and even the stock market.

Shannon’s work illuminates a relationship between information and entropy, thus establishing a connection between computation and physics. Indeed, noted physicist Stephen Hawking framed much of his analysis of black holes in terms of the ability to destroy information and the problems created as a result.”
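
In standard modern notation (a summary in today’s terms, not formulas reproduced from Shannon’s 1948 paper), the quantities discussed above can be written compactly. The information content, or entropy, of a source that emits symbol i with probability p_i is

H(X) = -\sum_{i} p_i \log_2 p_i \quad \text{bits per symbol},

so a fair coin flip carries exactly one bit. For a channel of bandwidth B hertz with signal-to-noise ratio S/N, the Shannon–Hartley form of the “theoretical upper bound” mentioned above is

C = B \log_2\!\left(1 + \frac{S}{N}\right) \quad \text{bits per second}.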

SEE ALSO Vernam Cipher (1917), Error-Correcting Codes (1950)

Mathematician and computer scientist Claude E. Shannon.

Fair Use Source: B07C2NQSPV

Categories
History

Colossus – 1943 AD

1943

Colossus

Thomas Harold Flowers (1905–1998), Sidney Broadhurst (1893–1969), W. T. Tutte (1917–2002)

“Colossus was the first electronic digital computing machine, designed and successfully used during World War II by the United Kingdom to crack the German High Command military codes. “Electronic” means that it was built with tubes, which made Colossus run more than 500 times faster than the relay-based computing machines of the day. It was also the first computer to be manufactured in quantity.

A total of 10 “Colossi” were clandestinely built at Bletchley Park, Britain’s ultra-secret World War II cryptanalytic center, between 1943 and 1945 to crack the wireless telegraph signals encrypted with a special system developed by C. Lorenz AG, a German electronics firm. After the war the Colossi were destroyed or dismantled for their parts to protect the secret of the United Kingdom’s cryptanalytic prowess.

Colossus was far more sophisticated than the electromechanical Bombe machines that Alan Turing designed to crack the simpler Enigma cipher used by the Germans for battlefield encryption. Whereas Enigma used between three and eight encrypting rotors to scramble characters, the Lorenz system involved 12 wheels, with each wheel adding more mathematical complexity, and thus required a cipher-cracking machine with considerably more speed and agility.

Electronic tubes provided Colossus with the speed that it required. But that speed meant that Colossus needed a similarly fast input system. It used punched paper tape running at 5,000 characters per second, the tape itself moving at 27 miles per hour. Considerable engineering kept the tape properly tensioned, preventing rips and tears.

The agility was provided by a cryptanalysis technique designed by Alan Turing called Turingery, which inferred the cryptographic pattern of each Lorenz cipher wheel, and a second algorithm. The second algorithm, designed by British mathematician W. T. Tutte, determined the starting position of the wheels, which the Germans changed for each group of messages. The Colossi themselves were operated by a group of cryptanalysts that included 272 women from the Women’s Royal Naval Service (WRNS) and 27 men.”
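
As a quick consistency check on the tape figures quoted above, assume the standard teleprinter-tape pitch of ten characters per inch (an assumption; the source does not state the pitch):

5{,}000~\text{char/s} \times 0.1~\text{in/char} = 500~\text{in/s} \approx 41.7~\text{ft/s} \approx 28~\text{mph},

which agrees closely with the roughly 27 miles per hour stated for the moving tape.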

SEE ALSO Manchester SSEM (1948)

The Colossus computing machine was used to read Nazi codes at Bletchley Park, England, during World War II.

Fair Use Source: B07C2NQSPV

Categories
History

Church-Turing Thesis – 1936 AD

1936

Church-Turing Thesis

David Hilbert (1862–1943), Alonzo Church (1903–1995), Alan Turing (1912–1954)

“Computer science theory seeks to answer two fundamental questions about the nature of computers and computation: are there theoretical limits regarding what is possible to compute, and are there practical limits?

American mathematician Alonzo Church and British computer scientist Alan Turing each published an answer to these questions in 1936. They did it by answering a challenge posed by the eminent German mathematician David Hilbert eight years earlier.

Hilbert’s challenge, the Entscheidungsproblem (German for “decision problem”), asked if there was a mathematical procedure—an algorithm—that could be applied to determine if any given mathematical proposition was true or false. Hilbert had essentially asked if the core work of mathematics, the proving of theorems, could be automated.

Church answered Hilbert by developing a new way of describing mathematical functions and number theory called the Lambda calculus. With it, he showed that the Entscheidungsproblem could not be solved in general: there was no general algorithmic procedure for proving or disproving theorems. He published his paper in April 1936.

Turing took a radically different approach: he created a mathematical definition of a simple, abstract machine that could perform computation. Turing then showed that such a machine could in principle perform any computation and run any algorithm—it could even simulate the operation of other machines. Finally, he showed that while such machines could compute almost anything, there was no way to know if a computation would eventually complete, or if it would continue forever. Thus, the Entscheidungsproblem was unsolvable.

Turing went to Princeton University in September 1936 to study with Church, where the two discovered that the radically different approaches were, in fact, mathematically equivalent. Turing’s paper was published in November 1936; he stayed on and completed his PhD in June 1938, with Church as his PhD advisor.”
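
Turing’s unsolvability argument can be sketched in a few lines of modern code. The names below (halts, paradox) are illustrative and not from the 1936 paper; the sketch shows why a hypothetical, always-correct halting oracle leads to a contradiction.

def halts(program, argument) -> bool:
    # Hypothetical oracle: would return True exactly when program(argument)
    # eventually stops. Turing showed no such total, always-correct
    # procedure can exist.
    raise NotImplementedError("no general halting decider can exist")

def paradox(program):
    # Do the opposite of whatever the oracle predicts about running
    # `program` on its own source.
    if halts(program, program):
        while True:  # predicted to halt, so loop forever instead
            pass
    else:
        return  # predicted to loop forever, so halt immediately

# Feeding paradox to itself yields the contradiction: if halts(paradox, paradox)
# returned True, then paradox(paradox) would loop forever; if it returned False,
# paradox(paradox) would halt. Either answer is wrong, so no general halts
# procedure can exist, and with it goes any general decision procedure of the
# kind Hilbert asked for.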

SEE ALSO Colossus (1943), EDVAC First Draft Report (1945), NP-Completeness (1971)

Statue of Alan Turing at Bletchley Park, the center of Britain’s codebreaking operations during World War II.

Fair Use Source: B07C2NQSPV