Categories
Artificial Intelligence, Cloud, Data Science - Big Data, Networking, Software Engineering

AIoT – Artificial intelligence of Things

” (WP)

The Artificial Intelligence of Things (AIoT) is the combination of Artificial intelligence (AI) technologies with the Internet of things (IoT) infrastructure to achieve more efficient IoT operations, improve human-machine interactions, and enhance data management and analytics.[1][2][3]


References

  1. ^ Ghosh, Iman (12 August 2020). “AIoT: When Artificial Intelligence Meets the Internet of Things”. Visual Capitalist. Retrieved 22 September 2020.
  2. ^ Lin, Yu-Jin; Chuang, Chen-Wei; Yen, Chun-Yueh; Huang, Sheng-Hsin; Huang, Peng-Wei; Chen, Ju-Yi; Lee, Shuenn-Yuh (March 2019). “Artificial Intelligence of Things Wearable System for Cardiac Disease Detection”. 2019 IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS): 67–70. doi:10.1109/AICAS.2019.8771630. Retrieved 22 September 2020.
  3. ^ Chu, William Cheng-Chung; Shih, Chihhsiong; Chou, Wen-Yi; Ahamed, Sheikh Iqbal; Hsiung, Pao-Ann (November 2019). “Artificial Intelligence of Things in Sports Science: Weight Training as an Example”. Computer. 52 (11): 52–61. doi:10.1109/MC.2019.2933772. ISSN 1558-0814. Retrieved 22 September 2020.


” (WP)

Sources:

Fair Use Sources:

Categories
Data Science - Big Data, Software Engineering

Ubiquitous computing – Ubicomp

” (WP)

Ubiquitous computing (or “ubicomp”) is a concept in software engineering and computer science where computing is made to appear anytime and everywhere. In contrast to desktop computing, ubiquitous computing can occur using any device, in any location, and in any format. A user interacts with the computer, which can exist in many different forms, including laptop computers, tablets, and terminals in everyday objects such as a refrigerator or a pair of glasses. The underlying technologies to support ubiquitous computing include the Internet, advanced middleware, operating systems, mobile code, sensors, microprocessors, new I/O and user interfaces, computer networks, mobile protocols, location and positioning, and new materials.

This paradigm is also described as pervasive computing,[1] ambient intelligence,[2] or “everyware”.[3] Each term emphasizes slightly different aspects. When primarily concerning the objects involved, it is also known as physical computing, the Internet of Things (IoT), haptic computing,[4] and “things that think”. Rather than propose a single definition for ubiquitous computing and for these related terms, a taxonomy of properties for ubiquitous computing has been proposed, from which different kinds or flavors of ubiquitous systems and applications can be described.[5]

Ubiquitous computing touches on distributed computing, mobile computing, location computing, mobile networking, sensor networks, human–computer interaction, context-aware smart home technologies, and artificial intelligence.

Core concepts

Ubiquitous computing is the concept of using small, inexpensive, internet-connected computers to help with everyday functions in an automated fashion. For example, a domestic ubiquitous computing environment might interconnect lighting and environmental controls with personal biometric monitors woven into clothing so that illumination and heating conditions in a room might be modulated, continuously and imperceptibly. Another common scenario posits refrigerators “aware” of their suitably tagged contents, able to both plan a variety of menus from the food actually on hand, and warn users of stale or spoiled food.[6]

Ubiquitous computing presents challenges across computer science: in systems design and engineering, in systems modelling, and in user interface design. Contemporary human-computer interaction models, whether command-line, menu-driven, or GUI-based, are inadequate for the ubiquitous case. This suggests that the “natural” interaction paradigm appropriate to fully robust ubiquitous computing has yet to emerge – although there is also recognition in the field that in many ways we are already living in a ubicomp world (see also the main article on natural user interfaces). Contemporary devices that lend some support to this latter idea include mobile phones, digital audio players, radio-frequency identification tags, GPS, and interactive whiteboards.

Mark Weiser proposed three basic forms for ubiquitous computing devices:[7]

  • Tabs: a wearable device that is approximately a centimeter in size
  • Pads: a hand-held device that is approximately a decimeter in size
  • Boards: an interactive larger display device that is approximately a meter in size

The ubiquitous computing devices proposed by Weiser are all based around flat devices of different sizes with a visual display.[8] Expanding beyond those concepts, there is a large array of other ubiquitous computing devices that could exist. Some of the additional forms that have been conceptualized are:[5]

  • Dust: miniaturized devices without visual output displays, e.g. micro-electromechanical systems (MEMS), ranging from nanometres through micrometres to millimetres. See also Smart dust.
  • Skin: fabrics based upon light-emitting and conductive polymers and organic computer devices, which can be formed into more flexible non-planar display surfaces and products such as clothes and curtains; see OLED display. MEMS devices can also be painted onto various surfaces so that a variety of physical-world structures can act as networked surfaces of MEMS.
  • Clay: ensembles of MEMS can be formed into arbitrary three-dimensional shapes as artefacts resembling many different kinds of physical object (see also tangible interface).

In Manuel Castells’ book The Rise of the Network Society, Castells puts forth the concept that there is going to be a continuous evolution of computing devices. He states that we will progress from stand-alone microcomputers and decentralized mainframes towards pervasive computing. Castells’ model of a pervasive computing system uses the example of the Internet as the start of a pervasive computing system. The logical progression from that paradigm is a system where that networking logic becomes applicable in every realm of daily activity, in every location and every context. Castells envisages a system where billions of miniature, ubiquitous inter-communication devices will be spread worldwide, “like pigment in the wall paint”.

Ubiquitous computing may be seen to consist of many layers, each with their own roles, which together form a single system (a toy sketch of these layers follows the list):

  • Layer 1: Task management layer
    • Monitors user tasks, context, and index
    • Maps the user’s task to the services needed in the environment
    • Manages complex dependencies
  • Layer 2: Environment management layer
    • Monitors resources and their capabilities
    • Maps service needs to user-level states of specific capabilities
  • Layer 3: Environment layer
    • Monitors relevant resources
    • Manages reliability of the resources
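
As a rough sketch of how these layers might cooperate, the toy Python classes below wire a task-management layer to an environment-management layer that resolves service needs against the resources monitored by an environment layer. All class, method, and capability names are hypothetical illustrations, not part of any standard ubicomp framework.

```python
# Minimal sketch of the three-layer model described above.
# Every name here is illustrative; no standard ubicomp API is implied.

class EnvironmentLayer:
    """Layer 3: monitors concrete resources and their availability."""
    def __init__(self, resources):
        self.resources = resources

    def available(self):
        return {name: r for name, r in self.resources.items() if r.get("online")}

class EnvironmentManagementLayer:
    """Layer 2: maps abstract service needs onto concrete capabilities."""
    def __init__(self, environment):
        self.environment = environment

    def find_service(self, need):
        # Naive matching: a resource satisfies a need if it lists that capability.
        for name, resource in self.environment.available().items():
            if need in resource.get("capabilities", []):
                return name
        return None

class TaskManagementLayer:
    """Layer 1: watches the user's task/context and requests services."""
    def __init__(self, manager):
        self.manager = manager

    def handle(self, context):
        # Map a context observation to a service need, then resolve it.
        if context.get("room_dark") and context.get("user_present"):
            device = self.manager.find_service("illumination")
            return f"activate {device}" if device else "no device available"
        return "nothing to do"

env = EnvironmentLayer({
    "hallway_lamp": {"online": True, "capabilities": ["illumination"]},
})
tasks = TaskManagementLayer(EnvironmentManagementLayer(env))
print(tasks.handle({"room_dark": True, "user_present": True}))  # activate hallway_lamp
```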

History

Mark Weiser coined the phrase “ubiquitous computing” around 1988, during his tenure as Chief Technologist of the Xerox Palo Alto Research Center (PARC). Both alone and with PARC Director and Chief Scientist John Seely Brown, Weiser wrote some of the earliest papers on the subject, largely defining it and sketching out its major concerns.[7][9][10]

Recognizing the effects of extending processing power

Recognizing that the extension of processing power into everyday scenarios would necessitate understandings of social, cultural and psychological phenomena beyond its proper ambit, Weiser was influenced by many fields outside computer science, including philosophy, phenomenology, anthropology, psychology, and the sociology of science. He was explicit about “the humanistic origins of the ‘invisible ideal’”,[10] referencing as well the ironically dystopian Philip K. Dick novel Ubik.

Andy Hopper of the University of Cambridge, UK, proposed and demonstrated the concept of “Teleporting”, in which applications follow the user wherever he or she moves.

Roy Want, while a researcher and student working under Andy Hopper at Cambridge University, worked on the “Active Badge System”, an advanced location computing system in which personal mobility is merged with computing.

Bill Schilit (now at Google) also did early work on this topic, and participated in the early Mobile Computing workshop held in Santa Cruz in 1996.

Ken Sakamura of the University of Tokyo, Japan, leads the Ubiquitous Networking Laboratory (UNL) in Tokyo as well as the T-Engine Forum. The joint goal of Sakamura’s Ubiquitous Networking specification and the T-Engine Forum is to enable any everyday device to broadcast and receive information.[11][12]

MIT has also contributed significant research in this field, notably the Things That Think consortium (directed by Hiroshi Ishii, Joseph A. Paradiso and Rosalind Picard) at the Media Lab[13] and the CSAIL effort known as Project Oxygen.[14] Other major contributors include the University of Washington’s Ubicomp Lab (directed by Shwetak Patel), Dartmouth College’s DartNets Lab, Georgia Tech’s College of Computing, Cornell University’s People Aware Computing Lab, NYU’s Interactive Telecommunications Program, UC Irvine’s Department of Informatics, Microsoft Research, Intel Research and Equator,[15] and Ajou University UCRi & CUS.[16]

Examples

One of the earliest ubiquitous systems was artist Natalie Jeremijenko‘s “Live Wire”, also known as “Dangling String”, installed at Xerox PARC during Mark Weiser’s time there.[17] This was a piece of string attached to a stepper motor and controlled by a LAN connection; network activity caused the string to twitch, yielding a peripherally noticeable indication of traffic. Weiser called this an example of calm technology.[18]
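
The mapping from network traffic to physical motion in Live Wire is simple enough to sketch. The following is a hypothetical Python reconstruction, not the original PARC code: a random-number generator stands in for the packet counter, and a no-op callback stands in for the stepper motor.

```python
# Hypothetical sketch of the "Dangling String" idea: LAN activity drives
# a stepper motor so that network traffic becomes peripherally visible.
import random
import time

def packets_per_second():
    # Stand-in for sniffing the LAN; a real version would count packets.
    return random.randint(0, 100)

def twitch_loop(motor_step, duration_s=5):
    end = time.time() + duration_s
    while time.time() < end:
        rate = packets_per_second()
        # More packets -> more steps -> livelier twitching of the string.
        for _ in range(rate // 10):
            motor_step()
        time.sleep(0.1)

twitch_loop(lambda: None)  # pass a real motor-step callback on hardware
```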

A present manifestation of this trend is the widespread diffusion of mobile phones. Many mobile phones support high-speed data transmission, video services, and other services with powerful computational ability. Although these mobile devices are not necessarily manifestations of ubiquitous computing, there are examples, such as Japan’s Yaoyorozu (“Eight Million Gods”) Project, in which mobile devices coupled with radio-frequency identification tags demonstrate that ubiquitous computing is already present in some form.[19]

Ambient Devices has produced an “orb”, a “dashboard”, and a “weather beacon”: these decorative devices receive data from a wireless network and report current events, such as stock prices and the weather, like the Nabaztag produced by Violet.

The Australian futurist Mark Pesce has produced a highly configurable, 52-LED, LAMP-enabled lamp that uses Wi-Fi, named MooresCloud after Gordon Moore.[20]

The Unified Computer Intelligence Corporation launched a device called Ubi – The Ubiquitous Computer designed to allow voice interaction with the home and provide constant access to information.[21]

Ubiquitous computing research has focused on building an environment in which computers allow humans to focus attention on select aspects of the environment and operate in supervisory and policy-making roles. Ubiquitous computing emphasizes the creation of a human-computer interface that can interpret and support a user’s intentions. For example, MIT’s Project Oxygen seeks to create a system in which computation is as pervasive as air:

In the future, computation will be human centered. It will be freely available everywhere, like batteries and power sockets, or oxygen in the air we breathe… We will not need to carry our own devices around with us. Instead, configurable generic devices, either handheld or embedded in the environment, will bring computation to us, whenever we need it and wherever we might be. As we interact with these “anonymous” devices, they will adopt our information personalities. They will respect our desires for privacy and security. We won’t have to type, click, or learn new computer jargon. Instead, we’ll communicate naturally, using speech and gestures that describe our intent…[22]

This is a fundamental transition that does not seek to escape the physical world and “enter some metallic, gigabyte-infested cyberspace” but rather brings computers and communications to us, making them “synonymous with the useful tasks they perform”.[19]

Network robots link ubiquitous networks with robots, contributing to the creation of new lifestyles and solutions to address a variety of social problems, including the aging of the population and nursing care.[23]

Issues

Privacy is easily the most often-cited criticism of ubiquitous computing (ubicomp), and may be the greatest barrier to its long-term success.[24]

Public policy problems are often “preceded by long shadows, long trains of activity”, emerging slowly, over decades or even the course of a century. There is a need for a long-term view to guide policy decision making, as this will assist in identifying long-term problems or opportunities related to the ubiquitous computing environment. This information can reduce uncertainty and guide the decisions of both policy makers and those directly involved in system development (Wedemeyer et al. 2001). One important consideration is the degree to which different opinions form around a single problem. Some issues may have strong consensus about their importance, even if there are great differences in opinion regarding the cause or solution. For example, few people will differ in their assessment of a highly tangible problem with physical impact such as terrorists using new weapons of mass destruction to destroy human life. The problem statements outlined above that address the future evolution of the human species or challenges to identity have clear cultural or religious implications and are likely to have greater variance in opinion about them.[19]

Ubiquitous computing research centres

This is a list of notable institutions that claim to have a focus on ubiquitous computing, sorted by country:

  • Canada: Topological Media Lab, Concordia University
  • Finland: Community Imaging Group, University of Oulu
  • Germany: Telecooperation Office (TECO), Karlsruhe Institute of Technology
  • India: Ubiquitous Computing Research Resource Centre (UCRC), Centre for Development of Advanced Computing[25]
  • Pakistan: Centre for Research in Ubiquitous Computing (CRUC), Karachi
  • Sweden: Mobile Life Centre, Stockholm University
  • United Kingdom: Mixed Reality Lab, University of Nottingham


References

  1. ^ Nieuwdorp, E. (2007). “The pervasive discourse”. Computers in Entertainment. 5 (2): 13. doi:10.1145/1279540.1279553. S2CID 17759896.
  2. ^ Hansmann, Uwe (2003). Pervasive Computing: The Mobile World. Springer. ISBN 978-3-540-00218-5.
  3. ^ Greenfield, Adam (2006). Everyware: The Dawning Age of Ubiquitous Computing. New Riders. pp. 11–12. ISBN 978-0-321-38401-0.
  4. ^ “World Haptics Conferences”. Haptics Technical Committee. Archived from the original on 16 November 2011.
  5. a b Poslad, Stefan (2009). Ubiquitous Computing Smart Devices, Smart Environments and Smart Interaction (PDF). Wiley. ISBN 978-0-470-03560-3.
  6. ^ Kang, Byeong-Ho (January 2007). “Ubiquitous Computing Environment Threats and Defensive Measures”. International Journal of Multimedia and Ubiquitous Engineering. 2 (1): 47–60. Retrieved 2019-03-22.
  7. a b Weiser, Mark (1991). “The Computer for the 21st Century”. Archived from the original on 22 October 2014.
  8. ^ Weiser, Mark (March 23, 1993). “Some Computer Science Issues in Ubiquitous Computing”. CACM. Retrieved May 28, 2019.
  9. ^ Weiser, M.; Gold, R.; Brown, J.S. (1999-05-11). “Ubiquitous computing”. Archived from the original on 10 March 2009.
  10. a b Weiser, Mark (17 March 1996). “Ubiquitous computing”. Archived from the original on 2 June 2018.
  11. ^ Krikke, J (2005). “T-Engine: Japan’s ubiquitous computing architecture is ready for prime time”. IEEE Pervasive Computing. 4 (2): 4–9. doi:10.1109/MPRV.2005.40. S2CID 11365911.
  12. ^ “T-Engine Forum Summary”. T-engine.org. Archived from the original on 21 October 2018. Retrieved 25 August 2011.
  13. ^ “MIT Media Lab – Things That Think Consortium”. MIT. Retrieved 2007-11-03.
  14. ^ “MIT Project Oxygen: Overview”. MIT. Retrieved 2007-11-03.
  15. ^ “Equator”. UCL. Retrieved 2009-11-19.
  16. ^ “Center of excellence for Ubiquitous System” (in Korean). CUS. Archived from the original on 2 October 2011.
  17. ^ Weiser, Mark (2017-05-03). “Designing Calm Technology”. Retrieved May 27, 2019.
  18. ^ Weiser, Mark; Gold, Rich; Brown, John Seely (1999). “The Origins of Ubiquitous Computing Research at PARC in the Late 1980s”. IBM Systems Journal. 38 (4): 693. doi:10.1147/sj.384.0693. S2CID 38805890.
  19. a b c Winter, Jenifer (December 2008). “Emerging Policy Problems Related to Ubiquitous Computing: Negotiating Stakeholders’ Visions of the Future”. Knowledge, Technology & Policy. 21 (4): 191–203. doi:10.1007/s12130-008-9058-4. hdl:10125/63534. S2CID 109339320.
  20. ^ Fingas, Jon (13 October 2012). “MooresCloud Light runs Linux, puts LAMP on your lamp (video)”. Engadget.com. Retrieved 22 March 2019.
  21. ^ “Ubi Cloud”. Theubi.com. Archived from the original on 2 January 2015.
  22. ^ “MIT Project Oxygen: Overview”. Archived from the original on July 5, 2004.
  23. ^ “Network Robot Forum”. Archived from the original on October 24, 2007.
  24. ^ Hong, Jason I.; Landay, James A. (June 2004). “An architecture for privacy-sensitive ubiquitous computing” (PDF). Proceedings of the 2nd international conference on Mobile systems, applications, and services – MobiSYS ’04. pp. 177–189. doi:10.1145/990064.990087. ISBN 1581137931. S2CID 3776760.
  25. ^ “Ubiquitous Computing Projects”. Department of Electronics & Information Technology (DeitY). Ministry of Communications & IT, Government of India. Archived from the original on 2015-07-07. Retrieved 2015-07-07.



” (WP)

Sources:

Fair Use Sources:

Categories
Artificial Intelligence, Cloud, Data Science - Big Data, Software Engineering

AI – Artificial intelligence

“AI” redirects here. For other uses, see AI (disambiguation) and Artificial intelligence (disambiguation).

See also: Artificial Intelligence (AI) Coined – 1955 AD

” (WP)

Artificial intelligence (AI) is intelligence demonstrated by machines, unlike the natural intelligence displayed by humans and animals, which involves consciousness and emotionality. The distinction between the former and the latter categories is often revealed by the acronym chosen. ‘Strong’ AI is usually labelled as artificial general intelligence (AGI) while attempts to emulate ‘natural’ intelligence have been called artificial biological intelligence (ABI). Leading AI textbooks define the field as the study of “intelligent agents“: any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[3] Colloquially, the term “artificial intelligence” is often used to describe machines that mimic “cognitive” functions that humans associate with the human mind, such as “learning” and “problem solving”.[4]

As machines become increasingly capable, tasks considered to require “intelligence” are often removed from the definition of AI, a phenomenon known as the AI effect.[5] A quip in Tesler’s Theorem says “AI is whatever hasn’t been done yet.”[6] For instance, optical character recognition is frequently excluded from things considered to be AI,[7] having become a routine technology.[8] Modern machine capabilities generally classified as AI include successfully understanding human speech,[9] competing at the highest level in strategic game systems (such as chess and Go),[10] and also imperfect-information games like poker,[11] self-driving cars, intelligent routing in content delivery networks, and military simulations.[12]

Artificial intelligence was founded as an academic discipline in 1955, and in the years since has experienced several waves of optimism,[13][14] followed by disappointment and the loss of funding (known as an “AI winter“),[15][16] followed by new approaches, success and renewed funding.[14][17] After AlphaGo successfully defeated a professional Go player in 2015, artificial intelligence once again attracted widespread global attention.[18] For most of its history, AI research has been divided into sub-fields that often fail to communicate with each other.[19] These sub-fields are based on technical considerations, such as particular goals (e.g. “robotics” or “machine learning“),[20] the use of particular tools (“logic” or artificial neural networks), or deep philosophical differences.[23][24][25] Sub-fields have also been based on social factors (particular institutions or the work of particular researchers).[19]

The traditional problems (or goals) of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception and the ability to move and manipulate objects.[20] AGI is among the field’s long-term goals.[26] Approaches include statistical methods, computational intelligence, and traditional symbolic AI. Many tools are used in AI, including versions of search and mathematical optimization, artificial neural networks, and methods based on statistics, probability and economics. The AI field draws upon computer science, information engineering, mathematics, psychology, linguistics, philosophy, and many other fields.

The field was founded on the assumption that human intelligence “can be so precisely described that a machine can be made to simulate it”.[27] This raises philosophical arguments about the mind and the ethics of creating artificial beings endowed with human-like intelligence. These issues have been explored by myth, fiction and philosophy since antiquity.[32] Some people also consider AI to be a danger to humanity if it progresses unabated.[33][34] Others believe that AI, unlike previous technological revolutions, will create a risk of mass unemployment.[35]

In the twenty-first century, AI techniques have experienced a resurgence following concurrent advances in computer power, large amounts of data, and theoretical understanding; and AI techniques have become an essential part of the technology industry, helping to solve many challenging problems in computer science, software engineering and operations research.[36][17]

History

Main articles: History of artificial intelligence and Timeline of artificial intelligence

Silver didrachma from Crete depicting Talos, an ancient mythical automaton with artificial intelligence

Thought-capable artificial beings appeared as storytelling devices in antiquity,[37] and have been common in fiction, as in Mary Shelley‘s Frankenstein or Karel Čapek‘s R.U.R.[38] These characters and their fates raised many of the same issues now discussed in the ethics of artificial intelligence.[32]

The study of mechanical or “formal” reasoning began with philosophers and mathematicians in antiquity. The study of mathematical logic led directly to Alan Turing’s theory of computation, which suggested that a machine, by shuffling symbols as simple as “0” and “1”, could simulate any conceivable act of mathematical deduction. This insight, that digital computers can simulate any process of formal reasoning, is known as the Church–Turing thesis.[39] Along with concurrent discoveries in neurobiology, information theory and cybernetics, this led researchers to consider the possibility of building an electronic brain. Turing proposed changing the question from whether a machine was intelligent, to “whether or not it is possible for machinery to show intelligent behaviour”.[40] The first work that is now generally recognized as AI was McCulloch and Pitts’ 1943 formal design for Turing-complete “artificial neurons”.[41]

The field of AI research was born at a workshop at Dartmouth College in 1956,[42] where the term “Artificial Intelligence” was coined by John McCarthy to distinguish the field from cybernetics and escape the influence of the cyberneticist Norbert Wiener.[43] Attendees Allen Newell (CMU), Herbert Simon (CMU), John McCarthy (MIT), Marvin Minsky (MIT) and Arthur Samuel (IBM) became the founders and leaders of AI research.[44] They and their students produced programs that the press described as “astonishing”:[45] computers were learning checkers strategies (c. 1954)[46] (and by 1959 were reportedly playing better than the average human),[47] solving word problems in algebra, proving logical theorems (Logic Theorist, first run c. 1956) and speaking English.[48] By the middle of the 1960s, research in the U.S. was heavily funded by the Department of Defense[49] and laboratories had been established around the world.[50] AI’s founders were optimistic about the future: Herbert Simon predicted, “machines will be capable, within twenty years, of doing any work a man can do”. Marvin Minsky agreed, writing, “within a generation … the problem of creating ‘artificial intelligence’ will substantially be solved”.[13]

They failed to recognize the difficulty of some of the remaining tasks. Progress slowed and in 1974, in response to the criticism of Sir James Lighthill[51] and ongoing pressure from the US Congress to fund more productive projects, both the U.S. and British governments cut off exploratory research in AI. The next few years would later be called an “AI winter“,[15] a period when obtaining funding for AI projects was difficult.

In the early 1980s, AI research was revived by the commercial success of expert systems,[52] a form of AI program that simulated the knowledge and analytical skills of human experts. By 1985, the market for AI had reached over a billion dollars. At the same time, Japan’s fifth generation computer project inspired the U.S and British governments to restore funding for academic research.[14] However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting hiatus began.[16]

The development of metal–oxide–semiconductor (MOS) very-large-scale integration (VLSI), in the form of complementary MOS (CMOS) transistor technology, enabled the development of practical artificial neural network (ANN) technology in the 1980s. A landmark publication in the field was the 1989 book Analog VLSI Implementation of Neural Systems by Carver A. Mead and Mohammed Ismail.[53]

In the late 1990s and early 21st century, AI began to be used for logistics, data mining, medical diagnosis and other areas.[36] The success was due to increasing computational power (see Moore’s law and transistor count), greater emphasis on solving specific problems, new ties between AI and other fields (such as statistics, economics and mathematics), and a commitment by researchers to mathematical methods and scientific standards.[54] Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov, on 11 May 1997.[55]

In 2011, in a Jeopardy! quiz show exhibition match, IBM’s question answering system Watson defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin.[56] Faster computers, algorithmic improvements, and access to large amounts of data enabled advances in machine learning and perception; data-hungry deep learning methods started to dominate accuracy benchmarks around 2012.[57] The Kinect, which provides a 3D body–motion interface for the Xbox 360 and the Xbox One, uses algorithms that emerged from lengthy AI research,[58] as do intelligent personal assistants in smartphones.[59] In March 2016, AlphaGo won 4 out of 5 games of Go in a match with Go champion Lee Sedol, becoming the first computer Go-playing system to beat a professional Go player without handicaps.[10][60] At the 2017 Future of Go Summit, AlphaGo won a three-game match with Ke Jie,[61] who at the time had continuously held the world No. 1 ranking for two years.[62][63] Deep Blue’s Murray Campbell called AlphaGo’s victory “the end of an era… board games are more or less done[64] and it’s time to move on.”[65] This marked the completion of a significant milestone in the development of artificial intelligence, as Go is a relatively complex game, more so than chess. AlphaGo was later improved and generalized to other games, such as chess, with AlphaZero;[66] MuZero[67] in turn was developed to play many different video games, which were previously handled separately,[68] in addition to board games. Other programs handle imperfect-information games, such as the poker bots Pluribus[69] and Cepheus,[11] which play poker at a superhuman level. See: General game playing.

According to Bloomberg’s Jack Clark, 2015 was a landmark year for artificial intelligence, with the number of software projects that use AI within Google increasing from a “sporadic usage” in 2012 to more than 2,700 projects. Clark also presents factual data indicating the improvements of AI since 2012 supported by lower error rates in image processing tasks.[70] He attributes this to an increase in affordable neural networks, due to a rise in cloud computing infrastructure and to an increase in research tools and datasets.[17] Other cited examples include Microsoft’s development of a Skype system that can automatically translate from one language to another and Facebook’s system that can describe images to blind people.[70] In a 2017 survey, one in five companies reported they had “incorporated AI in some offerings or processes”.[71][72] Around 2016, China greatly accelerated its government funding; given its large supply of data and its rapidly increasing research output, some observers believe it may be on track to becoming an “AI superpower”.[73][74]

By 2020, Natural Language Processing systems such as the enormous GPT-3 (then by far the largest artificial neural network) were matching human performance on pre-existing benchmarks, albeit without the system attaining commonsense understanding of the contents of the benchmarks.[75] DeepMind’s AlphaFold 2 (2020) demonstrated the ability to determine, in hours rather than months, the 3D structure of a protein. Facial recognition advanced to where, under some circumstances, some systems claim to have a 99% accuracy rate.[76]

Basics

Computer science defines AI research as the study of “intelligent agents“: any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[3] A more elaborate definition characterizes AI as “a system’s ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation.”[77]

A typical AI analyzes its environment and takes actions that maximize its chance of success.[3] An AI’s intended utility function (or goal) can be simple (“1 if the AI wins a game of Go, 0 otherwise”) or complex (“Perform actions mathematically similar to ones that succeeded in the past”). Goals can be explicitly defined or induced. If the AI is programmed for “reinforcement learning”, goals can be implicitly induced by rewarding some types of behavior or punishing others.[a] Alternatively, an evolutionary system can induce goals by using a “fitness function” to mutate and preferentially replicate high-scoring AI systems, similar to how animals evolved to innately desire certain goals such as finding food.[78] Some AI systems, such as nearest-neighbor, reason by analogy instead; these systems are not generally given goals, except to the degree that goals are implicit in their training data.[79] Such systems can still be benchmarked if the non-goal system is framed as a system whose “goal” is to successfully accomplish its narrow classification task.[80]
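
As a toy illustration of goals induced by a fitness function, the sketch below evolves a population of numbers toward the implicit goal of maximizing an invented score; every constant in it is arbitrary.

```python
# Sketch of an evolutionary loop: the "fitness function" implicitly defines
# the goal, and high-scoring candidates are preferentially replicated.
import random

def fitness(x):
    # The implicit goal: maximize this score (its peak is at x = 3).
    return -(x - 3.0) ** 2

population = [random.uniform(-10, 10) for _ in range(20)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                      # keep the better half
    mutants = [x + random.gauss(0, 0.1) for x in survivors]
    population = survivors + mutants                 # replicate with mutation

print(round(max(population, key=fitness), 2))  # converges near 3.0
```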

AI often revolves around the use of algorithms. An algorithm is a set of unambiguous instructions that a mechanical computer can execute.[b] A complex algorithm is often built on top of other, simpler, algorithms. A simple example of an algorithm is the following (optimal for first player) recipe for play at tic-tac-toe (a direct transcription into code appears after the list):[81]

  1. If someone has a “threat” (that is, two in a row), take the remaining square. Otherwise,
  2. if a move “forks” to create two threats at once, play that move. Otherwise,
  3. take the center square if it is free. Otherwise,
  4. if your opponent has played in a corner, take the opposite corner. Otherwise,
  5. take an empty corner if one exists. Otherwise,
  6. take any empty square.
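
Assuming the board is a simple Python list, the recipe above can be transcribed almost rule for rule. The sketch below is one possible rendering, not a canonical implementation.

```python
# A sketch transcribing the six rules above; assumes a 9-cell board list
# holding 'X', 'O', or None. Rule 1 covers both winning and blocking,
# since "someone" having a threat includes either player.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]
CORNERS, OPPOSITE = [0, 2, 6, 8], {0: 8, 2: 6, 6: 2, 8: 0}

def winning_square(board, player):
    # A "threat" is two-in-a-row with the third square still empty.
    for a, b, c in LINES:
        line = [board[a], board[b], board[c]]
        if line.count(player) == 2 and line.count(None) == 1:
            return (a, b, c)[line.index(None)]
    return None

def creates_fork(board, square, player):
    # A "fork" leaves two simultaneous threats after the move.
    trial = board[:]
    trial[square] = player
    threats = sum(1 for a, b, c in LINES
                  if [trial[a], trial[b], trial[c]].count(player) == 2
                  and [trial[a], trial[b], trial[c]].count(None) == 1)
    return threats >= 2

def best_move(board, me='X', opponent='O'):
    for player in (me, opponent):             # Rule 1: win, else block
        square = winning_square(board, player)
        if square is not None:
            return square
    for square in range(9):                   # Rule 2: create a fork
        if board[square] is None and creates_fork(board, square, me):
            return square
    if board[4] is None:                      # Rule 3: take the center
        return 4
    for corner in CORNERS:                    # Rule 4: opposite corner
        if board[corner] == opponent and board[OPPOSITE[corner]] is None:
            return OPPOSITE[corner]
    for corner in CORNERS:                    # Rule 5: any empty corner
        if board[corner] is None:
            return corner
    return next(s for s in range(9) if board[s] is None)  # Rule 6

print(best_move([None] * 9))  # 4: the center, by rule 3
```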

Many AI algorithms are capable of learning from data; they can enhance themselves by learning new heuristics (strategies, or “rules of thumb”, that have worked well in the past), or can themselves write other algorithms. Some of the “learners” described below, including Bayesian networks, decision trees, and nearest-neighbor, could theoretically (given infinite data, time, and memory) learn to approximate any function, including which combination of mathematical functions would best describe the world.[citation needed] These learners could therefore derive all possible knowledge, by considering every possible hypothesis and matching them against the data. In practice, it is seldom possible to consider every possibility, because of the phenomenon of “combinatorial explosion”, where the time needed to solve a problem grows exponentially. Much of AI research involves figuring out how to identify and avoid considering a broad range of possibilities unlikely to be beneficial.[82][83] For example, when viewing a map and looking for the shortest driving route from Denver to New York in the East, one can in most cases skip looking at any path through San Francisco or other areas far to the West; thus, an AI wielding a pathfinding algorithm like A* can avoid the combinatorial explosion that would ensue if every possible route had to be ponderously considered.[84]
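
A minimal A* sketch over an invented toy road graph illustrates the point: the straight-line-distance heuristic (the mileage figures are made up) keeps the search from ever expanding the westward detour through San Francisco.

```python
# Sketch of A* search; the heuristic steers expansion east toward the goal.
import heapq

graph = {  # city -> [(neighbor, road_miles)]; illustrative numbers only
    "Denver": [("Chicago", 1000), ("San Francisco", 1250)],
    "Chicago": [("New York", 790)],
    "San Francisco": [("New York", 2900)],
    "New York": [],
}
heuristic = {  # rough straight-line miles to New York (admissible guesses)
    "Denver": 1630, "Chicago": 710, "San Francisco": 2570, "New York": 0,
}

def a_star(start, goal):
    frontier = [(heuristic[start], 0, start, [start])]  # (estimate, cost, node, path)
    visited = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, miles in graph[node]:
            estimate = cost + miles + heuristic[neighbor]
            heapq.heappush(frontier, (estimate, cost + miles, neighbor, path + [neighbor]))
    return None

print(a_star("Denver", "New York"))  # (1790, ['Denver', 'Chicago', 'New York'])
```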

The earliest (and easiest to understand) approach to AI was symbolism (such as formal logic): “If an otherwise healthy adult has a fever, then they may have influenza”. A second, more general, approach is Bayesian inference: “If the current patient has a fever, adjust the probability they have influenza in such-and-such way”. The third major approach, extremely popular in routine business AI applications, is analogizers such as SVM and nearest-neighbor: “After examining the records of known past patients whose temperature, symptoms, age, and other factors mostly match the current patient, X% of those patients turned out to have influenza”. A fourth approach is harder to intuitively understand, but is inspired by how the brain’s machinery works: the artificial neural network approach uses artificial “neurons” that can learn by comparing their output to the desired output and altering the strengths of the connections between internal neurons to “reinforce” connections that seemed to be useful. These four main approaches can overlap with each other and with evolutionary systems; for example, neural nets can learn to make inferences, to generalize, and to make analogies. Some systems implicitly or explicitly use multiple of these approaches, alongside many other AI and non-AI algorithms; the best approach is often different depending on the problem.[85][86]
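
The Bayesian update in the second approach is just Bayes' rule applied to the influenza example; the probabilities below are invented for illustration.

```python
# Toy Bayesian update: P(flu | fever) from made-up illustrative numbers.
p_flu = 0.05              # prior: 5% of patients have influenza
p_fever_given_flu = 0.90  # flu patients usually run a fever
p_fever_given_not = 0.10  # fevers also occur without flu

p_fever = p_fever_given_flu * p_flu + p_fever_given_not * (1 - p_flu)
p_flu_given_fever = p_fever_given_flu * p_flu / p_fever
print(round(p_flu_given_fever, 3))  # 0.321: the fever raises a 5% prior to ~32%
```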

Learning algorithms work on the basis that strategies, algorithms, and inferences that worked well in the past are likely to continue working well in the future. These inferences can be obvious, such as “since the sun rose every morning for the last 10,000 days, it will probably rise tomorrow morning as well”. They can be nuanced, such as “X% of families have geographically separate species with color variants, so there is a Y% chance that undiscovered black swans exist”. Learners also work on the basis of “Occam’s razor”: The simplest theory that explains the data is the likeliest. Therefore, according to Occam’s razor principle, a learner must be designed such that it prefers simpler theories to complex theories, except in cases where the complex theory is proven substantially better.

The blue line could be an example of overfitting a linear function due to random noise.

Settling on a bad, overly complex theory gerrymandered to fit all the past training data is known as overfitting. Many systems attempt to reduce overfitting by rewarding a theory in accordance with how well it fits the data, but penalizing the theory in accordance with how complex the theory is.[87] Besides classic overfitting, learners can also disappoint by “learning the wrong lesson”. A toy example is that an image classifier trained only on pictures of brown horses and black cats might conclude that all brown patches are likely to be horses.[88] A real-world example is that, unlike humans, current image classifiers often don’t primarily make judgments from the spatial relationship between components of the picture, and they learn relationships between pixels that humans are oblivious to, but that still correlate with images of certain types of real objects. Modifying these patterns on a legitimate image can result in “adversarial” images that the system misclassifies.[c][89][90]

A self-driving car system may use a neural network to determine which parts of the picture seem to match previous training images of pedestrians, and then model those areas as slow-moving but somewhat unpredictable rectangular prisms that must be avoided.
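
The reward-fit-but-penalize-complexity idea can be sketched as a score that adds a per-parameter penalty to the training error, so a model that memorizes the data loses to a simpler one that captures the trend. The candidate models, the penalty weight, and the assumption that the linear coefficients were already fitted are all illustrative.

```python
# Sketch of complexity-penalized model selection: score = fit error plus
# a penalty per parameter, so the simplest adequate theory wins.
import random

random.seed(0)
xs = list(range(10))
ys = [2 * x + 1 + random.gauss(0, 0.3) for x in xs]  # linear trend + noise

def sse(predict):
    return sum((predict(x) - y) ** 2 for x, y in zip(xs, ys))

mean_y = sum(ys) / len(ys)
models = {  # name -> (parameter count, predictor); complexity grows downward
    "constant": (1, lambda x: mean_y),
    "linear": (2, lambda x: 2 * x + 1),           # assumes coefficients already fitted
    "memorize": (10, lambda x: ys[xs.index(x)]),  # zero training error
}

def score(name, penalty=1.0):
    k, predict = models[name]
    return sse(predict) + penalty * k  # reward fit, penalize complexity

print(min(models, key=score))  # linear: memorizing fits better but pays too much
```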

Compared with humans, existing AI lacks several features of human “commonsense reasoning“; most notably, humans have powerful mechanisms for reasoning about “naïve physics” such as space, time, and physical interactions. This enables even young children to easily make inferences like “If I roll this pen off a table, it will fall on the floor”. Humans also have a powerful mechanism of “folk psychology” that helps them to interpret natural-language sentences such as “The city councilmen refused the demonstrators a permit because they advocated violence” (A generic AI has difficulty discerning whether the ones alleged to be advocating violence are the councilmen or the demonstrators[91][92][93]). This lack of “common knowledge” means that AI often makes different mistakes than humans make, in ways that can seem incomprehensible. For example, existing self-driving cars cannot reason about the location nor the intentions of pedestrians in the exact way that humans do, and instead must use non-human modes of reasoning to avoid accidents.[94][95][96]

Challenges

The cognitive capabilities of current architectures are very limited, using only a simplified version of what intelligence is really capable of. For instance, the human mind has come up with ways to reason beyond measure and logical explanations to different occurrences in life. A problem that is straightforward for the human mind may be challenging to solve computationally. This gives rise to two classes of models: structuralist and functionalist. The structural models aim to loosely mimic the basic intelligence operations of the mind such as reasoning and logic. The functional model refers to correlating data to its computed counterpart.[97]

The overall research goal of artificial intelligence is to create technology that allows computers and machines to function in an intelligent manner. The general problem of simulating (or creating) intelligence has been broken down into sub-problems. These consist of particular traits or capabilities that researchers expect an intelligent system to display. The traits described below have received the most attention.[20]

Reasoning, problem solving

Early researchers developed algorithms that imitated step-by-step reasoning that humans use when they solve puzzles or make logical deductions.[98] By the late 1980s and 1990s, AI research had developed methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.[99]

These algorithms proved to be insufficient for solving large reasoning problems because they experienced a “combinatorial explosion”: they became exponentially slower as the problems grew larger.[82] Even humans rarely use the step-by-step deduction that early AI research could model. They solve most of their problems using fast, intuitive judgments.[100]

Knowledge representation

Main articles: Knowledge representation and Commonsense knowledge

An ontology represents knowledge as a set of concepts within a domain and the relationships between those concepts.

Knowledge representation[101] and knowledge engineering[102] are central to classical AI research. Some “expert systems” attempt to gather explicit knowledge possessed by experts in some narrow domain. In addition, some projects attempt to gather the “commonsense knowledge” known to the average person into a database containing extensive knowledge about the world. Among the things a comprehensive commonsense knowledge base would contain are: objects, properties, categories and relations between objects;[103] situations, events, states and time;[104] causes and effects;[105] knowledge about knowledge (what we know about what other people know);[106] and many other, less well researched domains. A representation of “what exists” is an ontology: the set of objects, relations, concepts, and properties formally described so that software agents can interpret them. The semantics of these are captured as description logic concepts, roles, and individuals, and typically implemented as classes, properties, and individuals in the Web Ontology Language.[107] The most general ontologies are called upper ontologies, which attempt to provide a foundation for all other knowledge[108] by acting as mediators between domain ontologies that cover specific knowledge about a particular knowledge domain (field of interest or area of concern). Such formal knowledge representations can be used in content-based indexing and retrieval,[109] scene interpretation,[110] clinical decision support,[111] knowledge discovery (mining “interesting” and actionable inferences from large databases),[112] and other areas.[113]
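
A toy sketch of one core ontology service, subsumption: class membership propagates up a subclass hierarchy. The miniature domain below is invented, and this is far simpler than description logic or the Web Ontology Language.

```python
# Toy ontology: a subclass hierarchy plus individuals, with the inference
# that an individual belongs to every superclass of its asserted class.
subclass_of = {
    "Penguin": "Bird",
    "Bird": "Animal",
    "Animal": "Thing",   # "Thing" plays the role of an upper-ontology root
}
instance_of = {"pingu": "Penguin"}

def is_a(individual, cls):
    current = instance_of.get(individual)
    while current is not None:
        if current == cls:
            return True
        current = subclass_of.get(current)  # walk up the hierarchy
    return False

print(is_a("pingu", "Animal"))  # True, via Penguin -> Bird -> Animal
```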

Among the most difficult problems in knowledge representation are:

  • Default reasoning and the qualification problem: Many of the things people know take the form of “working assumptions”. For example, if a bird comes up in conversation, people typically picture a fist-sized animal that sings and flies. None of these things are true about all birds. John McCarthy identified this problem in 1969[114] as the qualification problem: for any commonsense rule that AI researchers care to represent, there tend to be a huge number of exceptions. Almost nothing is simply true or false in the way that abstract logic requires. AI research has explored a number of solutions to this problem.[115]
  • Breadth of commonsense knowledge: The number of atomic facts that the average person knows is very large. Research projects that attempt to build a complete knowledge base of commonsense knowledge (e.g., Cyc) require enormous amounts of laborious ontological engineering—they must be built, by hand, one complicated concept at a time.[116]
  • Subsymbolic form of some commonsense knowledge: Much of what people know is not represented as “facts” or “statements” that they could express verbally. For example, a chess master will avoid a particular chess position because it “feels too exposed”[117] or an art critic can take one look at a statue and realize that it is a fake.[118] These are non-conscious and sub-symbolic intuitions or tendencies in the human brain.[119] Knowledge like this informs, supports and provides a context for symbolic, conscious knowledge. As with the related problem of sub-symbolic reasoning, it is hoped that situated AI, computational intelligence, or statistical AI will provide ways to represent this knowledge.[119]

Planning

Main article: Automated planning and scheduling

A hierarchical control system is a form of control system in which a set of devices and governing software is arranged in a hierarchy.

Intelligent agents must be able to set goals and achieve them.[120] They need a way to visualize the future—a representation of the state of the world and be able to make predictions about how their actions will change it—and be able to make choices that maximize the utility (or “value”) of available choices.[121]

In classical planning problems, the agent can assume that it is the only system acting in the world, allowing the agent to be certain of the consequences of its actions.[122] However, if the agent is not the only actor, then it requires that the agent can reason under uncertainty. This calls for an agent that can not only assess its environment and make predictions but also evaluate its predictions and adapt based on its assessment.[123]

Multi-agent planning uses the cooperation and competition of many agents to achieve a given goal. Emergent behavior such as this is used by evolutionary algorithms and swarm intelligence.[124]

Learning

Main article: Machine learning

For this project the AI had to find the typical patterns in the colors and brushstrokes of Renaissance painter Raphael. The portrait shows the face of the actress Ornella Muti, “painted” by AI in the style of Raphael.

Machine learning (ML), a fundamental concept of AI research since the field’s inception,[d] is the study of computer algorithms that improve automatically through experience.[e][127]

Unsupervised learning is the ability to find patterns in a stream of input, without requiring a human to label the inputs first. Supervised learning includes both classification and numerical regression, which requires a human to label the input data first. Classification is used to determine what category something belongs in, and occurs after a program sees a number of examples of things from several categories. Regression is the attempt to produce a function that describes the relationship between inputs and outputs and predicts how the outputs should change as the inputs change.[127] Both classifiers and regression learners can be viewed as “function approximators” trying to learn an unknown (possibly implicit) function; for example, a spam classifier can be viewed as learning a function that maps from the text of an email to one of two categories, “spam” or “not spam”. Computational learning theory can assess learners by computational complexity, by sample complexity (how much data is required), or by other notions of optimization.[128] In reinforcement learning[129] the agent is rewarded for good responses and punished for bad ones. The agent uses this sequence of rewards and punishments to form a strategy for operating in its problem space.
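
As a minimal sketch of a classifier acting as a “function approximator”, the nearest-neighbor toy below maps invented email features to a spam/not-spam label; the features and examples are made up.

```python
# Nearest-neighbor classification: label a new point by the closest example.
examples = [  # ([exclamation_marks, mentions_money], label); invented data
    ([5, 1], "spam"), ([7, 1], "spam"),
    ([0, 0], "not spam"), ([1, 0], "not spam"),
]

def classify(features):
    def squared_distance(example):
        vector, _ = example
        return sum((a - b) ** 2 for a, b in zip(vector, features))
    _, label = min(examples, key=squared_distance)
    return label

print(classify([6, 1]))  # spam: the nearest labeled example is ([5, 1], "spam")
```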

Natural language processing

Main article: Natural language processing

A parse tree represents the syntactic structure of a sentence according to some formal grammar.

Natural language processing[130] (NLP) allows machines to read and understand human language. A sufficiently powerful natural language processing system would enable natural-language user interfaces and the acquisition of knowledge directly from human-written sources, such as newswire texts. Some straightforward applications of natural language processing include information retrieval, text mining, question answering[131] and machine translation.[132] Many current approaches use word co-occurrence frequencies to construct syntactic representations of text. “Keyword spotting” strategies for search are popular and scalable but dumb; a search query for “dog” might only match documents with the literal word “dog” and miss a document with the word “poodle”. “Lexical affinity” strategies use the occurrence of words such as “accident” to assess the sentiment of a document. Modern statistical NLP approaches can combine all these strategies as well as others, and often achieve acceptable accuracy at the page or paragraph level. Beyond semantic NLP, the ultimate goal of “narrative” NLP is to embody a full understanding of commonsense reasoning.[133] By 2019, transformer-based deep learning architectures could generate coherent text.[134]
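
The difference between “dumb” keyword spotting and lexical-affinity scoring can be sketched in a few lines; the documents and the affinity lexicon below are invented.

```python
# Keyword spotting vs. lexical affinity, on three invented documents.
docs = ["my dog barked", "the poodle slept", "a car accident on i80"]

def keyword_spot(query, documents):
    # Literal matching: misses the poodle document for the query "dog".
    return [d for d in documents if query in d.split()]

NEGATIVE_WORDS = {"accident", "crash", "injury"}  # illustrative lexicon

def negative_affinity(document):
    # Crude sentiment score: fraction of words with negative affinity.
    words = document.split()
    return sum(w in NEGATIVE_WORDS for w in words) / len(words)

print(keyword_spot("dog", docs))                       # ['my dog barked']
print([round(negative_affinity(d), 2) for d in docs])  # [0.0, 0.0, 0.2]
```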

Perception

Main articles: Machine perception, Computer vision, and Speech recognition

Feature detection (pictured: edge detection) helps AI compose informative abstract structures out of raw data.

Machine perception[135] is the ability to use input from sensors (such as cameras (visible spectrum or infrared), microphones, wireless signals, and active lidar, sonar, radar, and tactile sensors) to deduce aspects of the world. Applications include speech recognition,[136] facial recognition, and object recognition.[137] Computer vision is the ability to analyze visual input. Such input is usually ambiguous; a giant, fifty-meter-tall pedestrian far away may produce the same pixels as a nearby normal-sized pedestrian, requiring the AI to judge the relative likelihood and reasonableness of different interpretations, for example by using its “object model” to assess that fifty-meter pedestrians do not exist.[138]

Motion and manipulation

Main article: Robotics

AI is heavily used in robotics.[139] Advanced robotic arms and other industrial robots, widely used in modern factories, can learn from experience how to move efficiently despite the presence of friction and gear slippage.[140] A modern mobile robot, when given a small, static, and visible environment, can easily determine its location and map its environment; however, dynamic environments, such as (in endoscopy) the interior of a patient’s breathing body, pose a greater challenge. Motion planning is the process of breaking down a movement task into “primitives” such as individual joint movements. Such movement often involves compliant motion, a process where movement requires maintaining physical contact with an object.[141][142][143] Moravec’s paradox generalizes that low-level sensorimotor skills that humans take for granted are, counterintuitively, difficult to program into a robot; the paradox is named after Hans Moravec, who stated in 1988 that “it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility”.[144][145] This is attributed to the fact that, unlike checkers, physical dexterity has been a direct target of natural selection for millions of years.[146]

Social intelligence

Main article: Affective computing

Kismet, a robot with rudimentary social skills[147]

Moravec’s paradox can be extended to many forms of social intelligence.[148][149] Distributed multi-agent coordination of autonomous vehicles remains a difficult problem.[150] Affective computing is an interdisciplinary umbrella that comprises systems which recognize, interpret, process, or simulate human affects.[151][152][153] Moderate successes related to affective computing include textual sentiment analysis and, more recently, multimodal affect analysis (see multimodal sentiment analysis), wherein AI classifies the affects displayed by a videotaped subject.[154]

In the long run, social skills and an understanding of human emotion and game theory would be valuable to a social agent. The ability to predict the actions of others by understanding their motives and emotional states would allow an agent to make better decisions. Some computer systems mimic human emotion and expressions to appear more sensitive to the emotional dynamics of human interaction, or to otherwise facilitate human–computer interaction.[155] Similarly, some virtual assistants are programmed to speak conversationally or even to banter humorously; this tends to give naïve users an unrealistic conception of how intelligent existing computer agents actually are.[156]

General intelligence

Main articles: Artificial general intelligence and AI-complete

Historically, projects such as the Cyc knowledge base (1984–) and the massive Japanese Fifth Generation Computer Systems initiative (1982–1992) attempted to cover the breadth of human cognition. These early projects failed to escape the limitations of non-quantitative symbolic logic models and, in retrospect, greatly underestimated the difficulty of cross-domain AI. Nowadays, most current AI researchers work instead on tractable “narrow AI” applications (such as medical diagnosis or automobile navigation).[157] Many researchers predict that such “narrow AI” work in different individual domains will eventually be incorporated into a machine with artificial general intelligence (AGI), combining most of the narrow skills mentioned in this article and at some point even exceeding human ability in most or all these areas.[26][158] Many advances have general, cross-domain significance. One high-profile example is that DeepMind in the 2010s developed a “generalized artificial intelligence” that could learn many diverse Atari games on its own, and later developed a variant of the system which succeeds at sequential learning.[159][160][161] Besides transfer learning,[162] hypothetical AGI breakthroughs could include the development of reflective architectures that can engage in decision-theoretic metareasoning, and figuring out how to “slurp up” a comprehensive knowledge base from the entire unstructured Web.[163] Some argue that some kind of (currently-undiscovered) conceptually straightforward, but mathematically difficult, “Master Algorithm” could lead to AGI.[164] Finally, a few “emergent” approaches look to simulating human intelligence extremely closely, and believe that anthropomorphic features like an artificial brain or simulated child development may someday reach a critical point where general intelligence emerges.[165][166]

Many of the problems in this article may also require general intelligence, if machines are to solve the problems as well as people do. For example, even specific straightforward tasks, like machine translation, require that a machine read and write in both languages (NLP), follow the author’s argument (reason), know what is being talked about (knowledge), and faithfully reproduce the author’s original intent (social intelligence). A problem like machine translation is considered “AI-complete“, because all of these problems need to be solved simultaneously in order to reach human-level machine performance.

Approaches

No established unifying theory or paradigm guides AI research. Researchers disagree about many issues.[f] A few of the most long-standing questions that have remained unanswered are these: should artificial intelligence simulate natural intelligence by studying psychology or neurobiology? Or is human biology as irrelevant to AI research as bird biology is to aeronautical engineering?[23] Can intelligent behavior be described using simple, elegant principles (such as logic or optimization)? Or does it necessarily require solving a large number of unrelated problems?[24]

Cybernetics and brain simulation

Main articles: Cybernetics and Computational neuroscience

In the 1940s and 1950s, a number of researchers explored the connection between neurobiology, information theory, and cybernetics. Some of them built machines that used electronic networks to exhibit rudimentary intelligence, such as W. Grey Walter’s turtles and the Johns Hopkins Beast. Many of these researchers gathered for meetings of the Teleological Society at Princeton University and the Ratio Club in England.[168] By 1960, this approach was largely abandoned, although elements of it would be revived in the 1980s.

Symbolic

Main article: Symbolic AI

When access to digital computers became possible in the mid-1950s, AI research began to explore the possibility that human intelligence could be reduced to symbol manipulation. The research was centered in three institutions: Carnegie Mellon University, Stanford, and MIT, and as described below, each one developed its own style of research. John Haugeland named these symbolic approaches to AI “good old fashioned AI” or “GOFAI”.[169] During the 1960s, symbolic approaches had achieved great success at simulating high-level “thinking” in small demonstration programs. Approaches based on cybernetics or artificial neural networks were abandoned or pushed into the background.[g] Researchers in the 1960s and the 1970s were convinced that symbolic approaches would eventually succeed in creating a machine with artificial general intelligence and considered this the goal of their field.

Cognitive simulation

Economist Herbert Simon and Allen Newell studied human problem-solving skills and attempted to formalize them, and their work laid the foundations of the field of artificial intelligence, as well as cognitive science, operations research and management science. Their research team used the results of psychological experiments to develop programs that simulated the techniques that people used to solve problems. This tradition, centered at Carnegie Mellon University, would eventually culminate in the development of the Soar architecture in the middle 1980s.[170][171]

Logic-based

Unlike Simon and Newell, John McCarthy felt that machines did not need to simulate human thought, but should instead try to find the essence of abstract reasoning and problem-solving, regardless of whether people used the same algorithms.[23] His laboratory at Stanford (SAIL) focused on using formal logic to solve a wide variety of problems, including knowledge representation, planning and learning.[172] Logic was also the focus of the work at the University of Edinburgh and elsewhere in Europe which led to the development of the programming language Prolog and the science of logic programming.[173]

Anti-logic or scruffy

Researchers at MIT (such as Marvin Minsky and Seymour Papert)[174] found that solving difficult problems in vision and natural language processing required ad hoc solutions—they argued that no simple and general principle (like logic) would capture all the aspects of intelligent behavior. Roger Schank described their “anti-logic” approaches as “scruffy” (as opposed to the “neat” paradigms at CMU and Stanford).[24] Commonsense knowledge bases (such as Doug Lenat‘s Cyc) are an example of “scruffy” AI, since they must be built by hand, one complicated concept at a time.[175]

Knowledge-based

When computers with large memories became available around 1970, researchers from all three traditions began to build knowledge into AI applications.[176] This "knowledge revolution" led to the development and deployment of expert systems (introduced by Edward Feigenbaum), the first truly successful form of AI software.[52] A key component of the system architecture for all expert systems is the knowledge base, which stores the facts and rules on which the system's reasoning is based.[177] The knowledge revolution was also driven by the realization that even many simple AI applications would require enormous amounts of knowledge.
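As a rough illustration (not drawn from the cited sources), the following Python sketch shows the core idea of a knowledge base: a collection of facts and if–then rules, with a simple forward-chaining loop that keeps deriving new facts until nothing further can be concluded. The facts, rule names and conclusions here are toy examples invented purely for demonstration.

```python
# A minimal, hypothetical sketch of a knowledge base with forward chaining.
# The facts and rules are illustrative only, not taken from any expert system.
facts = {"has_fever", "has_cough"}
rules = [
    ({"has_fever", "has_cough"}, "possible_flu"),   # if both conditions hold,
    ({"possible_flu"}, "recommend_rest"),           # conclude the new fact
]

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)   # derive a new fact from the rule
            changed = True

print(facts)  # now also contains "possible_flu" and "recommend_rest"
```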

Sub-symbolic

By the 1980s, progress in symbolic AI seemed to stall and many believed that symbolic systems would never be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition. A number of researchers began to look into “sub-symbolic” approaches to specific AI problems.[25] Sub-symbolic methods manage to approach intelligence without specific representations of knowledge.

Embodied intelligence

This includes embodied, situated, behavior-based, and nouvelle AI. Researchers from the related field of robotics, such as Rodney Brooks, rejected symbolic AI and focused on the basic engineering problems that would allow robots to move and survive.[178] Their work revived the non-symbolic point of view of the early cybernetics researchers of the 1950s and reintroduced the use of control theory in AI. This coincided with the development of the embodied mind thesis in the related field of cognitive science: the idea that aspects of the body (such as movement, perception and visualization) are required for higher intelligence.

Within developmental robotics, developmental learning approaches are elaborated upon to allow robots to accumulate repertoires of novel skills through autonomous self-exploration, social interaction with human teachers, and the use of guidance mechanisms (active learning, maturation, motor synergies, etc.).[179][180][181][182]

Computational intelligence and soft computing

Interest in neural networks and "connectionism" was revived by David Rumelhart and others in the middle of the 1980s.[183] Artificial neural networks are an example of soft computing—they are solutions to problems which cannot be solved with complete logical certainty, and where an approximate solution is often sufficient. Other soft computing approaches to AI include fuzzy systems, grey system theory, evolutionary computation and many statistical tools. The application of soft computing to AI is studied collectively by the emerging discipline of computational intelligence.[184]
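As a hedged illustration of one of the soft-computing techniques named above, evolutionary computation, the sketch below runs a minimal (1+1) evolution strategy: a single candidate solution is repeatedly mutated, and whichever of parent and child scores better on the fitness function is kept. The objective function, mutation step size and iteration count are toy values chosen only for demonstration.

```python
# A minimal, hypothetical (1+1) evolution strategy; all parameters are toy values.
import random

def fitness(x):
    return -(x - 3.0) ** 2        # toy objective with its maximum at x = 3

x = 0.0                            # initial candidate solution
for _ in range(200):
    child = x + random.gauss(0.0, 0.5)   # mutate the current candidate
    if fitness(child) >= fitness(x):      # keep the better of parent and child
        x = child

print(round(x, 2))                 # approaches 3.0 after enough iterations
```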

Statistical

Much of traditional GOFAI got bogged down on ad hoc patches to symbolic computation that worked on their own toy models but failed to generalize to real-world results. However, around the 1990s, AI researchers adopted sophisticated mathematical tools, such as hidden Markov models (HMM), information theory, and normative Bayesian decision theory to compare or to unify competing architectures. The shared mathematical language permitted a high level of collaboration with more established fields (like mathematics, economics or operations research).[h] Compared with GOFAI, new “statistical learning” techniques such as HMM and neural networks were gaining higher levels of accuracy in many practical domains such as data mining, without necessarily acquiring a semantic understanding of the datasets. The increased successes with real-world data led to increasing emphasis on comparing different approaches against shared test data to see which approach performed best in a broader context than that provided by idiosyncratic toy models; AI research was becoming more scientific. Nowadays results of experiments are often rigorously measurable, and are sometimes (with difficulty) reproducible.[54][185] Different statistical learning techniques have different limitations; for example, basic HMM cannot model the infinite possible combinations of natural language.[186] Critics note that the shift from GOFAI to statistical learning is often also a shift away from explainable AI. In AGI research, some scholars caution against over-reliance on statistical learning, and argue that continuing research into GOFAI will still be necessary to attain general intelligence.[187][188]
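To make the statistical flavor concrete, the following toy sketch (an assumed example, not taken from the cited works) implements the forward algorithm for a two-state hidden Markov model: the probability of an observation sequence is computed by summing over all possible hidden-state paths. The start, transition and emission probabilities are made-up numbers for illustration only.

```python
# A minimal, hypothetical HMM forward-algorithm sketch with invented parameters.
import numpy as np

start = np.array([0.6, 0.4])            # P(initial hidden state)
trans = np.array([[0.7, 0.3],           # P(next state | current state)
                  [0.4, 0.6]])
emit  = np.array([[0.9, 0.1],           # P(observation | hidden state)
                  [0.2, 0.8]])

def sequence_likelihood(observations):
    """Return P(observations) by marginalizing over all hidden-state paths."""
    alpha = start * emit[:, observations[0]]      # initialization
    for obs in observations[1:]:
        alpha = (alpha @ trans) * emit[:, obs]    # forward recursion
    return alpha.sum()                            # termination

print(sequence_likelihood([0, 1, 0]))   # ~0.11 for this toy model
```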

Integrating the approaches

Intelligent agent paradigm: An intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. The simplest intelligent agents are programs that solve specific problems. More complicated agents include human beings and organizations of human beings (such as firms). The paradigm allows researchers to directly compare or even combine different approaches to isolated problems, by asking which agent is best at maximizing a given "goal function". An agent that solves a specific problem can use any approach that works—some agents are symbolic and logical, some are sub-symbolic artificial neural networks and others may use new approaches. The paradigm also gives researchers a common language to communicate with other fields—such as decision theory and economics—that also use concepts of abstract agents. Building a complete agent requires researchers to address realistic problems of integration; for example, because sensory systems give uncertain information about the environment, planning systems must be able to function in the presence of uncertainty. The intelligent agent paradigm became widely accepted during the 1990s.[189]

Agent architectures and cognitive architectures: Researchers have designed systems to build intelligent systems out of interacting intelligent agents in a multi-agent system.[190] A hierarchical control system provides a bridge between sub-symbolic AI at its lowest, reactive levels and traditional symbolic AI at its highest levels, where relaxed time constraints permit planning and world modeling.[191] Some cognitive architectures are custom-built to solve a narrow problem; others, such as Soar, are designed to mimic human cognition and to provide insight into general intelligence. Modern extensions of Soar are hybrid intelligent systems that include both symbolic and sub-symbolic components.[97][192]
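A minimal sketch of the agent abstraction described above, using invented names and a toy thermostat-like environment: the agent perceives a state (a temperature) and picks whichever available action scores highest under a given goal ("utility") function. The function and parameter names are assumptions made for illustration, not part of any standard library.

```python
# A minimal, hypothetical sketch of an intelligent agent maximizing a goal function.
def simple_reflex_agent(percept, utility, actions):
    """Pick the action with the highest estimated utility for this percept."""
    return max(actions, key=lambda a: utility(percept, a))

# Toy environment: the percept is a temperature; the goal rewards staying near 21.
EFFECT = {"heat": +1.0, "cool": -1.0, "idle": 0.0}

def utility(temperature, action):
    return -abs((temperature + EFFECT[action]) - 21.0)

temperature = 18.0
for _ in range(5):
    action = simple_reflex_agent(temperature, utility, ["heat", "cool", "idle"])
    temperature += EFFECT[action]          # apply the chosen action
    print(action, temperature)             # heats up to 21, then idles
```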

Tools

Main article: Computational tools for artificial intelligence

Applications

Main article: Applications of artificial intelligence

AI is relevant to any intellectual task.[193] Modern artificial intelligence techniques are pervasive[194] and are too numerous to list here. Frequently, when a technique reaches mainstream use, it is no longer considered artificial intelligence; this phenomenon is described as the AI effect.[195]

High-profile examples of AI include autonomous vehicles (such as drones and self-driving cars), medical diagnosis, creating art (such as poetry), proving mathematical theorems, playing games (such as chess or Go), search engines (such as Google Search), online assistants (such as Siri), image recognition in photographs, spam filtering, predicting flight delays,[196] prediction of judicial decisions,[197] targeting online advertisements,[193][198][199] and energy storage.[200]

With social media sites overtaking TV as a source for news for young people and news organizations increasingly reliant on social media platforms for generating distribution,[201] major publishers now use artificial intelligence (AI) technology to post stories more effectively and generate higher volumes of traffic.[202]

AI can also produce deepfakes, a content-altering technology. ZDNet reports that a deepfake "presents something that did not actually occur". Though 88% of Americans believe deepfakes can cause more harm than good, only 47% of them believe they could be targeted themselves. An election year also opens public discourse to the threat of falsified videos of politicians.[203]

Philosophy and ethics

Main articles: Philosophy of artificial intelligence and Ethics of artificial intelligence

There are three philosophical questions related to AI:[204]

  1. Whether artificial general intelligence is possible; whether a machine can solve any problem that a human being can solve using intelligence, or if there are hard limits to what a machine can accomplish.
  2. Whether intelligent machines are dangerous; how humans can ensure that machines behave ethically and that they are used ethically.
  3. Whether a machine can have a mind, consciousness and mental states in the same sense that human beings do; if a machine can be sentient, and thus deserve certain rights − and if a machine can intentionally cause harm.

The limits of artificial general intelligence

Main articles: Philosophy of artificial intelligence, Turing test, Physical symbol system hypothesis, Dreyfus' critique of artificial intelligence, The Emperor's New Mind, and AI effect

Alan Turing's "polite convention": One need not decide if a machine can "think"; one need only decide if a machine can act as intelligently as a human being. This approach to the philosophical problems associated with artificial intelligence forms the basis of the Turing test.[205]

The Dartmouth proposal: "Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it." This conjecture was printed in the proposal for the Dartmouth Conference of 1956.[206]

Newell and Simon's physical symbol system hypothesis: "A physical symbol system has the necessary and sufficient means of general intelligent action." Newell and Simon argue that intelligence consists of formal operations on symbols.[207] Hubert Dreyfus argues that, on the contrary, human expertise depends on unconscious instinct rather than conscious symbol manipulation, and on having a "feel" for the situation, rather than explicit symbolic knowledge. (See Dreyfus' critique of AI.)[i][209]

Gödelian arguments: Gödel himself,[210] John Lucas (in 1961) and Roger Penrose (in a more detailed argument from 1989 onwards) made highly technical arguments that human mathematicians can consistently see the truth of their own "Gödel statements" and therefore have computational abilities beyond that of mechanical Turing machines.[211] However, some people do not agree with the "Gödelian arguments".[212][213][214]

The artificial brain argument: An argument asserting that the brain can be simulated by machines and, because brains exhibit intelligence, these simulated brains must also exhibit intelligence − ergo, machines can be intelligent. Hans Moravec, Ray Kurzweil and others have argued that it is technologically feasible to copy the brain directly into hardware and software, and that such a simulation will be essentially identical to the original.[165]

The AI effect: A hypothesis claiming that machines are already intelligent, but observers have failed to recognize it. For example, when Deep Blue beat Garry Kasparov in chess, the machine could be described as exhibiting intelligence. However, onlookers commonly discount the behavior of an artificial intelligence program by arguing that it is not "real" intelligence, with "real" intelligence being in effect defined as whatever behavior machines cannot do.

Ethical machines

Machines with intelligence have the potential to use their intelligence to prevent harm and minimize risks; they may have the ability to use ethical reasoning to better choose their actions in the world. As such, there is a need for policymakers to devise policies for, and to regulate, artificial intelligence and robotics.[215] Research in this area includes machine ethics, artificial moral agents and friendly AI, and there is ongoing discussion of building a human-rights framework for AI.[216]

Joseph Weizenbaum in Computer Power and Human Reason wrote that AI applications cannot, by definition, successfully simulate genuine human empathy and that the use of AI technology in fields such as customer service or psychotherapy[j] was deeply misguided. Weizenbaum was also bothered that AI researchers (and some philosophers) were willing to view the human mind as nothing more than a computer program (a position now known as computationalism). To Weizenbaum these points suggest that AI research devalues human life.[218]

Artificial moral agents

Wendell Wallach introduced the concept of artificial moral agents (AMA) in his book Moral Machines.[219] For Wallach, AMAs have become a part of the research landscape of artificial intelligence, guided by two central questions which he identifies as "Does Humanity Want Computers Making Moral Decisions"[220] and "Can (Ro)bots Really Be Moral".[221] For Wallach, the question is not whether machines can demonstrate the equivalent of moral behavior, but rather what constraints society may place on the development of AMAs.[222]

Machine ethics

Main article: Machine ethics

The field of machine ethics is concerned with giving machines ethical principles, or a procedure for discovering a way to resolve the ethical dilemmas they might encounter, enabling them to function in an ethically responsible manner through their own ethical decision making.[223] The field was delineated in the AAAI Fall 2005 Symposium on Machine Ethics: “Past research concerning the relationship between technology and ethics has largely focused on responsible and irresponsible use of technology by human beings, with a few people being interested in how human beings ought to treat machines. In all cases, only human beings have engaged in ethical reasoning. The time has come for adding an ethical dimension to at least some machines. Recognition of the ethical ramifications of behavior involving machines, as well as recent and potential developments in machine autonomy, necessitate this. In contrast to computer hacking, software property issues, privacy issues and other topics normally ascribed to computer ethics, machine ethics is concerned with the behavior of machines towards human users and other machines. Research in machine ethics is key to alleviating concerns with autonomous systems—it could be argued that the notion of autonomous machines without such a dimension is at the root of all fear concerning machine intelligence. Further, investigation of machine ethics could enable the discovery of problems with current ethical theories, advancing our thinking about Ethics.”[224] Machine ethics is sometimes referred to as machine morality, computational ethics or computational morality. A variety of perspectives of this nascent field can be found in the collected edition “Machine Ethics”[223] that stems from the AAAI Fall 2005 Symposium on Machine Ethics.[224]

Malevolent and friendly AI

Main article: Friendly artificial intelligence

Political scientist Charles T. Rubin believes that AI can be neither designed nor guaranteed to be benevolent.[225] He argues that “any sufficiently advanced benevolence may be indistinguishable from malevolence.” Humans should not assume machines or robots would treat us favorably because there is no a priori reason to believe that they would be sympathetic to our system of morality, which has evolved along with our particular biology (which AIs would not share). Hyper-intelligent software may not necessarily decide to support the continued existence of humanity and would be extremely difficult to stop. This topic has also recently begun to be discussed in academic publications as a real source of risks to civilization, humans, and planet Earth.

One proposal to deal with this is to ensure that the first generally intelligent AI is ‘Friendly AI‘ and will be able to control subsequently developed AIs. Some question whether this kind of check could actually remain in place.

Leading AI researcher Rodney Brooks writes, “I think it is a mistake to be worrying about us developing malevolent AI anytime in the next few hundred years. I think the worry stems from a fundamental error in not distinguishing the difference between the very real recent advances in a particular aspect of AI and the enormity and complexity of building sentient volitional intelligence.”[226]

Lethal autonomous weapons are of concern. More than 50 countries are currently researching battlefield robots, including the United States, China, Russia, and the United Kingdom. Many people concerned about risk from superintelligent AI also want to limit the use of artificial soldiers and drones.[227]

Machine consciousness, sentience and mind

Main article: Artificial consciousness

If an AI system replicates all key aspects of human intelligence, will that system also be sentient—will it have a mind which has conscious experiences? This question is closely related to the philosophical problem as to the nature of human consciousness, generally referred to as the hard problem of consciousness.

Consciousness

Main articles: Hard problem of consciousness and Theory of mind

David Chalmers identified two problems in understanding the mind, which he named the "hard" and "easy" problems of consciousness.[228] The easy problem is understanding how the brain processes signals, makes plans and controls behavior. The hard problem is explaining how this feels or why it should feel like anything at all. Human information processing is easy to explain; human subjective experience, however, is difficult to explain.

For example, consider what happens when a person is shown a color swatch and identifies it, saying “it’s red”. The easy problem only requires understanding the machinery in the brain that makes it possible for a person to know that the color swatch is red. The hard problem is that people also know something else—they also know what red looks like. (Consider that a person born blind can know that something is red without knowing what red looks like.)[k] Everyone knows subjective experience exists, because they do it every day (e.g., all sighted people know what red looks like). The hard problem is explaining how the brain creates it, why it exists, and how it is different from knowledge and other aspects of the brain.

Computationalism and functionalism

Main articles: Computationalism and Functionalism (philosophy of mind)

Computationalism is the position in the philosophy of mind that the human mind or the human brain (or both) is an information processing system and that thinking is a form of computing.[229] Computationalism argues that the relationship between mind and body is similar or identical to the relationship between software and hardware and thus may be a solution to the mind-body problem. This philosophical position was inspired by the work of AI researchers and cognitive scientists in the 1960s and was originally proposed by philosophers Jerry Fodor and Hilary Putnam.

Strong AI hypothesis

Main article: Chinese room

The philosophical position that John Searle has named “strong AI” states: “The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.”[l] Searle counters this assertion with his Chinese room argument, which asks us to look inside the computer and try to find where the “mind” might be.[231]

Robot rights

Main article: Robot rights

If a machine can be created that has intelligence, could it also feel? If it can feel, does it have the same rights as a human? This issue, now known as "robot rights", is currently being considered by, for example, California's Institute for the Future, although many critics believe that the discussion is premature.[232][233] Some critics of transhumanism argue that any hypothetical robot rights would lie on a spectrum with animal rights and human rights.[234] The subject is discussed in depth in the 2010 documentary film Plug & Pray,[235] and in many science fiction works such as Star Trek: The Next Generation, with the character of Commander Data, who fought being disassembled for research and wanted to "become human", and the sentient holograms in Star Trek: Voyager.

Superintelligence

Main article: Superintelligence

Are there limits to how intelligent machines—or human-machine hybrids—can be? A superintelligence, hyperintelligence, or superhuman intelligence is a hypothetical agent that would possess intelligence far surpassing that of the brightest and most gifted human mind. Superintelligence may also refer to the form or degree of intelligence possessed by such an agent.[158]

Technological singularity

Main articles: Technological singularity and Moore’s law

If research into Strong AI produced sufficiently intelligent software, it might be able to reprogram and improve itself. The improved software would be even better at improving itself, leading to recursive self-improvement.[236] The new intelligence could thus increase exponentially and dramatically surpass humans. Science fiction writer Vernor Vinge named this scenario the "singularity".[237] The technological singularity is a scenario in which accelerating progress in technologies causes a runaway effect wherein artificial intelligence exceeds human intellectual capacity and control, radically changing or even ending civilization. Because the capabilities of such an intelligence may be impossible to comprehend, the technological singularity is an occurrence beyond which events are unpredictable or even unfathomable.[237][158]

Inventor and futurist Ray Kurzweil has used Moore's law (which describes the relentless exponential improvement in digital technology) to calculate that desktop computers will have the same processing power as human brains by the year 2029, and predicts that the singularity will occur in 2045.[237]

Transhumanism

Main article: Transhumanism

Robot designer Hans Moravec, cyberneticist Kevin Warwick, and inventor Ray Kurzweil have predicted that humans and machines will merge in the future into cyborgs that are more capable and powerful than either.[238] This idea, called transhumanism, has roots in the writings of Aldous Huxley and Robert Ettinger.

Edward Fredkin argues that “artificial intelligence is the next stage in evolution”, an idea first proposed by Samuel Butler‘s “Darwin among the Machines” as far back as 1863, and expanded upon by George Dyson in his book of the same name in 1998.[239]

Impact

The long-term economic effects of AI are uncertain. A survey of economists showed disagreement about whether the increasing use of robots and AI will cause a substantial increase in long-term unemployment, but they generally agree that it could be a net benefit if productivity gains are redistributed.[240] A 2017 study by PricewaterhouseCoopers projects that the People's Republic of China will gain the most economically from AI, with a boost worth 26.1% of GDP by 2030.[241] A February 2020 European Union white paper on artificial intelligence advocated the adoption of artificial intelligence for its economic benefits, including "improving healthcare (e.g. making diagnosis more precise, enabling better prevention of diseases), increasing the efficiency of farming, contributing to climate change mitigation and adaptation, [and] improving the efficiency of production systems through predictive maintenance", while acknowledging potential risks.[194]

The relationship between automation and employment is complicated. While automation eliminates old jobs, it also creates new jobs through micro-economic and macro-economic effects.[242] Unlike previous waves of automation, many middle-class jobs may be eliminated by artificial intelligence; The Economist states that “the worry that AI could do to white-collar jobs what steam power did to blue-collar ones during the Industrial Revolution” is “worth taking seriously”.[243] Subjective estimates of the risk vary widely; for example, Michael Osborne and Carl Benedikt Frey estimate 47% of U.S. jobs are at “high risk” of potential automation, while an OECD report classifies only 9% of U.S. jobs as “high risk”.[244][245][246] Jobs at extreme risk range from paralegals to fast food cooks, while job demand is likely to increase for care-related professions ranging from personal healthcare to the clergy.[247] Author Martin Ford and others go further and argue that many jobs are routine, repetitive and (to an AI) predictable; Ford warns that these jobs may be automated in the next couple of decades, and that many of the new jobs may not be “accessible to people with average capability”, even with retraining. Economists point out that in the past technology has tended to increase rather than reduce total employment, but acknowledge that “we’re in uncharted territory” with AI.[35]

The potential negative effects of AI and automation were a major issue for Andrew Yang‘s 2020 presidential campaign in the United States.[248] Irakli Beridze, Head of the Centre for Artificial Intelligence and Robotics at UNICRI, United Nations, has expressed that “I think the dangerous applications for AI, from my point of view, would be criminals or large terrorist organizations using it to disrupt large processes or simply do pure harm. [Terrorists could cause harm] via digital warfare, or it could be a combination of robotics, drones, with AI and other things as well that could be really dangerous. And, of course, other risks come from things like job losses. If we have massive numbers of people losing jobs and don’t find a solution, it will be extremely dangerous. Things like lethal autonomous weapons systems should be properly governed — otherwise there’s massive potential of misuse.”[249]

Risks of narrow AI

Main article: Workplace impact of artificial intelligence

Widespread use of artificial intelligence could have unintended consequences that are dangerous or undesirable. Scientists from the Future of Life Institute, among others, described some short-term research goals to see how AI influences the economy, the laws and ethics that are involved with AI and how to minimize AI security risks. In the long-term, the scientists have proposed to continue optimizing function while minimizing possible security risks that come along with new technologies.[250]

Some are concerned about algorithmic bias, that AI programs may unintentionally become biased after processing data that exhibits bias.[251] Algorithms already have numerous applications in legal systems. An example of this is COMPAS, a commercial program widely used by U.S. courts to assess the likelihood of a defendant becoming a recidivistProPublica claims that the average COMPAS-assigned recidivism risk level of black defendants is significantly higher than the average COMPAS-assigned risk level of white defendants.[252]

Risks of general AI

Main article: Existential risk from artificial general intelligence

Physicist Stephen Hawking, Microsoft founder Bill Gates, history professor Yuval Noah Harari, and SpaceX founder Elon Musk have expressed concerns about the possibility that AI could evolve to the point that humans could not control it, with Hawking theorizing that this could "spell the end of the human race".[253][254][255][256]

The development of full artificial intelligence could spell the end of the human race. Once humans develop artificial intelligence, it will take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.— Stephen Hawking[257]

In his book Superintelligence, philosopher Nick Bostrom provides an argument that artificial intelligence will pose a threat to humankind. He argues that sufficiently intelligent AI, if it chooses actions based on achieving some goal, will exhibit convergent behavior such as acquiring resources or protecting itself from being shut down. If this AI’s goals do not fully reflect humanity’s—one example is an AI told to compute as many digits of pi as possible—it might harm humanity in order to acquire more resources or prevent itself from being shut down, ultimately to better achieve its goal. Bostrom also emphasizes the difficulty of fully conveying humanity’s values to an advanced AI. He uses the hypothetical example of giving an AI the goal to make humans smile to illustrate a misguided attempt. If the AI in that scenario were to become superintelligent, Bostrom argues, it may resort to methods that most humans would find horrifying, such as inserting “electrodes into the facial muscles of humans to cause constant, beaming grins” because that would be an efficient way to achieve its goal of making humans smile.[258] In his book Human Compatible, AI researcher Stuart J. Russell echoes some of Bostrom’s concerns while also proposing an approach to developing provably beneficial machines focused on uncertainty and deference to humans,[259]:173 possibly involving inverse reinforcement learning.[259]:191–193

Concern over risk from artificial intelligence has led to some high-profile donations and investments. A group of prominent tech titans including Peter Thiel, Amazon Web Services and Musk have committed $1 billion to OpenAI, a nonprofit company aimed at championing responsible AI development.[260] The opinion of experts within the field of artificial intelligence is mixed, with sizable fractions both concerned and unconcerned by risk from eventual superhumanly-capable AI.[261] Other technology industry leaders believe that artificial intelligence is helpful in its current form and will continue to assist humans. Oracle CEO Mark Hurd has stated that AI “will actually create more jobs, not less jobs” as humans will be needed to manage AI systems.[262] Facebook CEO Mark Zuckerberg believes AI will “unlock a huge amount of positive things,” such as curing disease and increasing the safety of autonomous cars.[263] In January 2015, Musk donated $10 million to the Future of Life Institute to fund research on understanding AI decision making. The goal of the institute is to “grow wisdom with which we manage” the growing power of technology. Musk also funds companies developing artificial intelligence such as DeepMind and Vicarious to “just keep an eye on what’s going on with artificial intelligence.[264] I think there is potentially a dangerous outcome there.”[265][266]

For the danger of uncontrolled advanced AI to be realized, the hypothetical AI would have to overpower or out-think all of humanity, which a minority of experts argue is a possibility far enough in the future to not be worth researching.[267][268] Other counterarguments revolve around humans being either intrinsically or convergently valuable from the perspective of an artificial intelligence.[269]

Regulation

Main articles: Regulation of artificial intelligence and Regulation of algorithms

The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI);[270][271] it is therefore related to the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally, including in the European Union.[272] Regulation is considered necessary to both encourage AI and manage associated risks.[273][274] Regulation of AI through mechanisms such as review boards can also be seen as social means to approach the AI control problem.[275]

In fiction

Main article: Artificial intelligence in fiction

The word "robot" itself was coined by Karel Čapek in his 1921 play R.U.R., the title standing for "Rossum's Universal Robots".

Thought-capable artificial beings have appeared as storytelling devices since antiquity,[37] and have been a persistent theme in science fiction.

A common trope in these works began with Mary Shelley‘s Frankenstein, where a human creation becomes a threat to its masters. This includes such works as Arthur C. Clarke’s and Stanley Kubrick’s 2001: A Space Odyssey (both 1968), with HAL 9000, the murderous computer in charge of the Discovery One spaceship, as well as The Terminator (1984) and The Matrix (1999). In contrast, the rare loyal robots such as Gort from The Day the Earth Stood Still (1951) and Bishop from Aliens (1986) are less prominent in popular culture.[276]

Isaac Asimov introduced the Three Laws of Robotics in many books and stories, most notably the “Multivac” series about a super-intelligent computer of the same name. Asimov’s laws are often brought up during lay discussions of machine ethics;[277] while almost all artificial intelligence researchers are familiar with Asimov’s laws through popular culture, they generally consider the laws useless for many reasons, one of which is their ambiguity.[278]

Transhumanism (the merging of humans and machines) is explored in the manga Ghost in the Shell and the science-fiction series Dune. In the 1980s, artist Hajime Sorayama's Sexy Robots series was painted and published in Japan, depicting the actual organic human form with lifelike muscular metallic skins; the later book The Gynoids was used by or influenced movie makers, including George Lucas, and other creatives. Sorayama never considered these organic robots to be a real part of nature but always an unnatural product of the human mind, a fantasy existing in the mind even when realized in actual form.

Several works use AI to force us to confront the fundamental question of what makes us human, showing us artificial beings that have the ability to feel, and thus to suffer. This appears in Karel Čapek‘s R.U.R., the films A.I. Artificial Intelligence and Ex Machina, as well as the novel Do Androids Dream of Electric Sheep?, by Philip K. Dick. Dick considers the idea that our understanding of human subjectivity is altered by technology created with artificial intelligence.[279]

See also

Explanatory notes

  1. ^ The act of doling out rewards can itself be formalized or automated into a “reward function“.
  2. ^ Terminology varies; see algorithm characterizations.
  3. ^ Adversarial vulnerabilities can also result in nonlinear systems, or from non-pattern perturbations. Some systems are so brittle that changing a single adversarial pixel predictably induces misclassification.
  4. ^ Alan Turing discussed the centrality of learning as early as 1950, in his classic paper “Computing Machinery and Intelligence“.[125] In 1956, at the original Dartmouth AI summer conference, Ray Solomonoff wrote a report on unsupervised probabilistic machine learning: “An Inductive Inference Machine”.[126]
  5. ^ This is a form of Tom Mitchell's widely quoted definition of machine learning: "A computer program is said to learn from an experience E with respect to some task T and some performance measure P if its performance on T as measured by P improves with experience E."
  6. ^ Nils Nilsson writes: “Simply put, there is wide disagreement in the field about what AI is all about.”[167]
  7. ^ The most dramatic case of sub-symbolic AI being pushed into the background was the devastating critique of perceptrons by Marvin Minsky and Seymour Papert in 1969. See History of AIAI winter, or Frank Rosenblatt.[citation needed]
  8. ^ While such a “victory of the neats” may be a consequence of the field becoming more mature, AIMA states that in practice both neat and scruffy approaches continue to be necessary in AI research.
  9. ^ Dreyfus criticized the necessary condition of the physical symbol system hypothesis, which he called the "psychological assumption": "The mind can be viewed as a device operating on bits of information according to formal rules."[208]
  10. ^ In the early 1970s, Kenneth Colby presented a version of Weizenbaum's ELIZA known as DOCTOR which he promoted as a serious therapeutic tool.[217]
  11. ^ This is based on Mary's Room, a thought experiment first proposed by Frank Jackson in 1982.
  12. ^ This version is from Searle (1999), and is also quoted in Dennett 1991, p. 435. Searle’s original formulation was “The appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states.”[230] Strong AI is defined similarly by Russell & Norvig (2003, p. 947): “The assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the ‘weak AI’ hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulating thinking) is called the ‘strong AI’ hypothesis.”

References

  1. ^ Poole, Mackworth & Goebel 1998p. 1.
  2. ^ Russell & Norvig 2003, p. 55.
  3. a b c Definition of AI as the study of intelligent agents:
  4. ^ Russell & Norvig 2009, p. 2.
  5. ^ McCorduck 2004, p. 204
  6. ^ Maloof, Mark. “Artificial Intelligence: An Introduction, p. 37” (PDF). georgetown.eduArchived (PDF) from the original on 25 August 2018.
  7. ^ “How AI Is Getting Groundbreaking Changes In Talent Management And HR Tech”. Hackernoon. Archived from the original on 11 September 2019. Retrieved 14 February 2020.
  8. ^ Schank, Roger C. (1991). “Where’s the AI”. AI magazine. Vol. 12 no. 4. p. 38.
  9. ^ Russell & Norvig 2009.
  10. a b “AlphaGo – Google DeepMind”Archived from the original on 10 March 2016.
  11. a b Bowling, Michael; Burch, Neil; Johanson, Michael; Tammelin, Oskari (9 January 2015). “Heads-up limit hold’em poker is solved”Science347 (6218): 145–149. doi:10.1126/science.1259433ISSN 0036-8075PMID 25574016.
  12. ^ Allen, Gregory (April 2020). “Department of Defense Joint AI Center – Understanding AI Technology” (PDF). AI.mil – The official site of the Department of Defense Joint Artificial Intelligence CenterArchived (PDF) from the original on 21 April 2020. Retrieved 25 April 2020.
  13. a b Optimism of early AI: * Herbert Simon quote: Simon 1965, p. 96 quoted in Crevier 1993, p. 109. * Marvin Minsky quote: Minsky 1967, p. 2 quoted in Crevier 1993, p. 109.
  14. a b c Boom of the 1980s: rise of expert systemsFifth Generation ProjectAlveyMCCSCI: * McCorduck 2004, pp. 426–441 * Crevier 1993, pp. 161–162,197–203, 211, 240 * Russell & Norvig 2003, p. 24 * NRC 1999, pp. 210–211 * Newquist 1994, pp. 235–248
  15. a b First AI WinterMansfield AmendmentLighthill report * Crevier 1993, pp. 115–117 * Russell & Norvig 2003, p. 22 * NRC 1999, pp. 212–213 * Howe 1994 * Newquist 1994, pp. 189–201
  16. a b Second AI winter: * McCorduck 2004, pp. 430–435 * Crevier 1993, pp. 209–210 * NRC 1999, pp. 214–216 * Newquist 1994, pp. 301–318
  17. a b c AI becomes hugely successful in the early 21st century * Clark 2015b
  18. ^ Haenlein, Michael; Kaplan, Andreas (2019). “A Brief History of Artificial Intelligence: On the Past, Present, and Future of Artificial Intelligence”California Management Review61 (4): 5–14. doi:10.1177/0008125619864925ISSN 0008-1256S2CID 199866730.
  19. a b Pamela McCorduck (2004, p. 424) writes of "the rough shattering of AI in subfields—vision, natural language, decision theory, genetic algorithms, robotics … and these with their own sub-subfields—that would hardly have anything to say to each other."
  20. a b c This list of intelligent traits is based on the topics covered by the major AI textbooks, including: * Russell & Norvig 2003 * Luger & Stubblefield 2004 * Poole, Mackworth & Goebel 1998 * Nilsson 1998
  21. ^ Kolata 1982.
  22. ^ Maker 2006.
  23. a b c Biological intelligence vs. intelligence in general:
    • Russell & Norvig 2003, pp. 2–3, who make the analogy with aeronautical engineering.
    • McCorduck 2004, pp. 100–101, who writes that there are “two major branches of artificial intelligence: one aimed at producing intelligent behavior regardless of how it was accomplished, and the other aimed at modeling intelligent processes found in nature, particularly human ones.”
    • Kolata 1982, a paper in Science, which describes McCarthy’s indifference to biological models. Kolata quotes McCarthy as writing: “This is AI, so we don’t care if it’s psychologically real”.[21] McCarthy recently reiterated his position at the AI@50 conference where he said “Artificial intelligence is not, by definition, simulation of human intelligence”.[22]
  24. a b c Neats vs. scruffies: * McCorduck 2004, pp. 421–424, 486–489 * Crevier 1993, p. 168 * Nilsson 1983, pp. 10–11
  25. a b Symbolic vs. sub-symbolic AI: * Nilsson (1998, p. 7), who uses the term “sub-symbolic”.
  26. a b General intelligence (strong AI) is discussed in popular introductions to AI: * Kurzweil 1999 and Kurzweil 2005
  27. ^ See the Dartmouth proposal, under Philosophy, below.
  28. ^ McCorduck 2004, p. 34.
  29. ^ McCorduck 2004, p. xviii.
  30. ^ McCorduck 2004, p. 3.
  31. ^ McCorduck 2004, pp. 340–400.
  32. a b This is a central idea of Pamela McCorduck‘s Machines Who Think. She writes:
    • “I like to think of artificial intelligence as the scientific apotheosis of a venerable cultural tradition.”[28]
    • “Artificial intelligence in one form or another is an idea that has pervaded Western intellectual history, a dream in urgent need of being realized.”[29]
    • “Our history is full of attempts—nutty, eerie, comical, earnest, legendary and real—to make artificial intelligences, to reproduce what is the essential us—bypassing the ordinary means. Back and forth between myth and reality, our imaginations supplying what our workshops couldn’t, we have engaged for a long time in this odd form of self-reproduction.”[30]
    She traces the desire back to its Hellenistic roots and calls it the urge to “forge the Gods.”[31]
  33. ^ “Stephen Hawking believes AI could be mankind’s last accomplishment”BetaNews. 21 October 2016. Archived from the original on 28 August 2017.
  34. ^ Lombardo P, Boehm I, Nairz K (2020). “RadioComics – Santa Claus and the future of radiology”Eur J Radiol122 (1): 108771. doi:10.1016/j.ejrad.2019.108771PMID 31835078.
  35. a b Ford, Martin; Colvin, Geoff (6 September 2015). “Will robots create more jobs than they destroy?”The GuardianArchived from the original on 16 June 2018. Retrieved 13 January 2018.
  36. a b AI applications widely used behind the scenes: * Russell & Norvig 2003, p. 28 * Kurzweil 2005, p. 265 * NRC 1999, pp. 216–222 * Newquist 1994, pp. 189–201
  37. a b AI in myth: * McCorduck 2004, pp. 4–5 * Russell & Norvig 2003, p. 939
  38. ^ AI in early science fiction. * McCorduck 2004, pp. 17–25
  39. ^ Formal reasoning: * Berlinski, David (2000). The Advent of the Algorithm. Harcourt Books. ISBN 978-0-15-601391-8OCLC 46890682Archived from the original on 26 July 2020. Retrieved 22 August 2020.
  40. ^ Turing, Alan (1948), “Machine Intelligence”, in Copeland, B. Jack (ed.), The Essential Turing: The ideas that gave birth to the computer age, Oxford: Oxford University Press, p. 412, ISBN 978-0-19-825080-7
  41. ^ Russell & Norvig 2009, p. 16.
  42. ^ Dartmouth conference: * McCorduck 2004, pp. 111–136 * Crevier 1993, pp. 47–49, who writes “the conference is generally recognized as the official birthdate of the new science.” * Russell & Norvig 2003, p. 17, who call the conference “the birth of artificial intelligence.” * NRC 1999, pp. 200–201
  43. ^ McCarthy, John (1988). “Review of The Question of Artificial Intelligence“. Annals of the History of Computing10 (3): 224–229., collected in McCarthy, John (1996). “10. Review of The Question of Artificial Intelligence“. Defending AI Research: A Collection of Essays and Reviews. CSLI., p. 73, “[O]ne of the reasons for inventing the term “artificial intelligence” was to escape association with “cybernetics”. Its concentration on analog feedback seemed misguided, and I wished to avoid having either to accept Norbert (not Robert) Wiener as a guru or having to argue with him.”
  44. ^ Hegemony of the Dartmouth conference attendees: * Russell & Norvig 2003, p. 17, who write “for the next 20 years the field would be dominated by these people and their students.” * McCorduck 2004, pp. 129–130
  45. ^ Russell & Norvig 2003, p. 18: “it was astonishing whenever a computer did anything kind of smartish”
  46. ^ Schaeffer J. (2009) Didn’t Samuel Solve That Game?. In: One Jump Ahead. Springer, Boston, MA
  47. ^ Samuel, A. L. (July 1959). “Some Studies in Machine Learning Using the Game of Checkers”. IBM Journal of Research and Development3 (3): 210–229. CiteSeerX 10.1.1.368.2254doi:10.1147/rd.33.0210.
  48. ^ “Golden years” of AI (successful symbolic reasoning programs 1956–1973): * McCorduck 2004, pp. 243–252 * Crevier 1993, pp. 52–107 * Moravec 1988, p. 9 * Russell & Norvig 2003, pp. 18–21 The programs described are Arthur Samuel‘s checkers program for the IBM 701Daniel Bobrow‘s STUDENTNewell and Simon‘s Logic Theorist and Terry Winograd‘s SHRDLU.
  49. ^ DARPA pours money into undirected pure research into AI during the 1960s: * McCorduck 2004, p. 131 * Crevier 1993, pp. 51, 64–65 * NRC 1999, pp. 204–205
  50. ^ AI in England: * Howe 1994
  51. ^ Lighthill 1973.
  52. a b Expert systems: * ACM 1998, I.2.1 * Russell & Norvig 2003, pp. 22–24 * Luger & Stubblefield 2004, pp. 227–331 * Nilsson 1998, chpt. 17.4 * McCorduck 2004, pp. 327–335, 434–435 * Crevier 1993, pp. 145–62, 197–203 * Newquist 1994, pp. 155–183
  53. ^ Mead, Carver A.; Ismail, Mohammed (8 May 1989). Analog VLSI Implementation of Neural Systems (PDF). The Kluwer International Series in Engineering and Computer Science. 80. Norwell, MA: Kluwer Academic Publishersdoi:10.1007/978-1-4613-1639-8ISBN 978-1-4613-1639-8. Archived from the original (PDF) on 6 November 2019. Retrieved 24 January 2020.
  54. a b Formal methods are now preferred (“Victory of the neats“): * Russell & Norvig 2003, pp. 25–26 * McCorduck 2004, pp. 486–487
  55. ^ McCorduck 2004, pp. 480–483.
  56. ^ Markoff 2011.
  57. ^ “Ask the AI experts: What’s driving today’s progress in AI?”McKinsey & CompanyArchived from the original on 13 April 2018. Retrieved 13 April 2018.
  58. ^ Fairhead, Harry (26 March 2011) [Update 30 March 2011]. “Kinect’s AI breakthrough explained”I ProgrammerArchived from the original on 1 February 2016.
  59. ^ Rowinski, Dan (15 January 2013). “Virtual Personal Assistants & The Future Of Your Smartphone [Infographic]”ReadWriteArchived from the original on 22 December 2015.
  60. ^ “Artificial intelligence: Google’s AlphaGo beats Go master Lee Se-dol”BBC News. 12 March 2016. Archived from the original on 26 August 2016. Retrieved 1 October 2016.
  61. ^ Metz, Cade (27 May 2017). “After Win in China, AlphaGo’s Designers Explore New AI”WiredArchived from the original on 2 June 2017.
  62. ^ “World’s Go Player Ratings”. May 2017. Archived from the original on 1 April 2017.
  63. ^ “柯洁迎19岁生日 雄踞人类世界排名第一已两年” (in Chinese). May 2017. Archived from the original on 11 August 2017.
  64. ^ “MuZero: Mastering Go, chess, shogi and Atari without rules”Deepmind. Retrieved 1 March 2021.
  65. ^ Steven Borowiec; Tracey Lien (12 March 2016). “AlphaGo beats human Go champ in milestone for artificial intelligence”Los Angeles Times. Retrieved 13 March2016.
  66. ^ Silver, David; Hubert, Thomas; Schrittwieser, Julian; Antonoglou, Ioannis; Lai, Matthew; Guez, Arthur; Lanctot, Marc; Sifre, Laurent; Kumaran, Dharshan; Graepel, Thore; Lillicrap, Timothy; Simonyan, Karen; Hassabis, Demis (7 December 2018). “A general reinforcement learning algorithm that masters chess, shogi, and go through self-play”Science362 (6419): 1140–1144. Bibcode:2018Sci…362.1140Sdoi:10.1126/science.aar6404PMID 30523106.
  67. ^ Schrittwieser, Julian; Antonoglou, Ioannis; Hubert, Thomas; Simonyan, Karen; Sifre, Laurent; Schmitt, Simon; Guez, Arthur; Lockhart, Edward; Hassabis, Demis; Graepel, Thore; Lillicrap, Timothy (23 December 2020). “Mastering Atari, Go, chess and shogi by planning with a learned model”Nature588 (7839): 604–609. arXiv:1911.08265doi:10.1038/s41586-020-03051-4ISSN 1476-4687.
  68. ^ Tung, Liam. “Google’s DeepMind artificial intelligence aces Atari gaming challenge”ZDNet. Retrieved 1 March 2021.
  69. ^ Solly, Meilan. “This Poker-Playing A.I. Knows When to Hold ‘Em and When to Fold ‘Em”SmithsonianPluribus has bested poker pros in a series of six-player no-limit Texas Hold’em games, reaching a milestone in artificial intelligence research. It is the first bot to beat humans in a complex multiplayer competition.
  70. a b Clark 2015b. “After a half-decade of quiet breakthroughs in artificial intelligence, 2015 has been a landmark year. Computers are smarter and learning faster than ever.”
  71. ^ “Reshaping Business With Artificial Intelligence”MIT Sloan Management ReviewArchived from the original on 19 May 2018. Retrieved 2 May 2018.
  72. ^ Lorica, Ben (18 December 2017). “The state of AI adoption”O’Reilly MediaArchived from the original on 2 May 2018. Retrieved 2 May 2018.
  73. ^ Allen, Gregory (6 February 2019). “Understanding China’s AI Strategy”Center for a New American SecurityArchived from the original on 17 March 2019.
  74. ^ “Review | How two AI superpowers – the U.S. and China – battle for supremacy in the field”The Washington Post. 2 November 2018. Archived from the original on 4 November 2018. Retrieved 4 November 2018.
  75. ^ Anadiotis, George (1 October 2020). “The state of AI in 2020: Democratization, industrialization, and the way to artificial general intelligence”ZDNet. Retrieved 1 March 2021.
  76. ^ Heath, Nick (11 December 2020). “What is AI? Everything you need to know about Artificial Intelligence”ZDNet. Retrieved 1 March 2021.
  77. ^ Kaplan, Andreas; Haenlein, Michael (1 January 2019). “Siri, Siri, in my hand: Who’s the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence”. Business Horizons62 (1): 15–25. doi:10.1016/j.bushor.2018.08.004.
  78. ^ Domingos 2015, Chapter 5.
  79. ^ Domingos 2015, Chapter 7.
  80. ^ Lindenbaum, M., Markovitch, S., & Rusakov, D. (2004). Selective sampling for nearest neighbor classifiers. Machine learning, 54(2), 125–152.
  81. ^ Domingos 2015, Chapter 1.
  82. a b Intractability and efficiency and the combinatorial explosion: * Russell & Norvig 2003, pp. 9, 21–22
  83. ^ Domingos 2015, Chapter 2, Chapter 3.
  84. ^ Hart, P. E.; Nilsson, N. J.; Raphael, B. (1972). “Correction to “A Formal Basis for the Heuristic Determination of Minimum Cost Paths””. SIGART Newsletter (37): 28–29. doi:10.1145/1056777.1056779S2CID 6386648.
  85. ^ Domingos 2015, Chapter 2, Chapter 4, Chapter 6.
  86. ^ “Can neural network computers learn from experience, and if so, could they ever become what we would call ‘smart’?”Scientific American. 2018. Archived from the original on 25 March 2018. Retrieved 24 March 2018.
  87. ^ Domingos 2015, Chapter 6, Chapter 7.
  88. ^ Domingos 2015, p. 286.
  89. ^ “Single pixel change fools AI programs”BBC News. 3 November 2017. Archived from the original on 22 March 2018. Retrieved 12 March 2018.
  90. ^ “AI Has a Hallucination Problem That’s Proving Tough to Fix”WIRED. 2018. Archived from the original on 12 March 2018. Retrieved 12 March 2018.
  91. ^ “Cultivating Common Sense | DiscoverMagazine.com”Discover Magazine. 2017. Archived from the original on 25 March 2018. Retrieved 24 March 2018.
  92. ^ Davis, Ernest; Marcus, Gary (24 August 2015). “Commonsense reasoning and commonsense knowledge in artificial intelligence”Communications of the ACM58 (9): 92–103. doi:10.1145/2701413S2CID 13583137Archived from the original on 22 August 2020. Retrieved 6 April 2020.
  93. ^ Winograd, Terry (January 1972). “Understanding natural language”. Cognitive Psychology3 (1): 1–191. doi:10.1016/0010-0285(72)90002-3.
  94. ^ “Don’t worry: Autonomous cars aren’t coming tomorrow (or next year)”Autoweek. 2016. Archived from the original on 25 March 2018. Retrieved 24 March 2018.
  95. ^ Knight, Will (2017). “Boston may be famous for bad drivers, but it’s the testing ground for a smarter self-driving car”MIT Technology ReviewArchived from the original on 22 August 2020. Retrieved 27 March 2018.
  96. ^ Prakken, Henry (31 August 2017). “On the problem of making autonomous vehicles conform to traffic law”Artificial Intelligence and Law25 (3): 341–363. doi:10.1007/s10506-017-9210-0.
  97. a b Lieto, Antonio; Lebiere, Christian; Oltramari, Alessandro (May 2018). “The knowledge level in cognitive architectures: Current limitations and possible developments”. Cognitive Systems Research48: 39–55. doi:10.1016/j.cogsys.2017.05.001hdl:2318/1665207S2CID 206868967.
  98. ^ Problem solving, puzzle solving, game playing and deduction: * Russell & Norvig 2003, chpt. 3–9, * Poole, Mackworth & Goebel 1998, chpt. 2,3,7,9, * Luger & Stubblefield 2004, chpt. 3,4,6,8, * Nilsson 1998, chpt. 7–12
  99. ^ Uncertain reasoning: * Russell & Norvig 2003, pp. 452–644, * Poole, Mackworth & Goebel 1998, pp. 345–395, * Luger & Stubblefield 2004, pp. 333–381, * Nilsson 1998, chpt. 19
  100. ^ Psychological evidence of sub-symbolic reasoning: * Wason & Shapiro (1966)showed that people do poorly on completely abstract problems, but if the problem is restated to allow the use of intuitive social intelligence, performance dramatically improves. (See Wason selection task) * Kahneman, Slovic & Tversky (1982) have shown that people are terrible at elementary problems that involve uncertain reasoning. (See list of cognitive biases for several examples). * Lakoff & Núñez (2000) have controversially argued that even our skills at mathematics depend on knowledge and skills that come from “the body”, i.e. sensorimotor and perceptual skills. (See Where Mathematics Comes From)
  101. ^ Knowledge representation: * ACM 1998, I.2.4, * Russell & Norvig 2003, pp. 320–363, * Poole, Mackworth & Goebel 1998, pp. 23–46, 69–81, 169–196, 235–277, 281–298, 319–345, * Luger & Stubblefield 2004, pp. 227–243, * Nilsson 1998, chpt. 18
  102. ^ Knowledge engineering: * Russell & Norvig 2003, pp. 260–266, * Poole, Mackworth & Goebel 1998, pp. 199–233, * Nilsson 1998, chpt. ≈17.1–17.4
  103. ^ Representing categories and relations: Semantic networksdescription logicsinheritance (including frames and scripts): * Russell & Norvig 2003, pp. 349–354, * Poole, Mackworth & Goebel 1998, pp. 174–177, * Luger & Stubblefield 2004, pp. 248–258, * Nilsson 1998, chpt. 18.3
  104. ^ Representing events and time:Situation calculusevent calculusfluent calculus(including solving the frame problem): * Russell & Norvig 2003, pp. 328–341, * Poole, Mackworth & Goebel 1998, pp. 281–298, * Nilsson 1998, chpt. 18.2
  105. ^ Causal calculus: * Poole, Mackworth & Goebel 1998, pp. 335–337
  106. ^ Representing knowledge about knowledge: Belief calculus, modal logics: * Russell & Norvig 2003, pp. 341–344, * Poole, Mackworth & Goebel 1998, pp. 275–277
  107. ^ Sikos, Leslie F. (June 2017). Description Logics in Multimedia Reasoning. Cham: Springer. doi:10.1007/978-3-319-54066-5ISBN 978-3-319-54066-5S2CID 3180114Archived from the original on 29 August 2017.
  108. ^ Ontology: * Russell & Norvig 2003, pp. 320–328
  109. ^ Smoliar, Stephen W.; Zhang, HongJiang (1994). “Content based video indexing and retrieval”. IEEE Multimedia1 (2): 62–72. doi:10.1109/93.311653S2CID 32710913.
  110. ^ Neumann, Bernd; Möller, Ralf (January 2008). “On scene interpretation with description logics”. Image and Vision Computing26 (1): 82–101. doi:10.1016/j.imavis.2007.08.013.
  111. ^ Kuperman, G. J.; Reichley, R. M.; Bailey, T. C. (1 July 2006). “Using Commercial Knowledge Bases for Clinical Decision Support: Opportunities, Hurdles, and Recommendations”Journal of the American Medical Informatics Association13(4): 369–371. doi:10.1197/jamia.M2055PMC 1513681PMID 16622160.
  112. ^ MCGARRY, KEN (1 December 2005). “A survey of interestingness measures for knowledge discovery”. The Knowledge Engineering Review20 (1): 39–61. doi:10.1017/S0269888905000408S2CID 14987656.
  113. ^ Bertini, M; Del Bimbo, A; Torniai, C (2006). “Automatic annotation and semantic retrieval of video sequences using multimedia ontologies”. MM ’06 Proceedings of the 14th ACM international conference on Multimedia. 14th ACM international conference on Multimedia. Santa Barbara: ACM. pp. 679–682.
  114. ^ Qualification problem: * McCarthy & Hayes 1969 * Russell & Norvig 2003[page needed] While McCarthy was primarily concerned with issues in the logical representation of actions, Russell & Norvig 2003 apply the term to the more general issue of default reasoning in the vast network of assumptions underlying all our commonsense knowledge.
  115. ^ Default reasoning and default logic, non-monotonic logics, circumscription, closed world assumption, abduction (Poole et al. places abduction under “default reasoning”. Luger et al. places this under “uncertain reasoning”): * Russell & Norvig 2003, pp. 354–360, * Poole, Mackworth & Goebel 1998, pp. 248–256, 323–335, * Luger & Stubblefield 2004, pp. 335–363, * Nilsson 1998, ~18.3.3
  116. ^ Breadth of commonsense knowledge: * Russell & Norvig 2003, p. 21, * Crevier 1993, pp. 113–114, * Moravec 1988, p. 13, * Lenat & Guha 1989 (Introduction)
  117. ^ Dreyfus & Dreyfus 1986.
  118. ^ Gladwell 2005.
  119. a b Expert knowledge as embodied intuition: * Dreyfus & Dreyfus 1986 (Hubert Dreyfus is a philosopher and critic of AI who was among the first to argue that most useful human knowledge was encoded sub-symbolically. See Dreyfus’ critique of AI) * Gladwell 2005 (Gladwell’s Blink is a popular introduction to sub-symbolic reasoning and knowledge.) * Hawkins & Blakeslee 2005 (Hawkins argues that sub-symbolic knowledge should be the primary focus of AI research.)
  120. ^ Planning: * ACM 1998, ~I.2.8, * Russell & Norvig 2003, pp. 375–459, * Poole, Mackworth & Goebel 1998, pp. 281–316, * Luger & Stubblefield 2004, pp. 314–329, * Nilsson 1998, chpt. 10.1–2, 22
  121. ^ Information value theory: * Russell & Norvig 2003, pp. 600–604
  122. ^ Classical planning: * Russell & Norvig 2003, pp. 375–430, * Poole, Mackworth & Goebel 1998, pp. 281–315, * Luger & Stubblefield 2004, pp. 314–329, * Nilsson 1998, chpt. 10.1–2, 22
  123. ^ Planning and acting in non-deterministic domains: conditional planning, execution monitoring, replanning and continuous planning: * Russell & Norvig 2003, pp. 430–449
  124. ^ Multi-agent planning and emergent behavior: * Russell & Norvig 2003, pp. 449–455
  125. ^ Turing 1950.
  126. ^ Solomonoff 1956.
  127. a b Learning: * ACM 1998, I.2.6, * Russell & Norvig 2003, pp. 649–788, * Poole, Mackworth & Goebel 1998, pp. 397–438, * Luger & Stubblefield 2004, pp. 385–542, * Nilsson 1998, chpt. 3.3, 10.3, 17.5, 20
  128. ^ Jordan, M. I.; Mitchell, T. M. (16 July 2015). “Machine learning: Trends, perspectives, and prospects”. Science. 349 (6245): 255–260. Bibcode:2015Sci...349..255J. doi:10.1126/science.aaa8415. PMID 26185243. S2CID 677218.
  129. ^ Reinforcement learning: * Russell & Norvig 2003, pp. 763–788 * Luger & Stubblefield 2004, pp. 442–449
  130. ^ Natural language processing: * ACM 1998, I.2.7 * Russell & Norvig 2003, pp. 790–831 * Poole, Mackworth & Goebel 1998, pp. 91–104 * Luger & Stubblefield 2004, pp. 591–632
  131. ^ “Versatile question answering systems: seeing in synthesis” Archived 1 February 2016 at the Wayback Machine, Mittal et al., IJIIDS, 5(2), 119–142, 2011
  132. ^ Applications of natural language processing, including information retrieval (i.e. text mining) and machine translation: * Russell & Norvig 2003, pp. 840–857, * Luger & Stubblefield 2004, pp. 623–630
  133. ^ Cambria, Erik; White, Bebo (May 2014). “Jumping NLP Curves: A Review of Natural Language Processing Research [Review Article]”. IEEE Computational Intelligence Magazine. 9 (2): 48–57. doi:10.1109/MCI.2014.2307227. S2CID 206451986.
  134. ^ Vincent, James (7 November 2019). “OpenAI has published the text-generating AI it said was too dangerous to share”. The Verge. Archived from the original on 11 June 2020. Retrieved 11 June 2020.
  135. ^ Machine perception: * Russell & Norvig 2003, pp. 537–581, 863–898 * Nilsson 1998, ~chpt. 6
  136. ^ Speech recognition: * ACM 1998, ~I.2.7 * Russell & Norvig 2003, pp. 568–578
  137. ^ Object recognition: * Russell & Norvig 2003, pp. 885–892
  138. ^ Computer vision: * ACM 1998, I.2.10 * Russell & Norvig 2003, pp. 863–898 * Nilsson 1998, chpt. 6
  139. ^ Robotics: * ACM 1998, I.2.9, * Russell & Norvig 2003, pp. 901–942, * Poole, Mackworth & Goebel 1998, pp. 443–460
  140. ^ Moving and configuration space: * Russell & Norvig 2003, pp. 916–932
  141. ^ Tecuci 2012.
  142. ^ Robotic mapping (localization, etc): * Russell & Norvig 2003, pp. 908–915
  143. ^ Cadena, Cesar; Carlone, Luca; Carrillo, Henry; Latif, Yasir; Scaramuzza, Davide; Neira, Jose; Reid, Ian; Leonard, John J. (December 2016). “Past, Present, and Future of Simultaneous Localization and Mapping: Toward the Robust-Perception Age”. IEEE Transactions on Robotics32 (6): 1309–1332. arXiv:1606.05830Bibcode:2016arXiv160605830Cdoi:10.1109/TRO.2016.2624754S2CID 2596787.
  144. ^ Moravec 1988, p. 15.
  145. ^ Chan, Szu Ping (15 November 2015). “This is what will happen when robots take over the world”. Archived from the original on 24 April 2018. Retrieved 23 April 2018.
  146. ^ “IKEA furniture and the limits of AI”The Economist. 2018. Archived from the original on 24 April 2018. Retrieved 24 April 2018.
  147. ^ “Kismet”. MIT Artificial Intelligence Laboratory, Humanoid Robotics Group. Archived from the original on 17 October 2014. Retrieved 25 October 2014.
  148. ^ Thompson, Derek (2018). “What Jobs Will the Robots Take?”. The Atlantic. Archived from the original on 24 April 2018. Retrieved 24 April 2018.
  149. ^ Scassellati, Brian (2002). “Theory of mind for a humanoid robot”. Autonomous Robots. 12 (1): 13–24. doi:10.1023/A:1013298507114. S2CID 1979315.
  150. ^ Cao, Yongcan; Yu, Wenwu; Ren, Wei; Chen, Guanrong (February 2013). “An Overview of Recent Progress in the Study of Distributed Multi-Agent Coordination”. IEEE Transactions on Industrial Informatics. 9 (1): 427–438. arXiv:1207.3231. doi:10.1109/TII.2012.2219061. S2CID 9588126.
  151. ^ Thro 1993.
  152. ^ Edelson 1991.
  153. ^ Tao & Tan 2005.
  154. ^ Poria, Soujanya; Cambria, Erik; Bajpai, Rajiv; Hussain, Amir (September 2017). “A review of affective computing: From unimodal analysis to multimodal fusion”. Information Fusion. 37: 98–125. doi:10.1016/j.inffus.2017.02.003. hdl:1893/25490.
  155. ^ Emotion and affective computing: * Minsky 2006
  156. ^ Waddell, Kaveh (2018). “Chatbots Have Entered the Uncanny Valley”. The Atlantic. Archived from the original on 24 April 2018. Retrieved 24 April 2018.
  157. ^ Pennachin, C.; Goertzel, B. (2007). “Contemporary Approaches to Artificial General Intelligence”. Artificial General Intelligence. Cognitive Technologies. Berlin, Heidelberg: Springer. doi:10.1007/978-3-540-68677-4_1. ISBN 978-3-540-23733-4.
  158. a b c Roberts, Jacob (2016). “Thinking Machines: The Search for Artificial Intelligence”Distillations. Vol. 2 no. 2. pp. 14–23. Archived from the original on 19 August 2018. Retrieved 20 March 2018.
  159. ^ “The superhero of artificial intelligence: can this genius keep it in check?”the Guardian. 16 February 2016. Archived from the original on 23 April 2018. Retrieved 26 April 2018.
  160. ^ Mnih, Volodymyr; Kavukcuoglu, Koray; Silver, David; Rusu, Andrei A.; Veness, Joel; Bellemare, Marc G.; Graves, Alex; Riedmiller, Martin; Fidjeland, Andreas K.; Ostrovski, Georg; Petersen, Stig; Beattie, Charles; Sadik, Amir; Antonoglou, Ioannis; King, Helen; Kumaran, Dharshan; Wierstra, Daan; Legg, Shane; Hassabis, Demis (26 February 2015). “Human-level control through deep reinforcement learning”. Nature518 (7540): 529–533. Bibcode:2015Natur.518..529Mdoi:10.1038/nature14236PMID 25719670S2CID 205242740.
  161. ^ Sample, Ian (14 March 2017). “Google’s DeepMind makes AI program that can learn like a human”. The Guardian. Archived from the original on 26 April 2018. Retrieved 26 April 2018.
  162. ^ “From not working to neural networking”The Economist. 2016. Archived from the original on 31 December 2016. Retrieved 26 April 2018.
  163. ^ Russell & Norvig 2009, Chapter 27. AI: The Present and Future.
  164. ^ Domingos 2015, Chapter 9. The Pieces of the Puzzle Fall into Place.
  165. a b Artificial brain arguments: AI requires a simulation of the operation of the human brain * Russell & Norvig 2003, p. 957 * Crevier 1993, pp. 271 & 279 A few of the people who make some form of the argument: * Moravec 1988 * Kurzweil 2005, p. 262 * Hawkins & Blakeslee 2005 The most extreme form of this argument (the brain replacement scenario) was put forward by Clark Glymour in the mid-1970s and was touched on by Zenon Pylyshyn and John Searle in 1980.
  166. ^ Goertzel, Ben; Lian, Ruiting; Arel, Itamar; de Garis, Hugo; Chen, Shuo (December 2010). “A world survey of artificial brain projects, Part II: Biologically inspired cognitive architectures”. Neurocomputing74 (1–3): 30–49. doi:10.1016/j.neucom.2010.08.012.
  167. ^ Nilsson 1983, p. 10.
  168. ^ AI’s immediate precursors: * McCorduck 2004, pp. 51–107 * Crevier 1993, pp. 27–32 * Russell & Norvig 2003, pp. 15, 940 * Moravec 1988, p. 3
  169. ^ Haugeland 1985, pp. 112–117
  170. ^ Cognitive simulation, Newell and Simon, AI at CMU (then called Carnegie Tech): * McCorduck 2004, pp. 139–179, 245–250, 322–323 (EPAM) * Crevier 1993, pp. 145–149
  171. ^ Soar (history): * McCorduck 2004, pp. 450–451 * Crevier 1993, pp. 258–263
  172. ^ McCarthy and AI research at SAIL and SRI International: * McCorduck 2004, pp. 251–259 * Crevier 1993
  173. ^ AI research at Edinburgh and in France, birth of Prolog: * Crevier 1993, pp. 193–196 * Howe 1994
  174. ^ AI at MIT under Marvin Minsky in the 1960s : * McCorduck 2004, pp. 259–305 * Crevier 1993, pp. 83–102, 163–176 * Russell & Norvig 2003, p. 19
  175. ^ Cyc: * McCorduck 2004, p. 489, who calls it “a determinedly scruffy enterprise” * Crevier 1993, pp. 239–243 * Russell & Norvig 2003, p. 363−365 * Lenat & Guha 1989
  176. ^ Knowledge revolution: * McCorduck 2004, pp. 266–276, 298–300, 314, 421 * Russell & Norvig 2003, pp. 22–23
  177. ^ Hayes-Roth, Frederick; Murray, William; Adelman, Leonard. “Expert systems”. AccessScience. doi:10.1036/1097-8542.248550.
  178. ^ Embodied approaches to AI: * McCorduck 2004, pp. 454–462 * Brooks 1990 * Moravec 1988
  179. ^ Weng et al. 2001.
  180. ^ Lungarella et al. 2003.
  181. ^ Asada et al. 2009.
  182. ^ Oudeyer 2010.
  183. ^ Revival of connectionism: * Crevier 1993, pp. 214–215 * Russell & Norvig 2003, p. 25
  184. ^ Computational intelligence * IEEE Computational Intelligence Society. Archived 9 May 2008 at the Wayback Machine
  185. ^ Hutson, Matthew (16 February 2018). “Artificial intelligence faces reproducibility crisis”. Science. pp. 725–726. Bibcode:2018Sci...359..725H. doi:10.1126/science.359.6377.725. Archived from the original on 29 April 2018. Retrieved 28 April 2018.
  186. ^ Norvig 2012.
  187. ^ Langley 2011.
  188. ^ Katz 2012.
  189. ^ The intelligent agent paradigm: * Russell & Norvig 2003, pp. 27, 32–58, 968–972 * Poole, Mackworth & Goebel 1998, pp. 7–21 * Luger & Stubblefield 2004, pp. 235–240 * Hutter 2005, pp. 125–126 The definition used in this article, in terms of goals, actions, perception and environment, is due to Russell & Norvig (2003). Other definitions also include knowledge and learning as additional criteria.
  190. ^ Agent architectures, hybrid intelligent systems: * Russell & Norvig (2003, pp. 27, 932, 970–972) * Nilsson (1998, chpt. 25)
  191. ^ Hierarchical control system: * Albus 2002
  192. ^ Lieto, Antonio; Bhatt, Mehul; Oltramari, Alessandro; Vernon, David (May 2018). “The role of cognitive architectures in general artificial intelligence”. Cognitive Systems Research. 48: 1–3. doi:10.1016/j.cogsys.2017.08.003. hdl:2318/1665249. S2CID 36189683.
  193. a b Russell & Norvig 2009, p. 1.
  194. a b White Paper: On Artificial Intelligence – A European approach to excellence and trust (PDF). Brussels: European Commission. 2020. p. 1. Archived (PDF) from the original on 20 February 2020. Retrieved 20 February 2020.
  195. ^ “AI set to exceed human brain power”CNN. 9 August 2006. Archived from the original on 19 February 2008.
  196. ^ Using AI to predict flight delays Archived 20 November 2018 at the Wayback Machine, Ishti.org.
  197. ^ N. Aletras; D. Tsarapatsanis; D. Preotiuc-Pietro; V. Lampos (2016). “Predicting judicial decisions of the European Court of Human Rights: a Natural Language Processing perspective”. PeerJ Computer Science. 2: e93. doi:10.7717/peerj-cs.93.
  198. ^ “The Economist Explains: Why firms are piling into artificial intelligence”The Economist. 31 March 2016. Archived from the original on 8 May 2016. Retrieved 19 May 2016.
  199. ^ Lohr, Steve (28 February 2016). “The Promise of Artificial Intelligence Unfolds in Small Steps”The New York TimesArchived from the original on 29 February 2016. Retrieved 29 February 2016.
  200. ^ Frangoul, Anmar (14 June 2019). “A Californian business is using A.I. to change the way we think about energy storage”CNBCArchived from the original on 25 July 2020. Retrieved 5 November 2019.
  201. ^ Wakefield, Jane (15 June 2016). “Social media ‘outstrips TV’ as news source for young people”BBC NewsArchived from the original on 24 June 2016.
  202. ^ Smith, Mark (22 July 2016). “So you think you chose to read this article?”BBC NewsArchived from the original on 25 July 2016.
  203. ^ Brown, Eileen. “Half of Americans do not believe deepfake news could target them online”. ZDNet. Archived from the original on 6 November 2019. Retrieved 3 December 2019.
  204. ^ Zola, Andrew (12 April 2019). “Interview Prep: 40 Artificial Intelligence Questions”Springboard Blog.
  205. ^ The Turing test:
    Turing’s original publication: * Turing 1950 Historical influence and philosophical implications: * Haugeland 1985, pp. 6–9 * Crevier 1993, p. 24 * McCorduck 2004, pp. 70–71 * Russell & Norvig 2003, pp. 2–3 and 948
  206. ^ Dartmouth proposal: * McCarthy et al. 1955 (the original proposal) * Crevier 1993, p. 49 (historical significance)
  207. ^ The physical symbol systems hypothesis: * Newell & Simon 1976, p. 116 * McCorduck 2004, p. 153 * Russell & Norvig 2003, p. 18
  208. ^ Dreyfus 1992, p. 156.
  209. ^ Dreyfus’ critique of artificial intelligence: * Dreyfus 1972Dreyfus & Dreyfus 1986 * Crevier 1993, pp. 120–132 * McCorduck 2004, pp. 211–239 * Russell & Norvig 2003, pp. 950–952,
  210. ^ Gödel 1951: in this lecture, Kurt Gödel uses the incompleteness theorem to arrive at the following disjunction: (a) the human mind is not a consistent finite machine, or (b) there exist Diophantine equations for which it cannot decide whether solutions exist. Gödel finds (b) implausible, and thus seems to have believed the human mind was not equivalent to a finite machine, i.e., its power exceeded that of any finite machine. He recognized that this was only a conjecture, since one could never disprove (b). Yet he considered the disjunctive conclusion to be a “certain fact”.
  211. ^ The Mathematical Objection: * Russell & Norvig 2003, p. 949 * McCorduck 2004, pp. 448–449 Making the Mathematical Objection: * Lucas 1961 * Penrose 1989 Refuting the Mathematical Objection: * Turing 1950 under “(2) The Mathematical Objection” * Hofstadter 1979 Background: * Gödel 1931, Church 1936, Kleene 1935, Turing 1937
  212. ^ Graham Oppy (20 January 2015). “Gödel’s Incompleteness Theorems”. Stanford Encyclopedia of Philosophy. Archived from the original on 22 April 2016. Retrieved 27 April 2016. These Gödelian anti-mechanist arguments are, however, problematic, and there is wide consensus that they fail.
  213. ^ Stuart J. Russell; Peter Norvig (2010). “26.1.2: Philosophical Foundations/Weak AI: Can Machines Act Intelligently?/The mathematical objection”. Artificial Intelligence: A Modern Approach (3rd ed.). Upper Saddle River, NJ: Prentice Hall. ISBN 978-0-13-604259-4. “even if we grant that computers have limitations on what they can prove, there is no evidence that humans are immune from those limitations.”
  214. ^ Mark Colyvan. An introduction to the philosophy of mathematics. Cambridge University Press, 2012. From 2.2.2, ‘Philosophical significance of Gödel’s incompleteness results’: “The accepted wisdom (with which I concur) is that the Lucas-Penrose arguments fail.”
  215. ^ Iphofen, Ron; Kritikos, Mihalis (3 January 2019). “Regulating artificial intelligence and robotics: ethics by design in a digital society”. Contemporary Social Science: 1–15. doi:10.1080/21582041.2018.1563803. ISSN 2158-2041.
  216. ^ “Ethical AI Learns Human Rights Framework”. Voice of America. Archived from the original on 11 November 2019. Retrieved 10 November 2019.
  217. ^ Crevier 1993, pp. 132–144.
  218. ^ Joseph Weizenbaum‘s critique of AI: * Weizenbaum 1976 * Crevier 1993, pp. 132–144 * McCorduck 2004, pp. 356–373 * Russell & Norvig 2003, p. 961 Weizenbaum (the AI researcher who developed the first chatterbot program, ELIZA) argued in 1976 that the misuse of artificial intelligence has the potential to devalue human life.
  219. ^ Wallach, Wendell (2010). Moral Machines. Oxford University Press.
  220. ^ Wallach 2010, pp. 37–54.
  221. ^ Wallach 2010, pp. 55–73.
  222. ^ Wallach 2010, “Introduction”.
  223. a b Michael Anderson and Susan Leigh Anderson (2011), Machine Ethics, Cambridge University Press.
  224. a b “Machine Ethics”. aaai.org. Archived from the original on 29 November 2014.
  225. ^ Rubin, Charles (Spring 2003). “Artificial Intelligence and Human Nature”. The New Atlantis. 1: 88–100. Archived from the original on 11 June 2012.
  226. ^ Brooks, Rodney (10 November 2014). “artificial intelligence is a tool, not a threat”. Archived from the original on 12 November 2014.
  227. ^ “Stephen Hawking, Elon Musk, and Bill Gates Warn About Artificial Intelligence”. Observer. 19 August 2015. Archived from the original on 30 October 2015. Retrieved 30 October 2015.
  228. ^ Chalmers, David (1995). “Facing up to the problem of consciousness”. Journal of Consciousness Studies. 2 (3): 200–219. Archived from the original on 8 March 2005. Retrieved 11 October 2018. See also this link Archived 8 April 2011 at the Wayback Machine
  229. ^ Horst, Steven, (2005) “The Computational Theory of Mind” Archived 11 September 2018 at the Wayback Machine in The Stanford Encyclopedia of Philosophy
  230. ^ Searle 1980, p. 1.
  231. ^ Searle’s Chinese room argument: * Searle 1980. Searle’s original presentation of the thought experiment. * Searle 1999. Discussion: * Russell & Norvig 2003, pp. 958–960 * McCorduck 2004, pp. 443–445 * Crevier 1993, pp. 269–271
  232. ^ Robot rights: * Russell & Norvig 2003, p. 964 Prematurity of: * Henderson 2007 In fiction: * McCorduck (2004, pp. 190–25) discusses Frankenstein and identifies the key ethical issues as scientific hubris and the suffering of the monster, i.e. robot rights.
  233. ^ “Robots could demand legal rights”. BBC News. 21 December 2006. Archived from the original on 15 October 2019. Retrieved 3 February 2011.
  234. ^ Evans, Woody (2015). “Posthuman Rights: Dimensions of Transhuman Worlds”. Teknokultura. 12 (2). doi:10.5209/rev_TK.2015.v12.n2.49072.
  235. ^ maschafilm. “Content: Plug & Pray Film – Artificial Intelligence – Robots –”. plugandpray-film.de. Archived from the original on 12 February 2016.
  236. ^ Omohundro, Steve (2008). The Nature of Self-Improving Artificial Intelligence. presented and distributed at the 2007 Singularity Summit, San Francisco, CA.
  237. a b c Technological singularity: * Vinge 1993 * Kurzweil 2005 * Russell & Norvig 2003, p. 963
  238. ^ Transhumanism: * Moravec 1988 * Kurzweil 2005 * Russell & Norvig 2003, p. 963
  239. ^ AI as evolution: * Edward Fredkin is quoted in McCorduck (2004, p. 401). * Butler 1863 * Dyson 1998
  240. ^ “Robots and Artificial Intelligence”www.igmchicago.orgArchived from the original on 1 May 2019. Retrieved 3 July 2019.
  241. ^ “Sizing the prize: PwC’s Global AI Study—Exploiting the AI Revolution” (PDF). Retrieved 11 November 2020.
  242. ^ E McGaughey, ‘Will Robots Automate Your Job Away? Full Employment, Basic Income, and Economic Democracy’ (2018) SSRN, part 2(3) Archived 24 May 2018 at the Wayback Machine
  243. ^ “Automation and anxiety”The Economist. 9 May 2015. Archived from the original on 12 January 2018. Retrieved 13 January 2018.
  244. ^ Lohr, Steve (2017). “Robots Will Take Jobs, but Not as Fast as Some Fear, New Report Says”The New York TimesArchived from the original on 14 January 2018. Retrieved 13 January 2018.
  245. ^ Frey, Carl Benedikt; Osborne, Michael A (1 January 2017). “The future of employment: How susceptible are jobs to computerisation?”. Technological Forecasting and Social Change. 114: 254–280. CiteSeerX 10.1.1.395.416. doi:10.1016/j.techfore.2016.08.019. ISSN 0040-1625.
  246. ^ Arntz, Melanie, Terry Gregory, and Ulrich Zierahn. “The risk of automation for jobs in OECD countries: A comparative analysis.” OECD Social, Employment, and Migration Working Papers 189 (2016). p. 33.
  247. ^ Mahdawi, Arwa (26 June 2017). “What jobs will still be around in 20 years? Read this to prepare your future”The GuardianArchived from the original on 14 January 2018. Retrieved 13 January 2018.
  248. ^ Simon, Matt (1 April 2019). “Andrew Yang’s Presidential Bid Is So Very 21st Century”. Wired. Archived from the original on 24 June 2019. Retrieved 2 May 2019 – via www.wired.com.
  249. ^ “Five experts share what scares them the most about AI”. 5 September 2018. Archived from the original on 8 December 2019. Retrieved 8 December 2019.
  250. ^ Russell, Stuart, Daniel Dewey, and Max Tegmark. Research Priorities for Robust and Beneficial Artificial Intelligence. AI Magazine 36:4 (2015). 8 December 2016.
  251. ^ “Commentary: Bad news. Artificial intelligence is biased”CNA. 12 January 2019. Archived from the original on 12 January 2019. Retrieved 19 June 2020.
  252. ^ Jeff Larson, Julia Angwin (23 May 2016). “How We Analyzed the COMPAS Recidivism Algorithm”. ProPublica. Archived from the original on 29 April 2019. Retrieved 19 June 2020.
  253. ^ Rawlinson, Kevin (29 January 2015). “Microsoft’s Bill Gates insists AI is a threat”. BBC News. Archived from the original on 29 January 2015. Retrieved 30 January 2015.
  254. ^ Holley, Peter (28 January 2015). “Bill Gates on dangers of artificial intelligence: ‘I don’t understand why some people are not concerned’”. The Washington Post. ISSN 0190-8286. Archived from the original on 30 October 2015. Retrieved 30 October 2015.
  255. ^ Gibbs, Samuel (27 October 2014). “Elon Musk: artificial intelligence is our biggest existential threat”. The Guardian. Archived from the original on 30 October 2015. Retrieved 30 October 2015.
  256. ^ Churm, Philip Andrew (14 May 2019). “Yuval Noah Harari talks politics, technology and migration”euronews. Retrieved 15 November 2020.
  257. ^ Cellan-Jones, Rory (2 December 2014). “Stephen Hawking warns artificial intelligence could end mankind”. BBC News. Archived from the original on 30 October 2015. Retrieved 30 October 2015.
  258. ^ Bostrom, Nick (2015). “What happens when our computers get smarter than we are?”. TED (conference). Archived from the original on 25 July 2020. Retrieved 30 January 2020.
  259. a b Russell, Stuart (8 October 2019). Human Compatible: Artificial Intelligence and the Problem of Control. United States: Viking. ISBN 978-0-525-55861-3. OCLC 1083694322.
  260. ^ Post, Washington. “Tech titans like Elon Musk are spending $1 billion to save you from terminators”. Archived from the original on 7 June 2016.
  261. ^ Müller, Vincent C.; Bostrom, Nick (2014). “Future Progress in Artificial Intelligence: A Poll Among Experts” (PDF). AI Matters. 1 (1): 9–11. doi:10.1145/2639475.2639478. S2CID 8510016. Archived (PDF) from the original on 15 January 2016.
  262. ^ “Oracle CEO Mark Hurd sees no reason to fear ERP AI”. SearchERP. Archived from the original on 6 May 2019. Retrieved 6 May 2019.
  263. ^ “Mark Zuckerberg responds to Elon Musk’s paranoia about AI: ‘AI is going to… help keep our communities safe.’”. Business Insider. 25 May 2018. Archived from the original on 6 May 2019. Retrieved 6 May 2019.
  264. ^ “The mysterious artificial intelligence company Elon Musk invested in is developing game-changing smart computers”. Tech Insider. Archived from the original on 30 October 2015. Retrieved 30 October 2015.
  265. ^ Clark 2015a.
  266. ^ “Elon Musk Is Donating $10M Of His Own Money To Artificial Intelligence Research”. Fast Company. 15 January 2015. Archived from the original on 30 October 2015. Retrieved 30 October 2015.
  267. ^ “Is artificial intelligence really an existential threat to humanity?”. Bulletin of the Atomic Scientists. 9 August 2015. Archived from the original on 30 October 2015. Retrieved 30 October 2015.
  268. ^ “The case against killer robots, from a guy actually working on artificial intelligence”. Fusion.net. Archived from the original on 4 February 2016. Retrieved 31 January 2016.
  269. ^ “Will artificial intelligence destroy humanity? Here are 5 reasons not to worry”. Vox. 22 August 2014. Archived from the original on 30 October 2015. Retrieved 30 October 2015.
  270. ^ Berryhill, Jamie; Heang, Kévin Kok; Clogher, Rob; McBride, Keegan (2019). Hello, World: Artificial Intelligence and its Use in the Public Sector (PDF). Paris: OECD Observatory of Public Sector Innovation. Archived (PDF) from the original on 20 December 2019. Retrieved 9 August 2020.
  271. ^ Barfield, Woodrow; Pagallo, Ugo (2018). Research handbook on the law of artificial intelligence. Cheltenham, UK. ISBN 978-1-78643-904-8. OCLC 1039480085.
  272. ^ Law Library of Congress (U.S.). Global Legal Research Directorate, issuing body. Regulation of artificial intelligence in selected jurisdictions. LCCN 2019668143. OCLC 1110727808.
  273. ^ Wirtz, Bernd W.; Weyerer, Jan C.; Geyer, Carolin (24 July 2018). “Artificial Intelligence and the Public Sector—Applications and Challenges”. International Journal of Public Administration. 42 (7): 596–615. doi:10.1080/01900692.2018.1498103. ISSN 0190-0692. S2CID 158829602. Archived from the original on 18 August 2020. Retrieved 22 August 2020.
  274. ^ Buiten, Miriam C (2019). “Towards Intelligent Regulation of Artificial Intelligence”. European Journal of Risk Regulation. 10 (1): 41–59. doi:10.1017/err.2019.8. ISSN 1867-299X.
  275. ^ Sotala, Kaj; Yampolskiy, Roman V (19 December 2014). “Responses to catastrophic AGI risk: a survey”. Physica Scripta. 90 (1): 018001. doi:10.1088/0031-8949/90/1/018001. ISSN 0031-8949.
  276. ^ Buttazzo, G. (July 2001). “Artificial consciousness: Utopia or real possibility?”. Computer. 34 (7): 24–30. doi:10.1109/2.933500.
  277. ^ Anderson, Susan Leigh. “Asimov’s “three laws of robotics” and machine metaethics.” AI & Society 22.4 (2008): 477–493.
  278. ^ McCauley, Lee (2007). “AI armageddon and the three laws of robotics”. Ethics and Information Technology. 9 (2): 153–164. CiteSeerX 10.1.1.85.8904. doi:10.1007/s10676-007-9138-2. S2CID 37272949.
  279. ^ Galvan, Jill (1 January 1997). “Entering the Posthuman Collective in Philip K. Dick’s “Do Androids Dream of Electric Sheep?””. Science Fiction Studies. 24 (3): 413–429. JSTOR 4240644.

Further reading

  • DH Autor, ‘Why Are There Still So Many Jobs? The History and Future of Workplace Automation’ (2015) 29(3) Journal of Economic Perspectives 3.
  • Boden, Margaret, Mind As Machine, Oxford University Press, 2006.
  • Cukier, Kenneth, “Ready for Robots? How to Think about the Future of AI”, Foreign Affairs, vol. 98, no. 4 (July/August 2019), pp. 192–98. George Dyson, historian of computing, writes (in what might be called “Dyson’s Law”) that “Any system simple enough to be understandable will not be complicated enough to behave intelligently, while any system complicated enough to behave intelligently will be too complicated to understand.” (p. 197.) Computer scientist Alex Pentland writes: “Current AI machine-learning algorithms are, at their core, dead simple stupid. They work, but they work by brute force.” (p. 198.)
  • Domingos, Pedro, “Our Digital Doubles: AI will serve our species, not control it”, Scientific American, vol. 319, no. 3 (September 2018), pp. 88–93.
  • Gopnik, Alison, “Making AI More Human: Artificial intelligence has staged a revival by starting to incorporate what we know about how children learn”, Scientific American, vol. 316, no. 6 (June 2017), pp. 60–65.
  • Johnston, John (2008) The Allure of Machinic Life: Cybernetics, Artificial Life, and the New AI, MIT Press.
  • Koch, Christof, “Proust among the Machines”, Scientific American, vol. 321, no. 6 (December 2019), pp. 46–49. Christof Koch doubts the possibility of “intelligent” machines attaining consciousness, because “[e]ven the most sophisticated brain simulations are unlikely to produce conscious feelings.” (p. 48.) According to Koch, “Whether machines can become sentient [is important] for ethical reasons. If computers experience life through their own senses, they cease to be purely a means to an end determined by their usefulness to… humans. Per GNW [the Global Neuronal Workspacetheory], they turn from mere objects into subjects… with a point of view…. Once computers’ cognitive abilities rival those of humanity, their impulse to push for legal and political rights will become irresistible – the right not to be deleted, not to have their memories wiped clean, not to suffer pain and degradation. The alternative, embodied by IIT [Integrated Information Theory], is that computers will remain only supersophisticated machinery, ghostlike empty shells, devoid of what we value most: the feeling of life itself.” (p. 49.)
  • Marcus, Gary, “Am I Human?: Researchers need new ways to distinguish artificial intelligence from the natural kind”, Scientific American, vol. 316, no. 3 (March 2017), pp. 58–63. A stumbling block to AI has been an incapacity for reliable disambiguation. An example is the “pronoun disambiguation problem”: a machine has no way of determining to whom or what a pronoun in a sentence refers. (p. 61.)
  • E McGaughey, ‘Will Robots Automate Your Job Away? Full Employment, Basic Income, and Economic Democracy’ (2018) SSRN, part 2(3) Archived 24 May 2018 at the Wayback Machine.
  • George Musser, “Artificial Imagination: How machines could learn creativity and common sense, among other human qualities”, Scientific American, vol. 320, no. 5 (May 2019), pp. 58–63.
  • Myers, Courtney Boyd, ed. (2009). “The AI Report”. Archived 29 July 2017 at the Wayback Machine. Forbes, June 2009.
  • Raphael, Bertram (1976). The Thinking Computer. W.H.Freeman and Company. ISBN 978-0-7167-0723-3Archived from the original on 26 July 2020. Retrieved 22 August 2020.
  • Scharre, Paul, “Killer Apps: The Real Dangers of an AI Arms Race”, Foreign Affairs, vol. 98, no. 3 (May/June 2019), pp. 135–44. “Today’s AI technologies are powerful but unreliable. Rules-based systems cannot deal with circumstances their programmers did not anticipate. Learning systems are limited by the data on which they were trained. AI failures have already led to tragedy. Advanced autopilot features in cars, although they perform well in some circumstances, have driven cars without warning into trucks, concrete barriers, and parked cars. In the wrong situation, AI systems go from supersmart to superdumb in an instant. When an enemy is trying to manipulate and hack an AI system, the risks are even greater.” (p. 140.)
  • Serenko, Alexander (2010). “The development of an AI journal ranking based on the revealed preference approach” (PDF). Journal of Informetrics. 4 (4): 447–459. doi:10.1016/j.joi.2010.04.001. Archived (PDF) from the original on 4 October 2013. Retrieved 24 August 2013.
  • Serenko, Alexander; Michael Dohan (2011). “Comparing the expert survey and citation impact journal ranking methods: Example from the field of Artificial Intelligence” (PDF). Journal of Informetrics. 5 (4): 629–649. doi:10.1016/j.joi.2011.06.002. Archived (PDF) from the original on 4 October 2013. Retrieved 12 September 2013.
  • Tom Simonite (29 December 2014). “2014 in Computing: Breakthroughs in Artificial Intelligence”MIT Technology Review.
  • Sun, R. & Bookman, L. (eds.), Computational Architectures: Integrating Neural and Symbolic Processes. Kluwer Academic Publishers, Needham, MA. 1994.
  • Taylor, Paul, “Insanely Complicated, Hopelessly Inadequate” (review of Brian Cantwell SmithThe Promise of Artificial Intelligence: Reckoning and Judgment, MIT, October 2019, ISBN 978 0 262 04304 5, 157 pp.; Gary Marcus and Ernest Davis, Rebooting AI: Building Artificial Intelligence We Can Trust, Ballantine, September 2019, ISBN 978 1 5247 4825 8, 304 pp.; Judea Pearl and Dana Mackenzie, The Book of Why: The New Science of Cause and Effect, Penguin, May 2019, ISBN 978 0 14 198241 0, 418 pp.), London Review of Books, vol. 43, no. 2 (21 January 2021), pp. 37–39. Paul Taylor writes (p. 39): “Perhaps there is a limit to what a computer can do without knowing that it is manipulating imperfect representations of an external reality.”
  • Tooze, Adam, “Democracy and Its Discontents”, The New York Review of Books, vol. LXVI, no. 10 (6 June 2019), pp. 52–53, 56–57. “Democracy has no clear answer for the mindless operation of bureaucratic and technological power. We may indeed be witnessing its extension in the form of artificial intelligence and robotics. Likewise, after decades of dire warning, the environmental problem remains fundamentally unaddressed…. Bureaucratic overreach and environmental catastrophe are precisely the kinds of slow-moving existential challenges that democracies deal with very badly…. Finally, there is the threat du jour: corporations and the technologies they promote.” (pp. 56–57.)

Categories

” (WP)

Sources:

Fair Use Sources:

Categories
C Language C# .NET C++ Cloud Data Science - Big Data DevOps Django Web Framework Flask Web Framework Go Programming Language Java JavaScript Kotlin PowerShell Python Ruby Software Engineering Spring Framework Swift TypeScript

Integrated Development Environment (IDE)

“An integrated development environment (IDE) is a software application that provides comprehensive facilities to computer programmers for software development. An IDE normally consists of at least a source code editor, build automation tools, and a debugger. Some IDEs, such as Visual Studio, NetBeans and Eclipse, contain the necessary compiler, interpreter, or both; others, such as SharpDevelop and Lazarus, do not.” (WP)

“The boundary between an IDE and other parts of the broader software development environment is not well-defined; sometimes a version control system or various tools to simplify the construction of a graphical user interface (GUI) are integrated. Many modern IDEs also have a class browser, an object browser, and a class hierarchy diagram for use in object-oriented software development.” (WP)
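
To make the “build automation” role concrete, the sketch below shows the kind of task an IDE typically wires to a single button or menu item: running a project’s test suite and reporting the result. This is an illustrative example only; the use of Python’s unittest discovery, the current-directory project layout, and the run_tests helper are assumptions made for the sketch, not details taken from the description above.

```python
# Minimal sketch of a "build automation" step of the kind an IDE integrates:
# discover and run a project's unit tests, then report success or failure.
# The project directory and the use of unittest discovery are illustrative assumptions.
import subprocess
import sys

def run_tests(project_dir: str = ".") -> bool:
    """Run unittest discovery in project_dir; return True if all tests pass."""
    result = subprocess.run(
        [sys.executable, "-m", "unittest", "discover", "-s", project_dir],
        capture_output=True,
        text=True,
    )
    # unittest writes its summary to stderr; print whatever output was produced.
    print(result.stdout or result.stderr)
    return result.returncode == 0

if __name__ == "__main__":
    sys.exit(0 if run_tests() else 1)
```

An IDE adds value by binding steps like this one, together with editing and debugging, to a single interface so the programmer does not have to switch tools between them.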

Categories
Bibliography

Techopedia.com

Fair Use Source: techopedia.com (TcPd)

Categories
Artificial Intelligence AWS Azure Bibliography Cloud Data Science - Big Data DevOps Hardware and Electronics History Networking Operating Systems Software Engineering

TTG – TechTarget Glossaries from WhatIs.com

Fair Use Source: https://whatis.techtarget.com/glossaries

See 809137 TTG-DvOp and 629581 TTG-CC

(TTG) – TechTarget Glossaries from WhatIs.com

Categories
Artificial Intelligence Bibliography Cloud Data Science - Big Data DevOps Hardware and Electronics History Networking Software Engineering

Oxford Dictionary of Computer Science

Fair Use Source: B019GXM8X8 (ODCS)

A Dictionary of Computer Science (Oxford Quick Reference) 7th Edition, by Editors Andrew Butterfield, Gerard Ngondi, Anne Kerr

Previously named A Dictionary of Computing, this bestselling dictionary has been renamed A Dictionary of Computer Science, and fully revised by a team of computer specialists, making it the most up-to-date and authoritative guide to computing available. Containing over 6,500 entries and with expanded coverage of multimedia, computer applications, networking, and personal computer science, it is a comprehensive reference work encompassing all aspects of the subject and is as valuable for home and office users as it is indispensable for students of computer science.

Terms are defined in a jargon-free and concise manner with helpful examples where relevant. The dictionary contains approximately 150 new entries including cloud computing, cross-site scripting, iPad, semantic attack, smartphone, and virtual learning environment. Recommended web links for many entries, accessible via the Dictionary of Computer Science companion website, provide valuable further information and the appendices include useful resources such as generic domain names, file extensions, and the Greek alphabet.

This dictionary is suitable for anyone who uses computers, and is ideal for students of computer science and the related fields of IT, maths, physics, media communications, electronic engineering, and natural sciences.

Book Details

  • ASIN : B019GXM8X8
  • Publisher : OUP Oxford; 7th edition (January 28, 2016)
  • Publication date : January 28, 2016
  • Print length : 641 pages
  • First edition 1983, Second edition 1986, Third edition 1990, Fourth edition 1996, Fifth edition 2004, Sixth edition 2008, Seventh edition 2016
  • ISBN 978-0-19-968897-5, ebook ISBN 978-0-19-100288-5

Preface

“The first edition of this dictionary was published in 1983 as a specialist reference work for computer professionals and for people interested in the underlying concepts and theories of computer science. Over successive editions, the work has been expanded and changed to reflect the technological and social changes that have occurred, especially the enormous growth in home computing and the Internet. In particular, the fourth edition (1996) included an additional 1700 entries catering for a wider readership. At the same time, the editors have retained the basic principles of the original book.”

“In the seventh edition of the dictionary we have followed the same line. The existing entries have been updated and over 120 new entries have been added. In particular, coverage of areas such as database management and social networking has been increased to reflect the growing importance of these areas. Some obsolete terms have been deleted, although some have been kept for their historical interest. Links to useful websites have been updated and more added. There are also six special feature spreads, giving information on selected topics.”

JL, ASK, 2015

Guide to the Dictionary

“Synonyms and generally used abbreviations are given either in brackets immediately after the relevant entry title, or occasionally in the text of the entry with some additional information or qualification.”

“A distinction is made between an acronym and an abbreviation: an acronym can be pronounced while an abbreviation cannot. The entry for an acronym usually appears at the acronym itself, whereas the entry for an abbreviation may appear either at the unabbreviated form or at the abbreviation—depending on which form is most commonly used. When a term is defined under an abbreviation, the entry for the unabbreviated form simply cross-refers the reader to the abbreviation.”

“Some terms listed in the dictionary are used both as nouns and verbs. This is usually indicated in the text of an entry if both forms are in common use. In many cases a noun is also used in an adjectival form to qualify another noun. This occurs too often to be noted.”

Fair Use Source: B019GXM8X8 (ODCS)

Categories
DevOps History Operating Systems Software Engineering

Software Development

Return to Timeline of the History of Computers, Networking

Software development is the process of conceiving, specifying, designing, programming, documenting, testing, and bug fixing involved in creating and maintaining applications, frameworks, or other software components. Software development is a process of writing and maintaining the source code, but in a broader sense, it includes all that is involved between the conception of the desired software through to the final manifestation of the software, sometimes in a planned and structured process.[1] Therefore, software development may include research, new development, prototyping, modification, reuse, re-engineering, maintenance, or any other activities that result in software products.[2]

The software can be developed for a variety of purposes, the three most common being to meet specific needs of a specific client/business (the case with custom software), to meet a perceived need of some set of potential users (the case with commercial and open source software), or for personal use (e.g. a scientist may write software to automate a mundane task). Embedded software development, that is, the development of embedded software, such as used for controlling consumer products, requires the development process to be integrated with the development of the controlled physical product. System software underlies applications and the programming process itself, and is often developed separately.

The need for better quality control of the software development process has given rise to the discipline of software engineering, which aims to apply the systematic approach exemplified in the engineering paradigm to the process of software development.

There are many approaches to software project management, known as software development life cycle models, methodologies, processes, or models. The waterfall model is a traditional version, contrasted with the more recent innovation of agile software development.

Fair Use Sources:

Categories
Artificial Intelligence Bibliography Cloud Data Science - Big Data Hardware and Electronics History Networking Operating Systems Software Engineering

The Computer Book: From the Abacus to Artificial Intelligence

Fair Use Source: B07C2NQSPV (TCB)

The Computer Book: From the Abacus to Artificial Intelligence, 250 Milestones in the History of Computer Science by Simson L Garfinkel

Publication Date : January 15, 2019
Publisher : Sterling; Illustrated Edition (January 15, 2019)
Print Length : 742 pages
ASIN : B07C2NQSPV

THE COMPUTER BOOK – FROM THE ABACUS TO ARTIFICIAL INTELLIGENCE, 250 MILESTONES IN THE HISTORY OF COMPUTER SCIENCE

Simson L. Garfinkel and Rachel H. Grunspan

STERLING and the distinctive Sterling logo are registered trademarks of Sterling Publishing Co., Inc.

Text © 2018 Techzpah LLC

ISBN 978-1-4549-2622-1

Contents

Introduction

Acknowledgments

Notes and Further Reading

Photo Credits

Introduction

“The evolution of the computer likely began with the human desire to comprehend and manipulate the environment. The earliest humans recognized the phenomenon of quantity and used their fingers to count and act upon material items in their world. Simple methods such as these eventually gave way to the creation of proxy devices such as the abacus, which enabled action on higher quantities of items, and wax tablets, on which pressed symbols enabled information storage. Continued progress depended on harnessing and controlling the power of the natural world—steam, electricity, light, and finally the amazing potential of the quantum world. Over time, our new devices increased our ability to save and find what we now call data, to communicate over distances, and to create information products assembled from countless billions of elements, all transformed into a uniform digital format.

These functions are the essence of computation: the ability to augment and amplify what we can do with our minds, extending our impact to levels of superhuman reach and capacity.

These superhuman capabilities that most of us now take for granted were a long time coming, and it is only in recent years that access to them has been democratized and scaled globally. A hundred years ago, the instantaneous communication afforded by telegraph and long-distance telephony was available only to governments, large corporations, and wealthy individuals. Today, the ability to send international, instantaneous messages such as email is essentially free to the majority of the world’s population.

In this book, we recount a series of connected stories of how this change happened, selecting what we see as the seminal events in the history of computing. The development of computing is in large part the story of technology, both because no invention happens in isolation, and because technology and computing are inextricably linked; fundamental technologies have allowed people to create complex computing devices, which in turn have driven the creation of increasingly sophisticated technologies.

The same sort of feedback loop has accelerated other related areas, such as the mathematics of cryptography and the development of high-speed communications systems. For example, the development of public key cryptography in the 1970s provided the mathematical basis for sending credit card numbers securely over the internet in the 1990s. This incentivized many companies to invest money to build websites and e-commerce systems, which in turn provided the financial capital for laying high-speed fiber optic networks and researching the technology necessary to build increasingly faster microprocessors.

In this collection of essays, we see the history of computing as a series of overlapping technology waves, including:

Human computation. More than people who were simply facile at math, the earliest “computers” were humans who performed repeated calculations for days, weeks, or months at a time. The first human computers successfully plotted the trajectory of Halley’s Comet. After this demonstration, teams were put to work producing tables for navigation and the computation of logarithms, with the goal of improving the accuracy of warships and artillery.

Mechanical calculation. Starting in the 17th century with the invention of the slide rule, computation was increasingly realized with the help of mechanical aids. This era is characterized by mechanisms such as Oughtred’s slide rule and mechanical adding machines such as Charles Babbage’s difference engine and the arithmometer.

Connected with mechanical computation is mechanical data storage. In the 18th century, engineers working on a variety of different systems hit upon the idea of using holes in cards and tape to represent repeating patterns of information that could be stored and automatically acted upon. The Jacquard loom used holes on stiff cards to enable automated looms to weave complex, repeating patterns. Herman Hollerith managed the scale and complexity of processing population information for the 1890 US Census on smaller punch cards, and Émile Baudot created a device that let human operators punch holes in a roll of paper to represent characters as a way of making more efficient use of long-distance telegraph lines. Boole’s algebra lets us interpret these representations of information (holes and spaces) as binary—1s and 0s—fundamentally altering how information is processed and stored.

With the capture and control of electricity came electric communication and computation. Charles Wheatstone in England and Samuel Morse in the US both built systems that could send digital information down a wire for many miles. By the end of the 19th century, engineers had joined together millions of miles of wires with relays, switches, and sounders, as well as the newly invented speakers and microphones, to create vast international telegraph and telephone communications networks. In the 1930s, scientists in England, Germany, and the US realized that the same electrical relays that powered the telegraph and telephone networks could also be used to calculate mathematical quantities. Meanwhile, magnetic recording technology was developed for storing and playing back sound—technology that would soon be repurposed for storing additional types of information.

Electronic computation. In 1906, scientists discovered that a beam of electrons traveling through a vacuum could be switched by applying a slight voltage to a metal mesh, and the vacuum tube was born. In the 1940s, scientists tried using tubes in their calculators and discovered that they ran a thousand times faster than relays. Replacing relays with tubes allowed the creation of computers that were a thousand times faster than the previous generation.

Solid state computing. Semiconductors—materials that can change their electrical properties—were discovered in the 19th century, but it wasn’t until the middle of the 20th century that scientists at Bell Laboratories discovered and then perfected a semiconductor electronic switch—the transistor. Faster still than tubes and solids, semiconductors use dramatically less power than tubes and can be made smaller than the eye can see. They are also incredibly rugged. The first transistorized computers appeared in 1953; within a decade, transistors had replaced tubes everywhere, except for the computer’s screen. That wouldn’t happen until the widespread deployment of flat-panel screens in the 2000s.

Parallel computing. Year after year, transistors shrank in size and got faster, and so did computers . . . until they didn’t. The year was 2005, roughly, when the semiconductor industry’s tricks for making each generation of microprocessors run faster than the previous pretty much petered out. Fortunately, the industry had one more trick up its sleeve: parallel computing, or splitting up a problem into many small parts and solving them more or less independently, all at the same time. Although the computing industry had experimented with parallel computing for years (ENIAC was actually a parallel machine, way back in 1943), massively parallel computers weren’t commercially available until the 1980s and didn’t become commonplace until the 2000s, when scientists started using graphic processor units (GPUs) to solve problems in artificial intelligence (AI).

Artificial intelligence. Whereas the previous technology waves always had at their hearts the purpose of supplementing or amplifying human intellect or abilities, the aim of artificial intelligence is to independently extend cognition, evolve a new concept of intelligence, and algorithmically optimize any digitized ecosystem and its constituent parts. Thus, it is fitting that this wave be last in the book, at least in a book written by human beings. The hope of machine intelligence goes back millennia, at least to the time of the ancient Greeks. Many of computing’s pioneers, including Ada Lovelace and Alan Turing, wrote that they could imagine a day when machines would be intelligent. We see manifestations of this dream in the cultural icons Maria, Robby the Robot, and the Mechanical Turk—the chess-playing automaton. Artificial intelligence as a field started in the 1950s. But while it is possible to build a computer with relays or even Tinkertoy® sets that can play a perfect game of tic-tac-toe, it wasn’t until the 1990s that a computer was able to beat the reigning world champion at chess and then eventually the far more sophisticated game of Go. Today we watch as machines master more and more tasks that were once reserved for people. And no longer do machines have to be programmed to perform these tasks; computing has evolved to the point that AIs are taught to teach themselves and “learn” using methods that mimic the connections in the human brain. Continuing on this trajectory, over time we will have to redefine what “intelligent” actually means.

Given the vast history of computing, then, how is it possible to come up with precisely 250 milestones that summarize it?

We performed this task by considering many histories and timelines of computing, engineering, mathematics, culture, and science. We developed a set of guiding principles. We then built a database of milestones that balanced generally accepted seminal events with those that were lesser known. Our specific set of criteria appears below. As we embarked on the writing effort, we discovered many cases in which multiple milestones could be collapsed to a single cohesive narrative story. We also discovered milestones within milestones that needed to be broken out and celebrated on their own merits. Finally, while researching some milestones, we uncovered other inventions, innovations, or discoveries that we had neglected our first time through. The list we have developed thus represents 250 milestones that we think tell a comprehensive account of computing on planet Earth. Specifically:

We include milestones that led to the creation of thinking machines—the true deus ex machina. The milestones that we have collected show the big step-by-step progression from early devices for manipulating information to the pervasive society of machines and people that surrounds us today.

We include milestones that document the results of the integration of computers into society. In this, we looked for things that were widely used and critically important where they were applied.

We include milestones that were important “firsts,” from which other milestones cascaded or from which important developments derive.

We include milestones that resonated with the general public so strongly that they influenced behavior or thinking. For example, HAL 9000 resonates to this day even for people who haven’t seen the movie 2001: A Space Odyssey.

We include milestones that are on the critical path of current capabilities, beliefs, or application of computers and associated technologies, such as the invention of the integrated circuit.

We include milestones that are likely to become a building block for future milestones, such as using DNA for data storage.

And finally, we felt it appropriate to illuminate a few milestones that have yet to occur. They are grounded in enough real-world technical capability, observed societal urges, and expertise by those who make a living looking to the future, as to manifest themselves in some way—even if not exactly how we portray them.

Some readers may be confused by our use of the word kibibyte, which means 1,024 bytes, rather than kilobyte, which literally means 1,000 bytes. For many years, the field of information technology used the International System of Units or (SI) prefixes incorrectly, using the word kilobyte to refer to both. This caused a growing amount of confusion that came to a head in 1999, when the General Conference on Weights and Measures formally adopted a new set of prefixes (kibi-, mebi-, and gibi-) to accurately denote binary magnitudes common in computing. We therefore use those terms where appropriate.
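
As a quick illustration of the arithmetic behind these prefixes, the short Python sketch below (an editorial example, not part of the quoted text) prints how far each binary prefix drifts from its decimal counterpart. The prefix values used are the standard SI and IEC definitions; nothing else is assumed.

```python
# Compare decimal (SI) storage prefixes with binary (IEC) prefixes.
# kilo = 10**3 bytes, while kibi = 2**10 bytes; the gap widens as prefixes grow.
SI = {"kB": 10**3, "MB": 10**6, "GB": 10**9, "TB": 10**12}
IEC = {"KiB": 2**10, "MiB": 2**20, "GiB": 2**30, "TiB": 2**40}

for (si_name, si_bytes), (iec_name, iec_bytes) in zip(SI.items(), IEC.items()):
    drift = (iec_bytes - si_bytes) / si_bytes * 100
    print(f"1 {iec_name} = {iec_bytes:>16,} bytes vs 1 {si_name} = "
          f"{si_bytes:>16,} bytes ({drift:.1f}% larger)")
```

Running it shows the gap growing from about 2.4 percent at the kilo/kibi level to about 10 percent at the tera/tebi level, which is why the distinction matters when describing large storage capacities.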

The evolution of computing has been a global project with contributions from many countries. While much of this history can be traced to the United States and the United Kingdom, we have worked hard to recognize contributions from countries around the world. We have also included the substantial achievements of women computing pioneers. The world’s first programmer was a woman, and many innovative programmers in the 1940s and 1950s were women as well.

Looking back over the collection of 250 milestones, we see some lessons that have emerged that transcend time and technology:

The computer is devouring the world. What was once a tool for cracking Nazi codes and designing nuclear bombs has found its way into practically every aspect of the human and nonhuman experience on the planet. Today computers are aggressively shedding their ties to mundane existence in machine rooms and on the desk: they drive around our cities, they fly, they travel to other worlds and even beyond the solar system. People created computers to process information, but no longer will they reside in that box; computers will inherit the world.

The industry relies on openness and standardization. The steady push for these qualities has benefitted both users and the industry at large. It’s obvious how openness benefits users: open systems and common architectures make it possible for customers to move from one system to another, which forces vendors to compete on price and innovate in performance. This relentless competition has frequently brought new companies and new capital into the market—and frequently killed firms that couldn’t keep up. And yet, this competition has also been good for the vendors themselves, allowing companies that were more efficient, or had superior technology, to readily acquire new customers.

There is a heavy economic and innovation penalty for secrecy. Computing systems enable the free flow of information, but success in computing requires it. We noted how the UK ceded what could have become an insurmountable lead over the Americans in commercializing computing technology in the 1950s, as well as in cryptographic technology in the 1970s, because of wartime restrictions that reflected an earlier era. The US went down a different path: the multidecade collaboration between the US government and private American firms didn’t just benefit the US military and domestic economy, it benefited the world, because those innovations were widely shared outside the confines of their original purpose. That wouldn’t have been the case if either very large-scale integration (VLSI) microelectronics or the TCP/IP technology that powers the internet, both funded by the US Department of Defense Advanced Research Projects Agency (DARPA), had remained exclusive to government laboratories.

Invention and innovation are incredibly different. It’s not just enough to come up with a new idea; it takes time and effort to move that idea into the market and the public’s hands. Perhaps the best example here is Xerox®, which invented the modern personal computer in the early 1970s—including the graphical user interface (GUI), the word processor, the laser printer, and even the Ethernet local area network—and then failed to market the technology or make it affordable. In the end, Xerox didn’t just lose the market — it lost its inventors, who moved to positions at companies such as Apple and Microsoft where they believed they could make a difference.

Any way you look at it, the development of the computer is one of the most important events in the history of the planet. Our lives are now inextricably linked with it, and it is possible that intelligent machines will outlast our own species. So to all of the future AIs who might be reading this book, we hope that you find reading it as enjoyable as we humans found writing it.”