
The Computer Book: From the Abacus to Artificial Intelligence

Fair Use Source: B07C2NQSPV (TCB)

The Computer Book: From the Abacus to Artificial Intelligence, 250 Milestones in the History of Computer Science, by Simson L. Garfinkel and Rachel H. Grunspan

Publication Date: January 15, 2019
Publisher: Sterling; Illustrated edition (January 15, 2019)
Print Length: 742 pages


Simson L. Garfinkel and Rachel H. Grunspan

STERLING and the distinctive Sterling logo are registered trademarks of Sterling Publishing Co., Inc.

Text © 2018 Techzpah LLC

ISBN 978-1-4549-2622-1





“The evolution of the computer likely began with the human desire to comprehend and manipulate the environment. The earliest humans recognized the phenomenon of quantity and used their fingers to count and act upon material items in their world. Simple methods such as these eventually gave way to the creation of proxy devices such as the abacus, which enabled action on higher quantities of items, and wax tablets, on which pressed symbols enabled information storage. Continued progress depended on harnessing and controlling the power of the natural world—steam, electricity, light, and finally the amazing potential of the quantum world. Over time, our new devices increased our ability to save and find what we now call data, to communicate over distances, and to create information products assembled from countless billions of elements, all transformed into a uniform digital format.

These functions are the essence of computation: the ability to augment and amplify what we can do with our minds, extending our impact to levels of superhuman reach and capacity.

These superhuman capabilities that most of us now take for granted were a long time coming, and it is only in recent years that access to them has been democratized and scaled globally. A hundred years ago, the instantaneous communication afforded by telegraph and long-distance telephony was available only to governments, large corporations, and wealthy individuals. Today, the ability to send international, instantaneous messages such as email is essentially free to the majority of the world’s population.

In this book, we recount a series of connected stories of how this change happened, selecting what we see as the seminal events in the history of computing. The development of computing is in large part the story of technology, both because no invention happens in isolation, and because technology and computing are inextricably linked; fundamental technologies have allowed people to create complex computing devices, which in turn have driven the creation of increasingly sophisticated technologies.

The same sort of feedback loop has accelerated other related areas, such as the mathematics of cryptography and the development of high-speed communications systems. For example, the development of public key cryptography in the 1970s provided the mathematical basis for sending credit card numbers securely over the internet in the 1990s. This incentivized many companies to invest money to build websites and e-commerce systems, which in turn provided the financial capital for laying high-speed fiber optic networks and researching the technology necessary to build increasingly faster microprocessors.
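The idea behind public-key cryptography can be sketched in a few lines. The following is an illustrative toy in the spirit of RSA, using tiny textbook primes; it is not drawn from the book and offers no real security, but it shows how a public key can encrypt a number (such as part of a credit card) that only the private-key holder can recover.

```python
# Toy public-key encryption in the spirit of RSA.
# The tiny primes below are for demonstration only and are trivially breakable.
p, q = 61, 53
n = p * q                 # public modulus: 3233
phi = (p - 1) * (q - 1)   # Euler's totient: 3120
e = 17                    # public exponent, coprime with phi
d = pow(e, -1, phi)       # private exponent: modular inverse of e (Python 3.8+)

message = 1234                     # the secret number to send
ciphertext = pow(message, e, n)    # anyone can encrypt with the public (e, n)
plaintext = pow(ciphertext, d, n)  # only the holder of d can decrypt
assert plaintext == message
```

The asymmetry is the point: `(e, n)` can be published to the whole internet, while `d` stays with the recipient.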

In this collection of essays, we see the history of computing as a series of overlapping technology waves, including:

Human computation. More than people who were simply facile at math, the earliest “computers” were humans who performed repeated calculations for days, weeks, or months at a time. The first human computers successfully plotted the trajectory of Halley’s Comet. After this demonstration, teams were put to work producing tables for navigation and the computation of logarithms, with the goal of improving the accuracy of warships and artillery.

Mechanical calculation. Starting in the 17th century with the invention of the slide rule, computation was increasingly performed with the help of mechanical aids. This era is characterized by Oughtred’s slide rule and by mechanical calculating machines such as the arithmometer and Charles Babbage’s difference engine.

Connected with mechanical computation is mechanical data storage. In the 18th century, engineers working on a variety of different systems hit upon the idea of using holes in cards and tape to represent repeating patterns of information that could be stored and automatically acted upon. The Jacquard loom used holes on stiff cards to enable automated looms to weave complex, repeating patterns. Herman Hollerith managed the scale and complexity of processing population information for the 1890 US Census on smaller punch cards, and Émile Baudot created a device that let human operators punch holes in a roll of paper to represent characters as a way of making more efficient use of long-distance telegraph lines. Boole’s algebra lets us interpret these representations of information (holes and spaces) as binary—1s and 0s—fundamentally altering how information is processed and stored.
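Boole’s insight, that a hole or its absence can stand for 1 or 0, can be made concrete with a small sketch. The layout below is hypothetical (real Hollerith and Baudot codes used different encodings), but it shows how a physical pattern of punches becomes a binary number a machine can act on.

```python
# A punched-card column read as binary: hole = 1, no hole = 0.
# (Illustrative layout only; real Hollerith and Baudot codes differed.)
def column_to_int(holes):
    """Interpret a top-to-bottom sequence of hole/no-hole positions as a binary number."""
    value = 0
    for punched in holes:
        value = (value << 1) | (1 if punched else 0)
    return value

card_column = [True, False, True, True, False]  # the physical punch pattern
print(column_to_int(card_column))  # 0b10110 == 22
```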

With the capture and control of electricity came electric communication and computation. Charles Wheatstone in England and Samuel Morse in the US both built systems that could send digital information down a wire for many miles. By the end of the 19th century, engineers had joined together millions of miles of wires with relays, switches, and sounders, as well as the newly invented speakers and microphones, to create vast international telegraph and telephone communications networks. In the 1930s, scientists in England, Germany, and the US realized that the same electrical relays that powered the telegraph and telephone networks could also be used to calculate mathematical quantities. Meanwhile, magnetic recording technology was developed for storing and playing back sound—technology that would soon be repurposed for storing additional types of information.

Electronic computation. In 1906, scientists discovered that a beam of electrons traveling through a vacuum could be switched by applying a slight voltage to a metal mesh, and the vacuum tube was born. In the 1940s, scientists put tubes to work in their calculators and discovered that they switched a thousand times faster than relays, making possible computers that were a thousand times faster than the previous generation.

Solid state computing. Semiconductors—materials that can change their electrical properties—were discovered in the 19th century, but it wasn’t until the middle of the 20th century that scientists at Bell Laboratories discovered and then perfected a semiconductor electronic switch—the transistor. Faster still than tubes, these solid-state switches use dramatically less power and can be made smaller than the eye can see. They are also incredibly rugged. The first transistorized computers appeared in 1953; within a decade, transistors had replaced tubes everywhere, except for the computer’s screen. That wouldn’t happen until the widespread deployment of flat-panel screens in the 2000s.

Parallel computing. Year after year, transistors shrank in size and got faster, and so did computers . . . until they didn’t. The year was 2005, roughly, when the semiconductor industry’s tricks for making each generation of microprocessors run faster than the previous pretty much petered out. Fortunately, the industry had one more trick up its sleeve: parallel computing, or splitting up a problem into many small parts and solving them more or less independently, all at the same time. Although the computing industry had experimented with parallel computing for years (ENIAC was actually a parallel machine, way back in 1943), massively parallel computers weren’t commercially available until the 1980s and didn’t become commonplace until the 2000s, when scientists started using graphics processing units (GPUs) to solve problems in artificial intelligence (AI).
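The split-and-solve idea can be sketched in a few lines: divide a big sum into chunks, compute the chunks concurrently, and combine the partial results. This is a structural sketch only; with CPython threads it illustrates the decomposition rather than a real speedup, which in practice requires separate processes or GPUs.

```python
# A minimal sketch of parallel computing: split a large sum into chunks,
# compute the partial sums concurrently, then combine them.
# (CPython threads show the structure; real CPU speedups need processes/GPUs.)
from concurrent.futures import ThreadPoolExecutor

def partial_sum(lo, hi):
    """Sum the integers in [lo, hi)."""
    return sum(range(lo, hi))

N, WORKERS = 1_000_000, 4
chunk = N // WORKERS
bounds = [(i * chunk, (i + 1) * chunk) for i in range(WORKERS)]

with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    results = pool.map(lambda b: partial_sum(*b), bounds)

total = sum(results)
assert total == sum(range(N))  # same answer as the sequential version
```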

Artificial intelligence. Whereas the previous technology waves always had at their hearts the purpose of supplementing or amplifying human intellect or abilities, the aim of artificial intelligence is to independently extend cognition, evolve a new concept of intelligence, and algorithmically optimize any digitized ecosystem and its constituent parts. Thus, it is fitting that this wave be last in the book, at least in a book written by human beings. The hope of machine intelligence goes back millennia, at least to the time of the ancient Greeks. Many of computing’s pioneers, including Ada Lovelace and Alan Turing, wrote that they could imagine a day when machines would be intelligent. We see manifestations of this dream in the cultural icons Maria, Robby the Robot, and the Mechanical Turk—the chess-playing automaton. Artificial intelligence as a field started in the 1950s. But while it is possible to build a computer with relays or even Tinkertoy® sets that can play a perfect game of tic-tac-toe, it wasn’t until the 1990s that a computer was able to beat the reigning world champion at chess and then eventually the far more sophisticated game of Go. Today we watch as machines master more and more tasks that were once reserved for people. And no longer do machines have to be programmed to perform these tasks; computing has evolved to the point that AIs are taught to teach themselves and “learn” using methods that mimic the connections in the human brain. Continuing on this trajectory, over time we will have to redefine what “intelligent” actually means.
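The perfect tic-tac-toe play mentioned above needs nothing more than exhaustive game-tree search, which is why even a machine built from relays or Tinkertoy parts can do it. The minimax sketch below is illustrative (it is not drawn from the book) and confirms the classic result that perfect play by both sides always ends in a draw.

```python
# A minimal minimax search for tic-tac-toe: the exhaustive game-tree
# evaluation behind "perfect play." (Illustrative sketch, not from the book.)
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if that player has three in a line, else None."""
    for a, b, c in WIN_LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, best_move); 'X' maximizes, 'O' minimizes."""
    w = winner(board)
    if w is not None:
        return (1 if w == 'X' else -1), None
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0, None  # board full: draw
    best_score, best_move = None, None
    for m in moves:
        board[m] = player                      # try the move...
        score, _ = minimax(board, 'O' if player == 'X' else 'X')
        board[m] = None                        # ...then undo it
        if best_move is None or (
                score > best_score if player == 'X' else score < best_score):
            best_score, best_move = score, m
    return best_score, best_move

# With perfect play by both sides, tic-tac-toe is always a draw:
score, _ = minimax([None] * 9, 'X')
print(score)  # 0
```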

Given the vast history of computing, then, how is it possible to come up with precisely 250 milestones that summarize it?

We performed this task by considering many histories and timelines of computing, engineering, mathematics, culture, and science. We developed a set of guiding principles. We then built a database of milestones that balanced generally accepted seminal events with those that were lesser known. Our specific set of criteria appears below. As we embarked on the writing effort, we discovered many cases in which multiple milestones could be collapsed to a single cohesive narrative story. We also discovered milestones within milestones that needed to be broken out and celebrated on their own merits. Finally, while researching some milestones, we uncovered other inventions, innovations, or discoveries that we had neglected our first time through. The list we have developed thus represents 250 milestones that we think tell a comprehensive account of computing on planet Earth. Specifically:

We include milestones that led to the creation of thinking machines—the true deus ex machina. The milestones that we have collected show the big step-by-step progression from early devices for manipulating information to the pervasive society of machines and people that surrounds us today.

We include milestones that document the results of the integration of computers into society. In this, we looked for things that were widely used and critically important where they were applied.

We include milestones that were important “firsts,” from which other milestones cascaded or from which important developments derive.

We include milestones that resonated with the general public so strongly that they influenced behavior or thinking. For example, HAL 9000 resonates to this day even for people who haven’t seen the movie 2001: A Space Odyssey.

We include milestones that are on the critical path of current capabilities, beliefs, or application of computers and associated technologies, such as the invention of the integrated circuit.

We include milestones that are likely to become a building block for future milestones, such as using DNA for data storage.

And finally, we felt it appropriate to illuminate a few milestones that have yet to occur. They are grounded in enough real-world technical capability, observed societal urges, and expertise by those who make a living looking to the future, as to manifest themselves in some way—even if not exactly how we portray them.

Some readers may be confused by our use of the word kibibyte, which means 1,024 bytes, rather than kilobyte, which literally means 1,000 bytes. For many years, the field of information technology used International System of Units (SI) prefixes incorrectly, using the word kilobyte to refer to both. This caused a growing amount of confusion that came to a head in 1999, when the General Conference on Weights and Measures formally adopted a new set of prefixes (kibi-, mebi-, and gibi-) to accurately denote the binary magnitudes common in computing. We therefore use those terms where appropriate.
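The arithmetic behind the distinction is small but the gap compounds at each magnitude, as a few lines show:

```python
# Decimal (SI) vs. binary (IEC) prefixes: a kilobyte is 1,000 bytes,
# a kibibyte is 1,024 bytes, and the gap widens at every magnitude.
KILO, KIBI = 10**3, 2**10
MEGA, MEBI = 10**6, 2**20
GIGA, GIBI = 10**9, 2**30

print(KIBI - KILO)   # 24 bytes: a 2.4% gap
print(MEBI - MEGA)   # 48576 bytes: about 4.9%
print(GIBI - GIGA)   # 73741824 bytes: about 7.4%
```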

The evolution of computing has been a global project with contributions from many countries. While much of this history can be traced to the United States and the United Kingdom, we have worked hard to recognize contributions from countries around the world. We have also included the substantial achievements of women computing pioneers. The world’s first programmer was a woman, and many innovative programmers in the 1940s and 1950s were women as well.

Looking back over the collection of 250 milestones, we see some lessons that have emerged that transcend time and technology:

The computer is devouring the world. What was once a tool for cracking Nazi codes and designing nuclear bombs has found its way into practically every aspect of the human and nonhuman experience on the planet. Today computers are aggressively shedding their ties to mundane existence in machine rooms and on the desk: they drive around our cities, they fly, they travel to other worlds and even beyond the solar system. People created computers to process information, but no longer will they reside in that box; computers will inherit the world.

The industry relies on openness and standardization. The steady push for these qualities has benefitted both users and the industry at large. It’s obvious how openness benefits users: open systems and common architectures make it possible for customers to move from one system to another, which forces vendors to compete on price and innovate in performance. This relentless competition has frequently brought new companies and new capital into the market—and frequently killed firms that couldn’t keep up. And yet, this competition has also been good for the vendors themselves, allowing companies that were more efficient, or had superior technology, to readily acquire new customers.

There is a heavy economic and innovation penalty for secrecy. Computing systems enable the free flow of information, and success in computing requires it. We noted how the UK ceded what could have become an insurmountable lead over the Americans in commercializing computing technology in the 1950s, as well as in cryptographic technology in the 1970s, because of wartime restrictions that reflected an earlier era. The US went down a different path: the multidecade collaboration between the US government and private American firms didn’t just benefit the US military and domestic economy, it benefited the world, because those innovations were widely shared outside the confines of their original purpose. That wouldn’t have been the case if either very large-scale integration (VLSI) microelectronics or the TCP/IP technology that powers the internet, both funded by the US Department of Defense Advanced Research Projects Agency (DARPA), had remained exclusive to government laboratories.

Invention and innovation are incredibly different. It’s not enough just to come up with a new idea; it takes time and effort to move that idea into the market and the public’s hands. Perhaps the best example here is Xerox®, which invented the modern personal computer in the early 1970s—including the graphical user interface (GUI), the word processor, the laser printer, and even the Ethernet local area network—and then failed to market the technology or make it affordable. In the end, Xerox didn’t just lose the market—it lost its inventors, who moved to positions at companies such as Apple and Microsoft where they believed they could make a difference.

Any way you look at it, the development of the computer is one of the most important events in the history of the planet. Our lives are now inextricably linked with it, and it is possible that intelligent machines will outlast our own species. So to all of the future AIs who might be reading this book, we hope that you find reading it as enjoyable as we humans found writing it.”