“A limited hangout or partial hangout is, according to former special assistant to the Deputy Director of the Central Intelligence Agency Victor Marchetti, “spy jargon for a favorite and frequently used gimmick of the clandestine professionals. When their veil of secrecy is shredded and they can no longer rely on a phony cover story to misinform the public, they resort to admitting—sometimes even volunteering—some of the truth while still managing to withhold the key and damaging facts in the case. The public, however, is usually so intrigued by the new information that it never thinks to pursue the matter further.”[1][2] (WP)
“The phrase was coined in the following exchange:[5]” (WP)
PRESIDENT: You think, you think we want to, want to go this route now? And the — let it hang out, so to speak?
DEAN: Well, it’s, it isn’t really that —
HALDEMAN: It’s a limited hang out.
DEAN: It’s a limited hang out.
EHRLICHMAN: It’s a modified limited hang out.
PRESIDENT: Well, it’s only the questions of the thing hanging out publicly or privately.
“Before this exchange, the discussion captures Nixon outlining to Dean the content of a report that Dean would create, laying out a misleading view of the role of the White House staff in events surrounding the Watergate burglary. In Ehrlichman’s words: “And the report says, ‘Nobody was involved.’” The document would then be shared with the United States Senate Watergate Committee investigating the affair. The report would serve the administration’s goals by protecting the President, providing documentary support for his false statements should information come to light that contradicted his stated position. Further, the group discusses having information on the report leaked by those on the Committee sympathetic to the President, to put exculpatory information into the public sphere.[5]” (WP)
“The phrase has been cited as a summation of the strategy of mixing partial admissions with misinformation and resistance to further investigation, and is used in political commentary to accuse people or groups of following a Nixon-like strategy.[6]” (WP) However, this “strategy” has been used since time immemorial.
“macOS is the direct successor to the classic Mac OS, the line of Macintosh operating systems with nine releases from 1984 to 1999. macOS adopted the Unix kernel and inherited technologies developed between 1985 and 1997 at NeXT, the company that Apple co-founder Steve Jobs created after leaving Apple in 1985. Releases from Mac OS X 10.5 Leopard[11] and thereafter are UNIX 03 certified.[12] Apple’s mobile operating system, iOS, has been considered a variant of macOS.[13]” (WP)
“The first desktop version, Mac OS X 10.0, was released in March 2001, with its first update, 10.1, arriving later that year. The “X” in Mac OS X and OS X is the Roman numeral for the number 10 and is pronounced as such. The X was a prominent part of the operating system’s brand identity and marketing in its early years, but gradually receded in prominence since the release of Snow Leopard in 2009. Apple began naming its releases after big cats, which lasted until OS X 10.8 Mountain Lion. Since OS X 10.9 Mavericks, releases have been named after locations in California.[14] Apple shortened the name to “OS X” in 2012 and then changed it to “macOS” in 2016, adopting the nomenclature that they were using for their other operating systems, iOS, watchOS, and tvOS. With Big Sur, Apple advanced the macOS major version number for the first time, changing it to 11 for Big Sur from the 10 used for all previous releases.” (WP)
1909 – Charles Herrold in San Jose started the first radio station in the USA with regularly scheduled programming, including songs, using an arc transmitter of his own design. Herrold was one of Stanford’s earliest students and founded his own College of Wireless and Engineering in San Jose.
1935 – Fujitsu founded as Fuji Telecommunications Equipment Manufacturing in Japan. Fujitsu is the second-oldest IT company, founded after IBM and before Hewlett-Packard.
1961 – Shinshu Seiki Company founded in Japan (now called Seiko Epson Corporation) as a subsidiary of Seiko to supply precision parts for Seiko watches.
1968 – Dot Matrix Printer – Shinshu Seiki (now called Seiko Epson Corporation) launched the world’s first mini-printer, the EP-101 (“EP” for Electronic Printer), which was soon incorporated into many calculators.
1973 – Danny Cohen first demonstrated a form of packet voice as part of a flight simulator application, which operated across the early ARPANET.[69][70]
1973 – Xerox Alto from Xerox Palo Alto Research Center (PARC)
1975 – The name Epson was coined for the next generation of printers based on the EP-101, which was released to the public (EPSON: “SON of Electronic Printer”).[7] Epson America Inc. was established to sell printers for Shinshu Seiki Co.
1977 – Danny Cohen and Jon Postel of the USC Information Sciences Institute, and Vint Cerf of the Defense Advanced Research Projects Agency (DARPA), agree to separate IP from TCP, and create UDP for carrying real-time traffic.
“Rarely do consumers line up two days before the release of a product—armed with sleeping bags and changes of clothes—to make sure they can buy it. But that is exactly what preceded the launch of the Apple iPhone on June 29, 2007.
The iPhone’s design and functionality changed the entire smartphone concept by bundling together capabilities that had never been married before: telephony, messaging, internet access, music, a vibrant color screen, and an intuitive, touch-based interface. Without the physical buttons that were common on other smartphones at the time, the entire surface was available for presenting information. The keyboard appeared only when needed—and it was much easier to type accurately, thanks to behind-the-scenes AI that invisibly adjusted the sensitive area around each key in response to what letters the user was forecast to press next.
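The predictive-keyboard behavior described in the paragraph above can be sketched in a few lines. This is a toy illustration, not Apple’s actual implementation: the next-letter probabilities, key positions, and tuning constants below are all invented for the example.

```python
# Toy sketch of a predictive keyboard hit target (not Apple's implementation):
# each key's effective touch radius grows with the probability that its letter
# comes next, so ambiguous taps resolve toward the more likely key.

# Hypothetical next-letter probabilities after typing "th" (illustrative only).
NEXT_LETTER_PROB = {"e": 0.62, "a": 0.14, "i": 0.09, "o": 0.05}

BASE_RADIUS = 22.0   # nominal touch-target radius, in points
MAX_BOOST = 0.5      # a near-certain letter gets at most a 50% larger target

def touch_radius(letter: str) -> float:
    """Effective touch-target radius for a key, widened by its probability."""
    return BASE_RADIUS * (1.0 + MAX_BOOST * NEXT_LETTER_PROB.get(letter, 0.01))

def resolve_tap(tap_x: float, key_centers: dict) -> str:
    """Return the key whose probability-weighted target best matches the tap."""
    # Normalize the distance to each key by that key's effective radius.
    return min(key_centers, key=lambda k: abs(tap_x - key_centers[k]) / touch_radius(k))

if __name__ == "__main__":
    centers = {"e": 100.0, "a": 40.0, "i": 160.0, "o": 180.0}  # 1-D key positions
    # A tap slightly nearer to "i" still resolves to the far more probable "e".
    print(resolve_tap(132.0, centers))  # -> e
```

In effect, the geometry of the keyboard quietly bends toward the word the user is most likely typing, which is the invisible adjustment the excerpt describes.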
The following year, Apple introduced its next big thing: specialized programs called apps, downloadable over the air. The original iPhone shipped with a few built-in apps and a web browser. Apple CEO Steve Jobs had envisioned that third-party developers would be able to write only web apps. Early adopters, however, started overcoming Apple’s security mechanisms by “jailbreaking” their phones and installing their own native apps. Jobs realized that if users were that determined to run native apps, Apple might as well supply the content and make a profit.
The Apple iTunes App Store opened in 2008 with 500 apps. Suddenly that piece of electronics in your pocket was more than a phone to make calls or check email—it became a super gadget, able to play games, manipulate photographs, track your workout, and much more. In October 2013, Apple announced that there were a million apps available for the iPhone, many of them realizing new location-based services, such as ride-sharing, dating, and localized restaurant reviews, to name a few.
While the iPhone has largely been celebrated, it has also been accused of ushering in the era of “smartphone addiction,” with the average person, according to a 2016 study, now checking his or her smartphone 2,617 times a day. Since the original release in 2007, more than 1 billion iPhones have been sold worldwide, and it still holds the record for taking only three months to get to 1 million units sold.”
9to5 Staff. “Jobs’ Original Vision for the iPhone: No Third-Party Native Apps.” 9To5 Mac (website), October 21, 2011. https://9to5mac.com/2011/10/21/jobs-original-vision-for-the-iphone-no-third-party-native-apps/.
Steve Jobs (1955–2011), Jeff Robbin (dates unavailable), Bill Kincaid (b. 1956), Dave Heller (dates unavailable)
“The music business at the end of the 20th century was in an epic fight to maintain its profitable business model. Music had become 1s and 0s and was being widely shared, without compensation, among users through online services such as Napster. The industry was filing suit against both the services and their users to protect copyrights.
Apple cofounder Steve Jobs saw an opportunity and in 2000 purchased SoundJam MP, a program that functioned as a music content manager and player. It was developed by two former Apple software engineers, Bill Kincaid and Jeff Robbin, along with Dave Heller, who all took up residence at Apple and evolved the product into what would become iTunes.
iTunes debuted on January 9, 2001, at Macworld Expo in San Francisco. For the first two years, iTunes was promoted as a software jukebox that offered a simple interface to organize MP3s and convert CDs into compressed audio formats. In October 2001, Apple released a digital audio player, the iPod, which would neatly sync with a user’s iTunes library over a wire. This hardware release set the stage for the next big evolution, which came with iTunes version 4 in 2003—the iTunes Music Store, which launched with 200,000 songs. Now users could buy licensed, high-quality digital music from Apple.
Buying music from a computer company was a radical concept. It flipped the traditional business model and gave the music industry an organized, legitimate mechanism in the digital space to profit from, and protect, their intellectual property.
The music labels agreed to participate in the iTunes model and allowed Jobs to sell their inventory in part because he agreed to copy-protect their songs with Digital Rights Management (DRM). (Apple significantly eased the DRM-based restrictions for music in 2009.) Consumers embraced iTunes in part because they could buy single songs again—no longer did they have to purchase an entire album to get one or two tracks.
In the following years, iTunes would snowball into a media juggernaut adding music videos, movies, television shows, audio books, podcasts, radio, and music streaming—all of which were integrated with new products and services from Apple, including Apple TV, the iPhone, and the iPad.”
SEE ALSO MPEG (1988), Diamond Rio MP3 Player (1998)
Apple’s Steve Jobs announces the release of new upgrades to iTunes and other Apple products at a press conference in San Francisco, California, on September 1, 2010.
Created by Apple and introduced on June 2, 2014, the Swift programming language is used to create programs and apps for iOS, macOS, the Apple Watch, and Apple TV.
Engineers at Apple developed the Dylan programming language in the early 1990s. Dylan was designed to resemble the syntax of the ALGOL programming language.
Steve Jobs (1955–2011), Steve Wozniak (b. 1950), Randy Wigginton (dates unavailable)
The 1977 Apple II, shown here with two Disk II floppy disk drives and a 1980s-era Apple Monitor II.
The Apple II series (trademarked with square brackets as “Apple ][” and rendered on later models as “Apple //”) is a family of home computers, one of the first highly successful mass-produced microcomputer products,[1] designed primarily by Steve Wozniak, manufactured by Apple Computer (now Apple Inc.), and launched in 1977 with the original Apple II.
Apple II in common 1977 configuration with 9″ monochrome monitor, game paddles, and Red Book recommended Panasonic RQ-309DS cassette deck
“If the Altair 8800 was the machine that put computers in the hands of individual hobbyists, then the Apple II was the machine that put computers in the hands of everyday people. Steve Wozniak, the lead designer, and Randy Wigginton, the programmer, demonstrated the first prototype at the legendary Homebrew Computer Club in December 1976 and, along with Steve Jobs, the team’s financial wizard and chief promoter, introduced it to the public in April 1977 at the West Coast Computer Faire. The Apple II was the first successful mass-produced personal computer.” (B07C2NQSPV)
“The Apple II was based on the Apple I, which Wozniak designed and built himself. The Apple I was sold as a single-board computer: purchasers needed to supply their own keyboard, monitor—or a television and a radio frequency (RF) modulator—and a case. The Apple II, in contrast, came with keyboard and case, although it still needed an RF modulator to display on a TV.” (B07C2NQSPV)
“The Apple II was widely popular with techies, schools, and the general consumer.” (B07C2NQSPV)
“It offered BASIC in ROM, so users could start to write and run programs as soon as the machine powered on. It came with a reliable audiocassette interface, making it easy to save and load programs on a low-cost, consumer-grade cassette deck. It even had color graphics, a first for the industry.” (B07C2NQSPV)
“In 1978, Apple introduced a low-cost 5¼-inch external floppy drive, which used software and innovative circuit design to eliminate electronic components. Faster and more reliable than the cassette, and capable of random access, the disk turned the Apple II from a curiosity into a serious tool for education and business. Then in 1979, VisiCalc, the first personal spreadsheet program, was introduced. Designed specifically to run on the Apple II, VisiCalc helped drive new sales of the computer.” (B07C2NQSPV)
“The Apple II was a runaway success, with Apple’s revenues growing from $775,000 to $118 million from September 1977 through September 1980. Apple ultimately released seven major versions of the Apple II. Between 5 million and 6 million computers would ultimately be sold.” (B07C2NQSPV)
“The evolution of the computer likely began with the human desire to comprehend and manipulate the environment. The earliest humans recognized the phenomenon of quantity and used their fingers to count and act upon material items in their world. Simple methods such as these eventually gave way to the creation of proxy devices such as the abacus, which enabled action on higher quantities of items, and wax tablets, on which pressed symbols enabled information storage. Continued progress depended on harnessing and controlling the power of the natural world—steam, electricity, light, and finally the amazing potential of the quantum world. Over time, our new devices increased our ability to save and find what we now call data, to communicate over distances, and to create information products assembled from countless billions of elements, all transformed into a uniform digital format.
These functions are the essence of computation: the ability to augment and amplify what we can do with our minds, extending our impact to levels of superhuman reach and capacity.
These superhuman capabilities that most of us now take for granted were a long time coming, and it is only in recent years that access to them has been democratized and scaled globally. A hundred years ago, the instantaneous communication afforded by telegraph and long-distance telephony was available only to governments, large corporations, and wealthy individuals. Today, the ability to send international, instantaneous messages such as email is essentially free to the majority of the world’s population.
In this book, we recount a series of connected stories of how this change happened, selecting what we see as the seminal events in the history of computing. The development of computing is in large part the story of technology, both because no invention happens in isolation, and because technology and computing are inextricably linked; fundamental technologies have allowed people to create complex computing devices, which in turn have driven the creation of increasingly sophisticated technologies.
The same sort of feedback loop has accelerated other related areas, such as the mathematics of cryptography and the development of high-speed communications systems. For example, the development of public key cryptography in the 1970s provided the mathematical basis for sending credit card numbers securely over the internet in the 1990s. This incentivized many companies to invest money to build websites and e-commerce systems, which in turn provided the financial capital for laying high-speed fiber optic networks and researching the technology necessary to build increasingly faster microprocessors.
In this collection of essays, we see the history of computing as a series of overlapping technology waves, including:
Human computation. More than people who were simply facile at math, the earliest “computers” were humans who performed repeated calculations for days, weeks, or months at a time. The first human computers successfully plotted the trajectory of Halley’s Comet. After this demonstration, teams were put to work producing tables for navigation and the computation of logarithms, with the goal of improving the accuracy of warships and artillery.
Mechanical calculation. Starting in the 17th century with the invention of the slide rule, computation was increasingly realized with the help of mechanical aids. This era is characterized by mechanisms such as Oughtred’s slide rule and mechanical adding machines such as Charles Babbage’s difference engine and the arithmometer.
Connected with mechanical computation is mechanical data storage. In the 18th century, engineers working on a variety of different systems hit upon the idea of using holes in cards and tape to represent repeating patterns of information that could be stored and automatically acted upon. The Jacquard loom used holes on stiff cards to enable automated looms to weave complex, repeating patterns. Herman Hollerith managed the scale and complexity of processing population information for the 1890 US Census on smaller punch cards, and Émile Baudot created a device that let human operators punch holes in a roll of paper to represent characters as a way of making more efficient use of long-distance telegraph lines. Boole’s algebra lets us interpret these representations of information (holes and spaces) as binary—1s and 0s—fundamentally altering how information is processed and stored.
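As a small aside on the last point above, a row of punched and unpunched positions really is nothing more than a binary number. A minimal sketch follows; the pattern is invented for illustration and is not an actual Baudot or Hollerith code.

```python
# Holes and spaces on a card or tape, read as binary digits.
# The pattern below is invented for illustration; it is not a real
# Baudot or Hollerith encoding of any particular character.

row = ["hole", "space", "hole", "hole", "space"]  # five positions, left to right

bits = "".join("1" if position == "hole" else "0" for position in row)
value = int(bits, 2)  # interpret the pattern as a base-2 number

print(bits)   # -> 10110
print(value)  # -> 22
```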
With the capture and control of electricity came electric communication and computation. Charles Wheatstone in England and Samuel Morse in the US both built systems that could send digital information down a wire for many miles. By the end of the 19th century, engineers had joined together millions of miles of wires with relays, switches, and sounders, as well as the newly invented speakers and microphones, to create vast international telegraph and telephone communications networks. In the 1930s, scientists in England, Germany, and the US realized that the same electrical relays that powered the telegraph and telephone networks could also be used to calculate mathematical quantities. Meanwhile, magnetic recording technology was developed for storing and playing back sound—technology that would soon be repurposed for storing additional types of information.
Electronic computation. In 1906, scientists discovered that a beam of electrons traveling through a vacuum could be switched by applying a slight voltage to a metal mesh, and the vacuum tube was born. In the 1940s, scientists tried using tubes in their calculators and discovered that they ran a thousand times faster than relays. Replacing relays with tubes allowed the creation of computers that were a thousand times faster than the previous generation.
Solid state computing. Semiconductors—materials that can change their electrical properties—were discovered in the 19th century, but it wasn’t until the middle of the 20th century that scientists at Bell Laboratories discovered and then perfected a semiconductor electronic switch—the transistor. Faster still than tubes, semiconductors use dramatically less power than tubes and can be made smaller than the eye can see. They are also incredibly rugged. The first transistorized computers appeared in 1953; within a decade, transistors had replaced tubes everywhere, except for the computer’s screen. That wouldn’t happen until the widespread deployment of flat-panel screens in the 2000s.
Parallel computing. Year after year, transistors shrank in size and got faster, and so did computers . . . until they didn’t. The year was 2005, roughly, when the semiconductor industry’s tricks for making each generation of microprocessors run faster than the previous pretty much petered out. Fortunately, the industry had one more trick up its sleeve: parallel computing, or splitting up a problem into many small parts and solving them more or less independently, all at the same time. Although the computing industry had experimented with parallel computing for years (ENIAC was actually a parallel machine, way back in 1943), massively parallel computers weren’t commercially available until the 1980s and didn’t become commonplace until the 2000s, when scientists started using graphics processing units (GPUs) to solve problems in artificial intelligence (AI).
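A minimal sketch of the split-and-combine idea described above, using Python’s standard multiprocessing module; the workload is a toy, since real parallel jobs on GPUs or clusters are vastly larger.

```python
# Split one job (summing squares up to n) into independent chunks,
# compute the chunks at the same time in separate processes, then combine.
from multiprocessing import Pool

def partial_sum(bounds):
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    n, workers = 2_000_000, 4
    step = n // workers
    chunks = [(w * step, n if w == workers - 1 else (w + 1) * step)
              for w in range(workers)]

    with Pool(workers) as pool:
        total = sum(pool.map(partial_sum, chunks))  # chunks run in parallel

    assert total == sum(i * i for i in range(n))    # matches the serial answer
    print(total)
```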
Artificial intelligence. Whereas the previous technology waves always had at their hearts the purpose of supplementing or amplifying human intellect or abilities, the aim of artificial intelligence is to independently extend cognition, evolve a new concept of intelligence, and algorithmically optimize any digitized ecosystem and its constituent parts. Thus, it is fitting that this wave be last in the book, at least in a book written by human beings. The hope of machine intelligence goes back millennia, at least to the time of the ancient Greeks. Many of computing’s pioneers, including Ada Lovelace and Alan Turing, wrote that they could imagine a day when machines would be intelligent. We see manifestations of this dream in the cultural icons Maria, Robby the Robot, and the Mechanical Turk—the chess-playing automaton. Artificial intelligence as a field started in the 1950s. But while it is possible to build a computer with relays or even Tinkertoy® sets that can play a perfect game of tic-tac-toe, it wasn’t until the 1990s that a computer was able to beat the reigning world champion at chess and then eventually the far more sophisticated game of Go. Today we watch as machines master more and more tasks that were once reserved for people. And no longer do machines have to be programmed to perform these tasks; computing has evolved to the point that AIs are taught to teach themselves and “learn” using methods that mimic the connections in the human brain. Continuing on this trajectory, over time we will have to redefine what “intelligent” actually means.
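The “perfect game of tic-tac-toe” mentioned above needs nothing more than an exhaustive minimax search of the game tree; a machine built from relays, Tinkertoys, or silicon plays perfectly as long as it can evaluate something like the sketch below, written here in Python purely for illustration.

```python
# Exhaustive minimax for tic-tac-toe: X maximizes, O minimizes.
# Perfect play from an empty board ends in a draw (score 0).
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score for X, best move index) assuming both sides play perfectly."""
    w = winner(board)
    if w is not None:
        return (1 if w == "X" else -1), None
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0, None  # board full: draw
    best_score, best_move = None, None
    for m in moves:
        board[m] = player
        score, _ = minimax(board, "O" if player == "X" else "X")
        board[m] = " "
        if (best_score is None
                or (player == "X" and score > best_score)
                or (player == "O" and score < best_score)):
            best_score, best_move = score, m
    return best_score, best_move

if __name__ == "__main__":
    score, move = minimax([" "] * 9, "X")
    print(score, move)  # -> 0 and an opening move: perfect play is a draw
```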
Given the vast history of computing, then, how is it possible to come up with precisely 250 milestones that summarize it?
We performed this task by considering many histories and timelines of computing, engineering, mathematics, culture, and science. We developed a set of guiding principles. We then built a database of milestones that balanced generally accepted seminal events with those that were lesser known. Our specific set of criteria appears below. As we embarked on the writing effort, we discovered many cases in which multiple milestones could be collapsed to a single cohesive narrative story. We also discovered milestones within milestones that needed to be broken out and celebrated on their own merits. Finally, while researching some milestones, we uncovered other inventions, innovations, or discoveries that we had neglected our first time through. The list we have developed thus represents 250 milestones that we think tell a comprehensive account of computing on planet Earth. Specifically:
We include milestones that led to the creation of thinking machines—the true deus ex machina. The milestones that we have collected show the big step-by-step progression from early devices for manipulating information to the pervasive society of machines and people that surrounds us today.
We include milestones that document the results of the integration of computers into society. In this, we looked for things that were widely used and critically important where they were applied.
We include milestones that were important “firsts,” from which other milestones cascaded or from which important developments derive.
We include milestones that resonated with the general public so strongly that they influenced behavior or thinking. For example, HAL 9000 resonates to this day even for people who haven’t seen the movie 2001: A Space Odyssey.
We include milestones that are on the critical path of current capabilities, beliefs, or application of computers and associated technologies, such as the invention of the integrated circuit.
We include milestones that are likely to become a building block for future milestones, such as using DNA for data storage.
And finally, we felt it appropriate to illuminate a few milestones that have yet to occur. They are grounded in enough real-world technical capability, observed societal urges, and expertise by those who make a living looking to the future, as to manifest themselves in some way—even if not exactly how we portray them.
Some readers may be confused by our use of the word kibibyte, which means 1,024 bytes, rather than kilobyte, which literally means 1,000 bytes. For many years, the field of information technology used the International System of Units (SI) prefixes incorrectly, using the word kilobyte to refer to both. This caused a growing amount of confusion that came to a head in 1999, when the International Electrotechnical Commission formally adopted a new set of prefixes (kibi-, mebi-, and gibi-) to accurately denote binary magnitudes common in computing. We therefore use those terms where appropriate.
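The distinction above in plain arithmetic; the constants come straight from the definitions, and the “64 GB” drive is just an example size.

```python
# Decimal (SI) prefixes vs. the binary prefixes adopted in 1999.
KILOBYTE, KIBIBYTE = 10**3, 2**10   # 1,000 vs 1,024 bytes
MEGABYTE, MEBIBYTE = 10**6, 2**20   # 1,000,000 vs 1,048,576 bytes
GIGABYTE, GIBIBYTE = 10**9, 2**30   # 10**9 vs 1,073,741,824 bytes

print(KIBIBYTE - KILOBYTE)                 # -> 24: a 2.4% gap at the kilo scale
print(round(64 * GIGABYTE / GIBIBYTE, 1))  # -> 59.6: why a "64 GB" drive reports ~59.6 GiB
```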
The evolution of computing has been a global project with contributions from many countries. While much of this history can be traced to the United States and the United Kingdom, we have worked hard to recognize contributions from countries around the world. We have also included the substantial achievements of women computing pioneers. The world’s first programmer was a woman, and many innovative programmers in the 1940s and 1950s were women as well.
Looking back over the collection of 250 milestones, we see some lessons that have emerged that transcend time and technology:
The computer is devouring the world. What was once a tool for cracking Nazi codes and designing nuclear bombs has found its way into practically every aspect of the human and nonhuman experience on the planet. Today computers are aggressively shedding their ties to mundane existence in machine rooms and on the desk: they drive around our cities, they fly, they travel to other worlds and even beyond the solar system. People created computers to process information, but no longer will they reside in that box; computers will inherit the world.
The industry relies on openness and standardization. The steady push for these qualities has benefitted both users and the industry at large. It’s obvious how openness benefits users: open systems and common architectures make it possible for customers to move from one system to another, which forces vendors to compete on price and innovate in performance. This relentless competition has frequently brought new companies and new capital into the market—and frequently killed firms that couldn’t keep up. And yet, this competition has also been good for the vendors themselves, allowing companies that were more efficient, or had superior technology, to readily acquire new customers.
There is a heavy economic and innovation penalty for secrecy. Computing systems enable the free flow of information, but success in computing requires it. We noted how the UK ceded what could have become an insurmountable lead over the Americans in commercializing computing technology in the 1950s, as well as in cryptographic technology in the 1970s, because of wartime restrictions that reflected an earlier era. The US went down a different path: the multidecade collaboration between the US government and private American firms didn’t just benefit the US military and domestic economy, it benefited the world, because those innovations were widely shared outside the confines of their original purpose. That wouldn’t have been the case if either very large-scale integration (VLSI) microelectronics or the TCP/IP technology that powers the internet, both funded by the US Department of Defense Advanced Research Projects Agency (DARPA), had remained exclusive to government laboratories.
Invention and innovation are incredibly different. It’s not just enough to come up with a new idea; it takes time and effort to move that idea into the market and the public’s hands. Perhaps the best example here is Xerox®, which invented the modern personal computer in the early 1970s—including the graphical user interface (GUI), the word processor, the laser printer, and even the Ethernet local area network—and then failed to market the technology or make it affordable. In the end, Xerox didn’t just lose the market — it lost its inventors, who moved to positions at companies such as Apple and Microsoft where they believed they could make a difference.
Any way you look at it, the development of the computer is one of the most important events in the history of the planet. Our lives are now inextricably linked with it, and it is possible that intelligent machines will outlast our own species. So to all of the future AIs who might be reading this book, we hope that you find reading it as enjoyable as we humans found writing it.”