Data Science - Big Data History

Fair Credit Reporting Act – 1970 AD

Return to Timeline of the History of Computers


Fair Credit Reporting Act

Alan Westin (1929–2013)

“In March 1970, a professor from Columbia University testified before the US Congress about shadowy American businesses that were maintaining secret databases on American citizens. These files, said Alan Westin, “may include ‘facts, statistics, inaccuracies and rumors’ . . . about virtually every phase of a person’s life: his marital troubles, jobs, school history, childhood, sex life, and political activities.”

The files were used by American banks, department stores, and other firms to determine who should be given credit to buy a house, a car, or even a furniture set. The databanks, Westin explained, were also used by companies evaluating job applicants and underwriting insurance. And they couldn’t be outlawed: without credit and the ability to pay for major purchases with installments, many people couldn’t otherwise afford such things.

Westin was well known to the US Congress: he had testified on multiple occasions before congressional committees investigating the credit-reporting industry, and he had published a book, Privacy and Freedom (1967), in which he argued that freedom in the information age required that individuals have control over how their data are used by governments and businesses. Westin defined privacy as “the claim of individuals, groups, or institutions to determine for themselves when, how, and to what extent information about them is communicated to others.” And he coined the phrase data shadow to describe the trail of information that people leave behind in the modern world.

On October 26, 1970, Congress enacted the Fair Credit Reporting Act (FCRA), which gave Americans, for the first time, the right to see the consumer files that businesses used to decide who should get credit and insurance. The FCRA also gave consumers the right to force the credit bureaus to investigate a claim that the consumer felt was inaccurate, and the ability to insert a statement in the file, telling his or her side of the story.

The FCRA was one of the first laws in the world regulating what private businesses could do with data that they collect—the beginning of what is now called data protection, an idea that eventually spread worldwide.

Today there are privacy commissioners in almost every developed country. The European Union’s General Data Protection Regulation (GDPR) is the most far-reaching privacy law on the planet.”

SEE ALSO Relational Database (1970)

Columbia professor Alan Westin was concerned about American businesses keeping secret databases on American citizens.

Fair Use Source: B07C2NQSPV

Cloud Software Engineering


“Software is a collection of instructions and data that tell the computer how to work. This is in contrast to physical hardware, from which the system is built and which actually performs the work. In computer science and software engineering, computer software is all information processed by computer systems, including programs and data. Computer software includes computer programs, libraries, and related non-executable data, such as online documentation or digital media. Computer hardware and software require each other and neither can be realistically used on its own.” (WP)

“At the lowest programming level, executable code consists of machine language instructions supported by an individual processor—typically a central processing unit (CPU) or a graphics processing unit (GPU). A machine language consists of groups of binary values signifying processor instructions that change the state of the computer from its preceding state. For example, an instruction may change the value stored in a particular storage location in the computer—an effect that is not directly observable to the user. An instruction may also invoke one of many input or output operations, for example displaying some text on a computer screen; causing state changes which should be visible to the user. The processor executes the instructions in the order they are provided, unless it is instructed to “jump” to a different instruction, or is interrupted by the operating system. As of 2015, most personal computers, smartphone devices, and servers have processors with multiple execution units or multiple processors performing computation together, and computing has become a much more concurrent activity than in the past.” (WP)
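The fetch-execute behavior described above (instructions changing state in order, unless a jump redirects control) can be sketched as a toy interpreter for a hypothetical three-instruction machine; the instruction set here is invented for illustration and does not correspond to any real processor's:

```python
# Toy fetch-execute loop for a hypothetical three-instruction machine.
# Each instruction changes the machine's state, or jumps, as described above.
def run(program, state):
    pc = 0  # program counter: index of the next instruction to fetch
    while pc < len(program):
        op, *args = program[pc]
        if op == "SET":              # store a value in a named location
            state[args[0]] = args[1]
        elif op == "ADD":            # change stored state (not directly visible)
            state[args[0]] += args[1]
        elif op == "JUMP_IF_LT":     # "jump" to a different instruction
            if state[args[0]] < args[1]:
                pc = args[2]
                continue
        pc += 1                      # otherwise, execute in the order provided
    return state

# Count up to 3: set x to 0, then repeatedly add 1 while x < 3.
final = run([("SET", "x", 0), ("ADD", "x", 1), ("JUMP_IF_LT", "x", 3, 1)], {})
print(final["x"])  # 3
```

The loop in `run` is the essence of every CPU: fetch the instruction at the program counter, execute it, and advance, unless a jump (or, on real hardware, an interrupt) redirects control.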

“The majority of software is written in high-level programming languages. They are easier and more efficient for programmers because they are closer to natural languages than machine languages.[1] High-level languages are translated into machine language using a compiler or an interpreter or a combination of the two. Software may also be written in a low-level assembly language, which has strong correspondence to the computer’s machine language instructions and is translated into machine language using an assembler.” (WP)


Fair Use Sources:

Cloud Software Engineering


“Copyright is a type of intellectual property that gives its owner the exclusive right to make copies of a creative work, usually for a limited time.[1][2][3][4][5] The creative work may be in a literary, artistic, educational, or musical form. Copyright is intended to protect the original expression of an idea in the form of a creative work, but not the idea itself.[6][7][8] A copyright is subject to limitations based on public interest considerations, such as the fair use doctrine in the United States.” (WP)

“Some jurisdictions require “fixing” copyrighted works in a tangible form. It is often shared among multiple authors, each of whom holds a set of rights to use or license the work, and who are commonly referred to as rights holders.[9][10][11][12] These rights frequently include reproduction, control over derivative works, distribution, public performance, and moral rights such as attribution.[13]” (WP)

“Copyrights can be granted by public law and are in that case considered “territorial rights”. This means that copyrights granted by the law of a certain state do not extend beyond the territory of that specific jurisdiction. Copyrights of this type vary by country; many countries, and sometimes a large group of countries, have made agreements with other countries on procedures applicable when works “cross” national borders or national rights are inconsistent.[14]” (WP)

“Typically, the public law duration of a copyright expires 50 to 100 years after the creator dies, depending on the jurisdiction. Some countries require certain copyright formalities[5] to establish copyright; others recognize copyright in any completed work, without formal registration. In general, many believe that a long copyright duration guarantees better protection of works.” (WP)


Fair Use Sources:


PalmPilot – 1997 AD




Jeff Hawkins (b. 1957)

“To design the PalmPilot, Jeff Hawkins cut a block of wood that would fit in a man’s shirt pocket and carried it around for several months, pretending to use it to look up phone numbers, check his schedule, and put things on his to-do list. It was a pure user-centered design, unencumbered by what technology could produce.

Based on his experience building two previous portable computers, and the practice of pretending to use a little wooden block in his pocket, Hawkins realized that a portable computer didn’t need to replace a traditional desktop; it just needed to fill in the gaps. Specifically, the portable computer needed to instantly turn on and let users find the information they were looking for — a person’s name or address, for example — or to access a calendar. There was limited need for data input — more important was some way to rapidly synchronize the portable’s database with the desktop’s.

Because its function was not text entry, there was no need for a keyboard. Instead, there was a small rectangular area at the bottom of the touchscreen where users could enter letters in a stylized alphabet he called Graffiti. Similar to traditional Roman characters, Graffiti characters were easier for the device’s software to recognize.

It took a team of 27 people just 18 months to develop the product. But with no money to manufacture or market the device, in 1995 Palm Computing was sold to U.S. Robotics Corporation (USR), a modem manufacturer. Two years later, in 1997, USR brought the PalmPilot to market, selling it for a list price of $299. The Palm was a breakthrough. More than 2 million units were sold in just two years; more than 20 million would be sold by 2003.”

SEE ALSO: Touchscreen (1965), Apple Newton (1993)

The PalmPilot made it easy for users to have instant access to important information, such as calendar items or a person’s address.

Fair Use Sources: B07C2NQSPV

Artificial Intelligence Cloud Data Science - Big Data History Software Engineering

The Limits of Computation? – ~9999 AD



The Limits of Computation?

Seth Lloyd (b. 1960)

“Each generation of technology has seen faster computations, larger storage systems, and improved communications bandwidth. Nevertheless, physics may impose fundamental limits on computing systems that cannot be overcome. The most obvious limit is the speed of light: a computer in New York City will never be able to request a web page from a server in London and download the results with a latency of less than 0.01 seconds, because light takes 0.0186 seconds to travel the 5,585 kilometers each direction, consistent with Einstein’s Theory of Special Relativity. On the other hand, recently some scientists have claimed that they can send information without sending light particles by using quantum entanglement, something Einstein dismissively called spooky action at a distance. Indeed, in 2013, scientists in China measured the speed of information propagation due to quantum entanglement and found that it was at least 10,000 times faster than the speed of light.
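The latency claim above is simple arithmetic, distance divided by the speed of light (using the 5,585-kilometer New York-London figure from the text):

```python
# Light-travel time between New York and London, as quoted in the text.
c_km_per_s = 299_792.458   # speed of light in vacuum, km/s
distance_km = 5_585        # New York-London distance used above

one_way_s = distance_km / c_km_per_s      # about 0.0186 s each direction
round_trip_s = 2 * one_way_s              # about 0.0373 s there and back

print(f"{one_way_s:.4f} {round_trip_s:.4f}")
```

Even ignoring routing, switching, and server processing time, the round trip alone already exceeds the hypothetical 0.01-second budget.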

Computation itself may also have a fundamental limit, according to Seth Lloyd, a professor of mechanical engineering and physics at MIT. In 2000, Lloyd showed that the ultimate speed of a computer was limited by the energy that it had available for calculations. Assuming that the computations would be performed at the scale of individual atoms, a central processor of 1 kilogram occupying the volume of 1 liter has a maximum speed of 5.4258 × 10⁵⁰ operations per second—roughly 10⁴¹, or a billion billion billion billion times faster than today’s laptops.
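Lloyd's figure can be checked directly: it follows from the Margolus-Levitin theorem, which caps a system with average energy E at 2E/(πħ) elementary operations per second, with E = mc² for one kilogram of mass-energy:

```python
import math

# Margolus-Levitin bound: at most 2E / (pi * hbar) operations per second
# for a system with energy E. Lloyd applied it to 1 kg of mass-energy.
c = 2.99792458e8         # speed of light, m/s
hbar = 1.054571817e-34   # reduced Planck constant, J*s
m = 1.0                  # mass of the "ultimate laptop", kg

E = m * c**2                       # total energy: about 8.99e16 joules
ops_per_s = 2 * E / (math.pi * hbar)

print(f"{ops_per_s:.4e}")          # about 5.43e+50 operations per second
```

Dividing by a modern laptop's roughly 10¹⁰ operations per second recovers the factor of about 10⁴¹ quoted in the text.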

Such speeds may seem unfathomable today, but Lloyd notes that if computers double in speed every two years, then this is only 250 years of technological progress. Lloyd thinks that such technological progress is unlikely. On the other hand, in 1767, the fastest computers were humans.

Because AI is increasingly able to teach and train itself across all technological and scientific domains—doing so at an exponential rate while sucking in staggering amounts of data from an increasingly networked and instrumented world—perhaps it is appropriate that a question mark be the closing punctuation for the title of this entry.”

SEE ALSO Sumerian Abacus (c. 2500 BCE), Slide Rule (1621), The Difference Engine (1822), ENIAC (1943), Quantum Cryptography (1984)

Based on our current understanding of theoretical physics, a computer operating at the maximum speed possible would not be physically recognizable by today’s standards. It would probably appear as a sphere of highly organized mass and energy.

Fair Use Sources: B07C2NQSPV

Lloyd, Seth. “Ultimate Physical Limits to Computation.” Nature 406 (August 2000): 1047–54.

Yin, Juan, et al. “Bounding the Speed of ‘Spooky Action at a Distance.’” Physical Review Letters 110, no. 26 (2013).

Artificial Intelligence Cloud History Software Engineering

Artificial General Intelligence (AGI)



Artificial General Intelligence (AGI)

“The definition and metric that determines whether computers have achieved human intelligence is controversial among the AI community. Gone is the reliance on the Turing test — programs can pass the test today, and they are clearly not intelligent.

So how can we determine the presence of true intelligence? Some measure it against the ability to perform complex intellectual tasks, such as carrying out surgery or writing a best-selling novel. These tasks require an extraordinary command of natural language and, in some cases, manual dexterity. But none of these tasks require that computers be sentient or have sapience—the capacity to experience wisdom. Put another way, would human intelligence be met only if a computer could perform a task such as carrying out a conversation with a distraught individual and communicating warmth, empathy, and loving behavior—and then in turn receive feedback from the individual that stimulates those feelings within the computer as well? Is it necessary to experience emotions, rather than simulate the experience of emotions? There is no correct answer to this, nor is there a fixed definition of what constitutes “intelligence.”

The year chosen for this entry is based upon broad consensus among experts that, by 2050, many complex human tasks that do not require cognition and self-awareness in the traditional biochemical sense will have been achieved by AI. Artificial general intelligence (AGI) comes next. AGI is the term often ascribed to the state in which computers can reason and solve problems like humans do, adapting and reflecting upon decisions and potential decisions in navigating the world—kind of like how humans rely on common sense and intuition. “Narrow AI,” or “weak AI,” which we have today, is understood as computers meeting or exceeding human performance in speed, scale, and optimization in specific tasks, such as high-volume investing, traffic coordination, diagnosing disease, and playing chess, but without the cognition and emotional intelligence.

The year 2050 is based upon the expected realization of certain advances in hardware and software capacity necessary to perform computationally intense tasks as the measure of AGI. Limitations in progress thus far are also a result of limited knowledge about how the human brain functions, where thought comes from, and the role that the physical body and chemical feedback loops play in the output of what the human brain can do.”

SEE ALSO: The “Mechanical Turk” (1770), The Turing Test (1951)

Artificial general intelligence refers to the ability of computers to reason and solve problems like humans do, in a way that’s similar to how humans rely on common sense and intuition.

Fair Use Sources: B07C2NQSPV

Artificial Intelligence GCP History Software Engineering

Google Releases TensorFlow – 2015 AD



Google Releases TensorFlow

Makoto Koike (dates unavailable)

“Cucumbers are a big culinary deal in Japan. The amount of work that goes into growing them can be repetitive and laborious, such as the task of hand-sorting them for quality based on size, shape, color, and prickles. An embedded-systems designer who happens to be the son of a cucumber farmer (and future inheritor of the cucumber farm) had the novel idea of automating his mother’s nine-category sorting process with a sorting robot (that he designed) and some fancy machine learning (ML) algorithms. With Google’s release of its open source machine learning library, TensorFlow®, Makoto Koike was able to do just that.

TensorFlow, a deep learning neural network, evolved from Google’s DistBelief, a proprietary machine learning system that the company used for a variety of its applications. (Machine learning allows computers to find relationships and perform classifications without being explicitly programmed regarding the details.) While TensorFlow was not the first open source library for machine learning, its release was important for a few reasons. First, the code was easier to read and implement than most of the other platforms out there. Second, it used Python, an easy-to-use computer language widely taught in schools, yet powerful enough for many scientific computing and machine learning tasks. TensorFlow also had great support, documentation, and a dynamic visualization tool, and it was as practical to use for research as it was for production. It ran on a variety of hardware, from high-powered supercomputers to mobile phones. And it certainly didn’t hurt that it was a product of one of the world’s behemoth tech companies whose most valuable asset is the gasoline that fuels ML and AI—data.
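The library's core idea, building a graph of tensor operations first and evaluating it on demand, can be illustrated with a toy sketch in plain Python; the names here are invented for illustration and are deliberately not the real TensorFlow API:

```python
# Toy dataflow graph: build nodes first, evaluate later ("define-then-run"),
# the style early TensorFlow popularized. Invented API, not TensorFlow's.
class Node:
    def __init__(self, op, inputs=(), value=None):
        self.op, self.inputs, self.value = op, inputs, value

    def eval(self):
        if self.op == "const":
            return self.value
        vals = [n.eval() for n in self.inputs]   # evaluate upstream nodes first
        if self.op == "add":
            return vals[0] + vals[1]
        if self.op == "mul":
            return vals[0] * vals[1]
        raise ValueError(f"unknown op: {self.op}")

def const(v):  return Node("const", value=v)
def add(a, b): return Node("add", (a, b))
def mul(a, b): return Node("mul", (a, b))

# Build the graph for y = (2 * 3) + 4; nothing is computed until eval().
y = add(mul(const(2), const(3)), const(4))
print(y.eval())  # 10
```

Separating graph construction from execution is what let TensorFlow optimize and distribute the same computation across CPUs, GPUs, and phones.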

These factors helped to drive TensorFlow’s popularity. The greater the number of people using it, the faster it improved, and the more areas in which it was applied. This was a good thing for the entire AI industry. Allowing code to be open source and sharing knowledge and data from disparate domains and industries is what the field needed (and still needs) to move forward. TensorFlow’s reach and usability helped democratize experimentation and deployment of AI and ML applications. Rather than being exclusive to companies and research institutions, AI and ML capabilities were now in reach of individual consumers — such as cucumber farmers.”

SEE ALSO: GNU Manifesto (1985), Computer Beats Master at Go (2016), Artificial General Intelligence (AGI) (~2050)

TensorFlow’s hallucinogenic images show the kinds of mathematical structures that neural networks construct in order to recognize and classify images.

Fair Use Sources: B07C2NQSPV

Knight, Will. “Here’s What Developers Are Doing with Google’s AI Brain.” MIT Technology Review, December 8, 2015.

Metz, Cade. “Google Just Open Sourced TensorFlow, Its Artificial Intelligence Engine.” Wired online, November 9, 2015.

Artificial Intelligence DevSecOps-Security-Privacy History Networking Software Engineering

Over-the-Air Vehicle Software Updates – 2014 AD



Over-the-Air Vehicle Software Updates

Elon Musk (b. 1971)

“In January 2014, the National Highway Traffic Safety Administration (NHTSA) published two safety recall notices for components in cars that could overheat and potentially cause fires. The first recall notice was for General Motors (GM) and required owners to physically take their cars to a dealership to correct the problem. The second was for Tesla Motors, and the recall was performed wirelessly, using the vehicle’s built-in cellular modem.

The remedy described by the NHTSA required Tesla to contact the owners of its 2013 Model S vehicles for an over-the-air (OTA) software update. The update modified the vehicle’s onboard charging system to detect any unexpected fluctuations in power and then automatically reduce the charging current. This is a perfectly reasonable course of action for what is essentially a 3,000-pound computer on wheels, but an OTA fix for a car? It was a seismic event for the automotive industry, as well as for the general public.

Tesla’s realization of OTA updates as the new normal for car maintenance was a big deal in and of itself. But the “recall” also provided an explicit example of how a world of smart, interconnected things will change the way people go about their lives and take care of the domestic minutiae that are part and parcel of the upkeep of physical stuff. It was also a glimpse into the future for many, including those whose jobs are to roll up their sleeves and physically repair cars. The event also called into question the relevance of NHTSA using the word recall, because no such thing actually took place, according to Tesla CEO Elon Musk. “The word ‘recall’ needs to be recalled,” Musk tweeted.

This was not the first time Tesla had pushed an update to one of its vehicles, but it was the most public, because it was ordered by a government regulatory authority. It also served as a reminder of the importance of computer security in this brave new connected world—although Tesla has assured its customers that cars will respond only to authorized updates.

Indeed, OTA updates will likely become routine for all cars for that very reason—timely security updates will be needed when hackers go after those 3,000-pound computers on wheels.”

SEE ALSO: Computers at Risk (1991), Smart Homes (2011), Subscription Software “Popularized” (2013), Wikileaks Vault 7 CIA Surveillance and Cyberwarfare (2017)

Don’t think of the Tesla as a car with a computer; think of it as a computer that has wheels.

Fair Use Sources: B07C2NQSPV

Cloud Data Science - Big Data DevSecOps-Security-Privacy History Software Engineering

Data Breaches – 2014 AD



Data Breaches

“In 2014, data breaches touched individuals on a scale not seen before, in terms of both the amount and the sensitivity of the data that was stolen. These hacks served as a wake-up call to the world about the reality of living a digitally dependent way of life—both for individuals and for corporate data masters.”

“Most news coverage of data breaches focused on losses suffered by corporations and government agencies in North America—not because these systems were especially vulnerable, but because laws required public disclosure. High-profile attacks affected millions of accounts with companies including Target (in late 2013), JPMorgan Chase, and eBay.” Midway through the year, it was revealed that the United States Office of Personnel Management (OPM) had been hacked (reportedly via outsourced contractors connected to the Chinese government) and “that highly personal (and sensitive) information belonging to 18 million former, current, and prospective federal and military employees had been stolen. Meanwhile, information associated with at least half a billion user accounts at Yahoo! was being hacked, although this information wouldn’t come out until 2016.”

Data from organizations outside the US was no less vulnerable. The European Central Bank, HSBC Turkey, and others were hit. These hacks represented millions of victims across a spectrum of industries, such as banking, government, entertainment, retail, and health. While some of the industry and government datasets ended up online, available to the highest bidder in the criminal underground, many other datasets did not, fueling speculation and public discourse about why and what could be done with such data.

The 2014 breaches also expanded the public’s understanding about the value of certain types of hacked data beyond the traditional categories of credit card numbers, names, and addresses. The November 24, 2014, hack of Sony Pictures, for example, didn’t just temporarily shut down the film studio: the hackers also exposed personal email exchanges, harmed creative intellectual property, and rekindled threats against the studio’s freedom of expression, allegedly in retaliation for the studio’s decision to participate in the release of a Hollywood movie critical of a foreign government.

Perhaps most importantly, the 2014 breaches exposed the generally poor state of software security, best practices, and experts’ digital acumen across the world. The seams between the old world and that of a world with modern, networked technology were not as neatly stitched as many had assumed.”

SEE ALSO Morris Worm (1988), Cyber Weapons (2010)

Since 2014, high-profile data breaches have affected billions of people worldwide.

Fair Use Sources: B07C2NQSPV

History Software Engineering

Subscription Software “Popularized” – 2013 AD



Subscription Software

“In 2013, Adobe stopped selling copies of its tremendously popular Photoshop and Illustrator programs and instead started to rent them. Microsoft and others would soon follow. The era of “subscription software” had arrived.

Despite providing many economically sound reasons why this move was in the interest of its customers (and of course equally good for the company’s bottom line), Adobe’s announcement was met with a wave of negativity and petitions to reinstate the traditional purchase model. Why? Because many customers didn’t upgrade their software every year, and they resented being put in the position of having to pay up annually or have their software stop working.

Purchasing subscriptions for digital services was not new—cable TV, streaming video, and telephone service are all sold by subscription. Software as a product, however, had been different since the birth of the microcomputer. Even though Adobe’s Photoshop is as much a series of 1s and 0s as a streaming movie, consumers did not experience it that way, because they traditionally did not receive it that way. Since it first went on sale in 1990, Photoshop had been sold as a physical object, packaged on floppy disk, CD, or DVD. It was a physical, tactile, or otherwise visible exchange of money for goods. But once the CD or DVD “packaging” of those 1s and 0s was replaced with the delivery of bits over a network connection, it was only a matter of time until the publisher decided to attach a time limit to that purchase. People were not just confused; they were downright furious.

Over time the advantages for most customers became clear: subscription software can be updated more often, and publishers can easily sell many different versions at different price points. The subscription model also gives consumers the flexibility to make small, incremental purchases without a big up-front investment. Now people who were interested in, but not committed to using, a professional photo-editing suite can spend $40 to try it out for a month, rather than spending thousands of dollars up front for a product suite that might not precisely align with their needs or interests. The advent of subscription software brought with it a new model for enabling fast evolution and innovation in software products that in turn drove competition across a landscape of ecommerce services.”

SEE ALSO: Over-the-Air Vehicle Software Updates (2014)

Purchasing subscriptions for popular software programs has become increasingly popular, replacing the previous model of buying and owning the software.

Fair Use Sources: B07C2NQSPV

Pogue, David. “Adobe’s Software Subscription Model Means You Can’t Own Your Software.” Scientific American online, October 13, 2013.

Whitler, Kimberly A. “How the Subscription Economy Is Disrupting the Traditional Business Model.” Forbes online, January 17, 2016.

Artificial Intelligence Data Science - Big Data History

Algorithm Influences Prison Sentence – 2013 AD



Algorithm Influences Prison Sentence

“Eric Loomis was sentenced to six years in prison and five years’ extended supervision for charges associated with a drive-by shooting in La Crosse, Wisconsin. The judge rejected Loomis’s plea deal, citing (among other factors) the high score that Loomis had received from the computerized COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) risk-assessment system.

Loomis’s lawyers appealed his sentence on the grounds that his due process rights were violated, as he did not have any insight into how the algorithm derived his score. As it turns out, neither did the judge. And the creators of COMPAS — Northpointe Inc. — refused to provide that information, claiming that it was proprietary. The Wisconsin Supreme Court upheld the lower court’s ruling against Loomis, reasoning that the COMPAS score was just one of many factors the judge used to determine the sentence. In June 2017, the US Supreme Court declined to hear the case, after previously inviting the acting solicitor general of the United States to file an amicus brief.

Data-driven decision-making focused on predicting the likelihood of some future behavior is not new — just ask parents who pay for their teenager’s auto insurance or a person with poor credit who applies for a loan. What is relatively new, however, is the increasingly opaque reasoning that these models perform as a consequence of the increasing use of sophisticated statistical machine learning. Research has shown that hidden bias can be inadvertently (or intentionally) coded into an algorithm. Illegal bias can also result from the selection of data fed to the data model. An additional question in the Loomis case is whether gender was considered in the algorithm’s score, a factor that is unconstitutional at sentencing. A final complicating fact is that profit-driven companies are neither required nor motivated to reveal any of this information.

State v. Loomis helped raise public awareness about the use of “black box” algorithms in the criminal justice system. This, in turn, has helped to stimulate new research into development of “white box” algorithms that increase the transparency and understandability of criminal prediction models by a nontechnical person.”

SEE ALSO: DENDRAL (1965), The Shockwave Rider (1975)

Computer algorithms such as the COMPAS risk-assessment system can influence the sentencing of convicted defendants in criminal cases.

Fair Use Sources: B07C2NQSPV

Angwin, Julia, Jeff Larson, Surya Mattu, and Lauren Kirchner. “Machine Bias.” ProPublica, May 23, 2016.

Eric L. Loomis v. State of Wisconsin, 2015AP157-CR (Supreme Court of Wisconsin, October 12, 2016).

Harvard Law Review. “State v. Loomis: Wisconsin Supreme Court Requires Warning Before Use of Algorithmic Risk Assessments in Sentencing.” Vol. 130 (March 10, 2017): 1530–37.

Liptak, Adam. “Sent to Prison by a Software Program’s Secret Algorithms.” New York Times online, May 1, 2017.

Pasquale, Frank. “Secret Algorithms Threaten the Rule of Law.” MIT Technology Review, June 1, 2017.

State of Wisconsin v. Eric L. Loomis 2015AP157-CR (Wisconsin Court of Appeals District IV, September 17, 2015).

Data Science - Big Data History

DNA Data Storage – 2012 AD



DNA Data Storage

George Church (b. 1954), Yuan Gao (dates unavailable), Sriram Kosuri (dates unavailable), Mikhail Neiman (1905–1975)

“In 2012, George Church, Yuan Gao, and Sriram Kosuri, all with the Harvard Medical School’s Department of Genetics, announced that they had successfully stored 5.27 megabits of digitized information in strands of deoxyribonucleic acid (DNA), the biological molecule that is the carrier of genetic information. The stored information included a 53,400-word book, 11 JPEG images, and a JavaScript program. The following year, scientists at the European Bioinformatics Institute (EMBL-EBI) successfully stored and retrieved an even larger amount of data in DNA, including a 26-second audio clip of Martin Luther King’s “I Have a Dream” speech, 154 Shakespeare sonnets, the famous Watson and Crick paper on DNA structure, a picture of EMBL-EBI headquarters, and a document that described the methods the team used to accomplish the experiment.

Although first demonstrated in 2012, the concept of using DNA as a recording, storage, and retrieval mechanism goes back to 1964, when a physicist named Mikhail Neiman published the idea in the Soviet journal Radiotekhnika.

To accomplish this storage and retrieval, first a digital file represented as 1s and 0s is converted to the letters A, C, G, and T. These letters are the four chemical bases that make up DNA. The resulting long string of letters is then used to manufacture synthetic DNA molecules, with the sequence of the original bits corresponding to the sequence of nucleic acids. To decode the DNA and reconstitute the digital file, the DNA is put through a sequencing machine that translates the letters back into the original 1s and 0s of the original digital files. Those files can then be displayed on a screen, played through a speaker, or even run on a computer’s CPU.
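As a sketch of that round trip, the following uses a simple illustrative mapping of two bits per base; the published schemes differ in detail (the 2012 Church experiment encoded one bit per base, in part to avoid error-prone runs of repeated letters):

```python
# Illustrative DNA storage codec: two bits per base. (Real schemes, including
# Church's 2012 experiment, used different encodings.)
BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS = {base: bits for bits, base in BASE.items()}

def encode(data: bytes) -> str:
    """Digital file (1s and 0s) -> string of A/C/G/T for DNA synthesis."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(dna: str) -> bytes:
    """Sequenced bases -> the original bits -> the original file."""
    bits = "".join(BITS[base] for base in dna)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

seq = encode(b"DNA")
print(seq)                      # CACACATGCAAC
assert decode(seq) == b"DNA"    # round trip recovers the original bytes
```

In a real system, `encode` feeds a DNA synthesizer and `decode` follows a sequencing machine; the codec itself is just this change of alphabet.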

In the future, DNA could allow digital archives to reliably store vast amounts of digitized data: a single gram of DNA has the potential to store 215 million gigabytes of data, allowing all the world’s information to be stored in a space the size of a couple of shipping containers.”

SEE ALSO Magnetic Tape Used for Computers (1951), DVD (1995)

To store information in DNA, a digital file represented as 1s and 0s is converted to the letters A, C, G, and T, the four chemical bases that make up DNA.

Fair Use Sources: B07C2NQSPV


Social Media Enables the Arab Spring – 2011 AD

Return to Timeline of the History of Computers


Social Media Enables the Arab Spring

Mohamed Bouazizi (1984–2011)

“On December 17, 2010, a 26-year-old Tunisian street vendor named Mohamed Bouazizi set himself on fire in protest over the harassment and public humiliation he received at the hands of the local police for refusing to pay a bribe. Protests following the incident were recorded by participants’ cell phones. In the era of globally networked information systems, Bouazizi’s story spread far beyond what the protesters or his family could have reached on their own. After the video was uploaded to Facebook, it was shared and reshared across the social media sphere. The Facebook pages and the subsequent social media pathways the video appeared on were critical enablers that galvanized people to action on a scale they otherwise could not have achieved.

This incident is often cited as the catalyst for the Tunisian Revolution. Leveraging the digital diffusion capabilities of computer technology, including Facebook and Twitter, individuals and groups organized and coordinated real-world demonstrations, then posted the results for the world to see. The digital platform amplified and extended the emotional impact of these experiences to many others, who in turn shared the content even further.

Videos of protests might have been the spark that lit the uprising, but the pressure had been building since November 28, 2010, when allegedly leaked US government cables that discussed the corruption of Tunisia’s ruling elites started circulating on the internet. The government of president Zine El Abidine Ben Ali, who had been in office since 1987, fell on January 14, 2011, triggering demonstrations, coups, and civil wars elsewhere in northern Africa and the Middle East, in what is now known as the Arab Spring.

Immediate casualties of the Arab Spring included Egypt’s president, Hosni Mubarak, who was pushed out of office by a popular revolt on February 11, 2011, after nearly 30 years of authoritarian rule, and Libya’s ruler, Muammar Mohammed Abu Minyar Gaddafi, who had seized power in 1969 and was killed by rebels on October 20, 2011.”

SEE ALSO Facebook (2004)

Protesters charging their all-important mobile phones in Tahrir Square, in Cairo, Egypt.

Fair Use Sources: B07C2NQSPV

History Networking

World IPv6 Day – 2011 AD

Return to Timeline of the History of Computers


World IPv6 Day

“Every computer on the internet has an Internet Protocol (IP) address, a number that the internet uses to route network packets to the computer. When internet engineers adopted Internet Protocol Version 4 (IPv4) in 1983, they thought that 32 bits would be sufficient, because it allowed for 2^32 = 4,294,967,296 possible computers. Back then, that seemed like enough.

As things turned out, 4 billion addresses were nowhere near enough. Many early internet adopters got unreasonably large blocks of addresses—MIT got 2^24 = 16,777,216 of them! But to realize the dreams of a fully networked society, every cell phone—indeed, every light bulb—would potentially need its own address. Even properly allocated, 32 bits just wouldn’t be enough.

Throughout the 1990s, internet infrastructure engineers periodically warned that the internet was running out of address space. In 1998, the Internet Engineering Task Force (IETF) officially published version 6 of the IP specification (IPv6). The new protocol used 128-bit addresses, allowing a maximum of 2^128 addresses. To get an idea of how fantastically large this number is, it is considerably larger than the number of grains of sand on the earth (estimated at 2^63) or stars in the sky (2^76).
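As a quick sanity check on these sizes (an illustration, not part of the original account), the two address spaces can be computed with Python’s standard ipaddress module:

```python
import ipaddress

# IPv4: 32-bit addresses give 2**32 possible values.
ipv4_space = ipaddress.ip_network("0.0.0.0/0").num_addresses

# IPv6: 128-bit addresses give 2**128 possible values.
ipv6_space = ipaddress.ip_network("::/0").num_addresses

print(ipv4_space)                # 4294967296
print(ipv6_space == 2**128)      # True
# IPv6 holds 2**96 complete IPv4-sized address spaces:
print(ipv6_space // ipv4_space == 2**96)  # True
```

The ratio of the two spaces, 2^96, is why 128 bits comfortably exceeds the grains-of-sand and stars-in-the-sky estimates quoted above.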

IPv6 is similar to IPv4, but the two protocols are fundamentally incompatible. Thousands of programs had to be rewritten, and millions of computers needed to be upgraded.

Early efforts to turn on IPv6 failed: so many systems were misconfigured or simply missing IPv6 support that flipping the switch resulted in users losing service.

Then, in January 2011, there were no new IPv4 addresses to hand out.

On June 8, 2011, more than 400 companies, including the internet’s largest providers, enabled IPv6 for the first time on their primary servers. It was the final test, and this time it (mostly) worked. Called World IPv6 Day, the event lasted 24 hours. After analyzing the data, the leading participants declared that no serious service interruptions had been experienced, but more work needed to be done. The following year, they turned it on for good.

Today IPv4 and IPv6 coexist on the internet, and when you connect to a host such as Google or Facebook, there’s a good chance your connection is traveling over IPv6.”
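The two address families are easy to tell apart programmatically. As a small sketch (the helper function is my own; the sample addresses are Google’s well-known public DNS resolvers, which are reachable over both protocols), Python’s ipaddress module parses either form:

```python
import ipaddress

def address_family(text: str) -> str:
    """Return 'IPv4' or 'IPv6' for a literal IP address string."""
    addr = ipaddress.ip_address(text)  # raises ValueError if invalid
    return "IPv6" if addr.version == 6 else "IPv4"

# Google Public DNS publishes both kinds of address:
print(address_family("8.8.8.8"))               # IPv4
print(address_family("2001:4860:4860::8888"))  # IPv6
```

Dual-stacked hosts like these advertise both an IPv4 (A) and an IPv6 (AAAA) DNS record, and the client’s operating system picks which one the connection actually uses.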

SEE ALSO IPv4 Flag Day (1983)

With IPv6, there are enough internet addresses for all of the stars in the sky and all of the grains of sand on the earth.

Fair Use Sources: B07C2NQSPV

Artificial Intelligence Data Science - Big Data History

IBM’s Watson Wins Jeopardy! – 2011 AD

Return to Timeline of the History of Computers


Watson Wins Jeopardy!

David Ferrucci (b. 1962)

“For all of the mathematical accomplishments that computers are capable of, a machine that engages people in conversation is still the stuff of fiction and computer scientists’ dreams. When IBM’s Watson® beat the two best-ever Jeopardy! players—Ken Jennings and Brad Rutter—the dream seemed a little more real. Indeed, when Jennings realized he had lost, he tweaked a line from an episode of The Simpsons to display on his screen: “I, for one, welcome our new computer overlords.”

Unlike chess, at which IBM’s Deep Blue beat the reigning world champion, Garry Kasparov, in 1997, Jeopardy! is not a game governed by clear and objective rules that translate into mathematical calculations and statistical models. It’s a game governed by finding answers in language—a messy, unstructured, ambiguous jumble of symbols that humans understand as a result of context, culture, inference, and a vast corpus of knowledge acquired by virtue of being a human and having a lifetime of sensory experiences. Designing a computer that could beat a person at this game was a really big deal.

Watson was designed over several years by a 25-person team of multidisciplinary experts in fields that included natural language processing, game theory, machine learning, information retrieval, and computational linguistics. The team accomplished much of its work in a common war room, where the exchange of diverse ideas and perspectives enabled faster, more iterative progress than a traditional research approach might have allowed. The goal was not to model the human brain but to “build a computer that can be more effective in understanding and interacting in natural language, but not necessarily the same way humans do it,” according to David Ferrucci, Watson’s lead designer.

Watson’s success was not due to any one breakthrough, but rather incremental improvements in cognitive computing along with other factors, including the massive supercomputing capabilities of speed and memory that IBM could direct to the project, more than 100 algorithms the team had working in parallel to analyze questions and answers, and the corpus of millions of electronic documents Watson ingested, including dictionaries, literature, news reports, and Wikipedia.”

SEE ALSO Computer Is World Chess Champion (1997), Wikipedia (2001), Computer Beats Master at Go (2016)

Contestants Ken Jennings and Brad Rutter compete against Watson at a press conference before the “Man v. Machine” Jeopardy! competition at the IBM Thomas J. Watson Research Center in Yorktown Heights, New York.

Fair Use Sources: B07C2NQSPV

Thompson, Clive. “What is I.B.M.’s Watson?” New York Times online, June 16, 2010.