“Guido van Rossum (Dutch: [ˈɣido vɑn ˈrɔsʏm, -səm]; born 31 January 1956) is a Dutch programmer best known as the creator of the Python programming language, for which he was the “Benevolent dictator for life” (BDFL) until he stepped down from the position in July 2018.[5][6] He remained a member of the Python Steering Council through 2019, and withdrew from nominations for the 2020 election.[7]” (WP)
1909 – Charles Herrold in San Jose started the first radio station in the USA with regularly scheduled programming, including songs, using an arc transmitter of his own design. Herrold was one of Stanford’s earliest students and founded his own College of Wireless and Engineering in San Jose
1935 – Fujitsu founded as Fuji Telecommunications Equipment Manufacturing in Japan. Fujitsu is the second-oldest IT company, after IBM and ahead of Hewlett-Packard
1961 – Shinshu Seiki Company founded in Japan (now called Seiko Epson Corporation) as a subsidiary of Seiko to supply precision parts for Seiko watches.
1968 – Dot Matrix Printer – Shinshu Seiki (now called Seiko Epson Corporation) launched the world’s first mini-printer, the EP-101 (“EP” for Electronic Printer), which was soon incorporated into many calculators
1973 – Danny Cohen first demonstrated a form of packet voice as part of a flight simulator application, which operated across the early ARPANET.[69][70]
1973 – Xerox Alto from Xerox Palo Alto Research Center (PARC)
1975 – The name Epson was coined for the next generation of printers based on the EP-101, which was released to the public (EPSON: E-P-SON, “SON of Electronic Printer”).[7] Epson America Inc. was established to sell printers for Shinshu Seiki Co.
1977 – Danny Cohen and Jon Postel of the USC Information Sciences Institute, and Vint Cerf of the Defense Advanced Research Projects Agency (DARPA), agree to separate IP from TCP, and create UDP for carrying real-time traffic.
Telefonaktiebolaget L M Ericsson (lit. Telephone Stock Company L.M. Ericsson), commonly known as Ericsson, is a Swedish multinational networking and telecommunications company headquartered in Stockholm. The company offers services, software and infrastructure in information and communications technology for telecommunications operators, traditional telecommunications and Internet Protocol (IP) networking equipment, mobile and fixed broadband, operations and business support services, cable television, IPTV, video systems, and an extensive services operation.
Ericsson had a 27% market share in the 2G/3G/4G mobile network infrastructure market in 2018, thus being the largest such non-Chinese company.[3]
The company was founded in 1876 by Lars Magnus Ericsson[4] and was taken over by the Wallenberg family in 1960; today, the family, through its holding company Investor AB, owns a controlling 22.53% voting power. As of 2016 it is headquartered in Stockholm, Sweden. The company employs around 95,000 people and operates in around 180 countries.[5][6] Ericsson holds over 49,000 granted patents as of September 2019, including many in wireless communications.[7] Ericsson is the inventor of Bluetooth technology.[8] Ericsson leads the implementation of 5G worldwide, partly through the use of massive MIMO technology.[9][10]
“At the same time, San Francisco’s inhabitants showed a voracious interest in the radio technology invented in Europe at the turn of the century. The Italian inventor Guglielmo Marconi, then in Britain, had galvanized the sector with his long-distance radio transmissions, beginning in 1897 and culminating in the 1903 radio message from US President Theodore Roosevelt to the British king Edward VII. Marconi’s company set up radio stations on both sides of the Atlantic to communicate with ships at sea. However, building a practical wireless communication system was still far from trivial.”
“In 1906 an independent with a degree from Yale, Lee DeForest, had built a vacuum tube in New York without quite understanding its potential as a signal amplifier. In fact his invention, the “audion”, was useful for amplifying electrical signals, and therefore for wireless transmission. (In 1904 the British engineer John Ambrose Fleming had invented the two-element vacuum tube, or “diode”, and a few months before DeForest the Austrian physicist Robert von Lieben had already built a three-element amplifier, or “triode”.) In 1910 DeForest moved to San Francisco and got into radio broadcasting, a business that he had pioneered in January of that year, when he broadcast from New York a live performance by the legendary Italian tenor Enrico Caruso. In fact, DeForest is the one who started using the term “radio” to refer to wireless transmission, when he formed his DeForest Radio Telephone Company in 1907. However, his early broadcasts did not yet use the audion. Interest in radio broadcasting was high in the Bay Area, even if there were no mass-produced radios yet. A year earlier, in 1909, Charles Herrold in San Jose had started the first radio station in the US with regularly scheduled programming, including songs, using an arc transmitter of his own design. Charles Herrold had been one of Stanford’s earliest students and founded his own College of Wireless and Engineering in San Jose.
The Bay Area stumbled into electronics almost by accident. In 1909 another Stanford alumnus, Cyril Elwell, had founded the Poulsen Wireless Telephone and Telegraph Company in Palo Alto, later renamed the Federal Telegraph Corporation (FTC), to commercialize a new European invention. In 1903 the Danish engineer Valdemar Poulsen invented an arc transmitter for radio transmission, but no European company was doing anything with it. Elwell understood its potential was not only technological but also legal: it allowed him to create radio products without violating Marconi’s patents. Elwell acquired the US rights for the Poulsen arc. His radio technology, adequately funded by a group of San Francisco investors led by Beach Thompson, blew away the competition of the East Coast. In 1912 he won a contract with the Navy, which was by far the biggest consumer of radio communications. Thus commercial radiotelegraphy developed first in the US. The “startup” was initially funded by Stanford’s own President, David Starr Jordan, and employed Stanford students, notably Edwin Pridham. Jordan had just inaugurated venture-capital investment in the region.
In need of better receiver amplifiers for the arc transmissions, FTC hired Lee DeForest, who by 1912 had finally realized that his audion could be used as an amplifier. The problem with long-distance telephone and radio transmissions was that the signal was lost en route as it became too faint. DeForest’s vacuum tube enabled the construction of repeaters that restored the signal at intermediate points, and could thus dramatically reduce the cost of long-distance wireless communications. FTC began applying the audion to develop a geographically distributed radiotelegraphy system. The first tower FTC built, in July 1910, stood on a San Francisco beach and was 90 meters tall. Yet the most impressive of all was inaugurated in 1912 at Point San Bruno (just south of the city), a large complex boasting the tallest antenna in the world (130 meters).
By the end of 1912 FTC had grown: it had stations in Texas, Hawaii, Arizona, Missouri and Washington besides California. However, the Poulsen arc remained the main technology for radiotelephony (voice transmission) and, ironically, FTC was no longer in that business. Improvements to the design by recent Cornell graduate Leonard Fuller (mostly during World War I, when the radio industry was nationalized to produce transmitters for the Navy), which allowed the audion to amplify a signal a million times, eventually led FTC to create the first global wireless communication system. The audion was still used only for receivers, while most transmitters were arc-based; it was only in 1915 that DeForest realized that a feedback loop of audions could be used to build transmitters as well. DeForest had already sold the patent for his audion (in 1913) to Graham Bell’s AT&T in New York, and AT&T had used it to set up the first coast-to-coast telephone line (January 1915), just in time for the Panama-Pacific International Exposition. Meanwhile, DeForest had moved back to New York. There, in 1916, he stunned the nation by broadcasting the results of the presidential election, with music and commentary, from New York to stations within a range of 300 kilometers, this time using an audion transmitter. Radiotelephony would switch from the Poulsen arc to the audion during the 1920s. In due time Leonard Fuller took Elwell’s place as chief engineer of FTC; in 1920 former Navy and Marconi engineer Haraden Pratt was hired to launch a commercial wireless telegraph service, and sugar magnate Rudolph Spreckels bought control of FTC.
The wireless industry was booming throughout the US, aided by sensational articles in the mainstream press. Earle Ennis had opened a company (Western Wireless Equipment Company) to sell wireless equipment for ships. He also ran a radio broadcast to deliver news to ships at sea. In 1910 he organized the first air-to-ground radio message, thus showing that the same technology could be used by the nascent airline industry.
Because of its maritime business, the Bay Area became one of the largest centers for amateur radio. The Bay Counties Wireless Telegraph Association was founded in 1907 by (then) amateurs such as Haraden Pratt, Ellery Stone and Lewis Clement.
Quite a bit of innovation in radio engineering came from the “ham” radio amateurs. The first wireless communications were, by definition, done by independents who set up their own equipment. This was the first “virtual” community as they frequently never met in person. The first magazine devoted to radio engineering, Modern Electrics, was launched in April 1908 in New York by Hugo Gernsback, a 24-year-old Jewish immigrant from Luxembourg. It reached a circulation of 52,000 in 1911, the year when it started publishing science-fiction stories (thus also becoming de facto the first science-fiction magazine). Amateur wireless associations popped up throughout the country, such as the Radio Club of Salt Lake City in Utah, founded in September 1909, and the Wireless Association of Central California, formed in May 1910 in Fresno. From a social point of view, the beauty of ham radio was that it blurred class boundaries: they were known by codes such as 6ZAF, not by their last names, and it made no difference whether they were rural teenagers, Stanford PhD students or professional radio engineers. They were all on the same level.
Among the amateurs of the second decade were Charlie Litton, an eleven-year-old prodigy who operated an amateur station in Redwood City in 1915, and Frederick Terman, a teenager who operated an amateur station in Palo Alto in 1917. Some of those amateurs went on to create small companies. Little did they know that their hobby would in time of war constitute a strategic industry for the Air Force, Navy and Army: during World War I (in 1918) Elwell’s technology would be a pillar of naval communications for the US. The Navy had set up radio stations all over the place. In January 1918 the President of the US, Woodrow Wilson, proudly spoke live to Europe, the Far East and Latin America.
Magnavox Corp. was founded in 1910 in Napa (north of the bay). It was the brainchild of Peter Jensen (one of the Danish engineers imported by FTC to commercialize the Poulsen arc) and Edwin Pridham (a Stanford graduate who also worked at FTC). In 1917 they introduced a new type of electrical loudspeaker.
Alas, after World War I it became obvious that radio technology was strategic, and it couldn’t be left in the hands of West-Coast independents. The US government basically forced a large East-Coast company, General Electric, to buy the US business of Marconi. The US government also helped the new company to acquire the most important radio patents. Thus a new giant, RCA, was born and soon became the dominant player in consumer electronics, as the number of radios grew from 5,000 in 1920 to 25 million in 1924. Hence FTC was doomed and other Bay Area-based radio companies had to live with only military applications.
Ham-radio amateurs were the first “garage nerds” of the San Francisco Bay Area, a place isolated from the rest of the country (reaching any other city required a long journey by ship, by train or by coach). Bill Eitel presided over the Santa Clara County Amateur Radio Association, formed in 1921, before he went on to launch his own “startup”. The First National Radio Conference took place in Washington in February 1922, and it pitted the five big corporations that owned all the patents (American Telephone & Telegraph, General Electric, Western Electric, Westinghouse and RCA) against the ham-radio amateur clubs. That conference established the amateurs’ legal legitimacy. A few weeks later, in April 1922, the first transpacific two-way amateur communication was established between 6ZAC (Clifford Down) in Hawaii and 6ZAF (A.H. Babcock) in Berkeley. The ham-radio operators became heroes in countless natural disasters, especially in the Western states, at a time when there was no other way to communicate rapidly with aid workers. A teenager, known as 6BYQ, sent out the first alarm when a dam broke in 1928 near Santa Paula, north of Los Angeles, causing a flood that wrought massive destruction. Ham radios helped in September 1932 when a landslide wiped out the mining town of Tehachapi, east of Los Angeles, and in March 1933 when an earthquake struck Long Beach, south of Los Angeles. Ham radios were the first “consumers” of the vacuum tubes made in the Bay Area.
Radio engineering created two worlds in the Bay Area that would greatly influence its future: a high-tech industry and a community of high-tech amateurs.
“Each generation of technology has seen faster computations, larger storage systems, and improved communications bandwidth. Nevertheless, physics may impose fundamental limits on computing systems that cannot be overcome. The most obvious limit is the speed of light: a computer in New York City will never be able to request a web page from a server in London and download the results with a latency of less than 0.01 seconds, because light takes 0.0186 seconds to travel the 5,585 kilometers each direction, consistent with Einstein’s Theory of Special Relativity. On the other hand, recently some scientists have claimed that they can send information without sending light particles by using quantum entanglement, something Einstein dismissively called spooky action at a distance. Indeed, in 2013, scientists in China measured the speed of information propagation due to quantum entanglement and found that it was at least 10,000 times faster than the speed of light.
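As a rough check of the latency arithmetic, here is a minimal Python sketch using the figures quoted above (~5,585 km New York–London, light in vacuum at ~299,792 km/s):

```python
# Speed-of-light lower bound on network latency (figures from the text).
DISTANCE_KM = 5_585
SPEED_OF_LIGHT_KM_S = 299_792

one_way = DISTANCE_KM / SPEED_OF_LIGHT_KM_S
round_trip = 2 * one_way

print(f"one-way:    {one_way:.4f} s")     # ~0.0186 s, as stated
print(f"round trip: {round_trip:.4f} s")  # ~0.0373 s, so <0.01 s is impossible
```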
Computation itself may also have a fundamental limit, according to Seth Lloyd, a professor of mechanical engineering and physics at MIT. In 2000, Lloyd showed that the ultimate speed of a computer was limited by the energy that it had available for calculations. Assuming that the computations would be performed at the scale of individual atoms, a central processor of 1 kilogram occupying the volume of 1 liter has a maximum speed of 5.4258 × 10^50 operations per second—roughly 10^41 times faster than today’s laptops.
Such speeds may seem unfathomable today, but Lloyd notes that if computers double in speed every two years, then this is only 250 years of technological progress. Lloyd thinks that such technological progress is unlikely. On the other hand, in 1767, the fastest computers were humans.
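The 250-year figure can be sanity-checked the same way. A sketch, assuming a baseline laptop of roughly 10^9 operations per second (my assumption, not a figure from the text):

```python
import math

ULTIMATE_OPS = 5.4258e50   # Lloyd's bound for a 1 kg, 1 L computer
LAPTOP_OPS = 1e9           # assumed modern-laptop baseline

ratio = ULTIMATE_OPS / LAPTOP_OPS
print(f"ratio ~ 10^{math.floor(math.log10(ratio))}")  # ~10^41, as quoted

doublings = math.log2(ratio)               # ~139 doublings needed
years = 2 * doublings                      # at one doubling every two years
print(f"~{years:.0f} years of doubling")   # ~277, the same order of magnitude
                                           # as the text's ~250-year estimate
```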
Because AI is increasingly able to teach and train itself across all technological and scientific domains—doing so at an exponential rate while sucking in staggering amounts of data from an increasingly networked and instrumented world—perhaps it is appropriate that a question mark be the closing punctuation for the title of this entry.”
Based on our current understanding of theoretical physics, a computer operating at the maximum speed possible would not be physically recognizable by today’s standards. It would probably appear as a sphere of highly organized mass and energy.
George Church (b. 1954), Yuan Gao (dates unavailable), Sriram Kosuri (dates unavailable), Mikhail Neiman (1905–1975)
“In 2012, George Church, Yuan Gao, and Sriram Kosuri, all with the Harvard Medical School’s Department of Genetics, announced that they had successfully stored 5.27 megabits of digitized information in strands of deoxyribonucleic acid (DNA), the biological molecule that is the carrier of genetic information. The stored information included a 53,400-word book, 11 JPEG images, and a JavaScript program. The following year, scientists at the European Bioinformatics Institute (EMBL-EBI) successfully stored and retrieved an even larger amount of data in DNA, including a 26-second audio clip of Martin Luther King’s “I Have a Dream” speech, 154 Shakespeare sonnets, the famous Watson and Crick paper on DNA structure, a picture of EMBL-EBI headquarters, and a document that described the methods the team used to accomplish the experiment.
Although first demonstrated in 2012, the concept of using DNA as a recording, storage, and retrieval mechanism goes back to 1964, when a physicist named Mikhail Neiman published the idea in the Soviet journal Radiotekhnika.
To accomplish this storage and retrieval, a digital file represented as 1s and 0s is first converted to the letters A, C, G, and T. These letters stand for the four chemical bases that make up DNA. The resulting long string of letters is then used to manufacture synthetic DNA molecules, with the sequence of the original bits corresponding to the sequence of nucleic acids. To decode the DNA and reconstitute the digital file, the DNA is put through a sequencing machine that translates the letters back into the 1s and 0s of the original digital file. The file can then be displayed on a screen, played through a speaker, or even run on a computer’s CPU.
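The bit-to-base conversion is easy to sketch in code. The mapping below (two bits per base) is purely illustrative, not the encoding Church et al. actually used (theirs mapped one bit per base to improve synthesis and sequencing reliability):

```python
# Illustrative DNA data encoding: each base carries two bits.
ENCODE = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
DECODE = {base: bits for bits, base in ENCODE.items()}

def bytes_to_dna(data: bytes) -> str:
    """Convert a byte string to a sequence of DNA bases."""
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):          # four 2-bit chunks per byte
            bases.append(ENCODE[(byte >> shift) & 0b11])
    return "".join(bases)

def dna_to_bytes(seq: str) -> bytes:
    """Reverse the mapping: four bases back into one byte."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | DECODE[base]
        out.append(byte)
    return bytes(out)

message = b"I have a dream"
strand = bytes_to_dna(message)
assert dna_to_bytes(strand) == message
print(strand)  # e.g. b"I" (0x49 = 01 00 10 01) becomes CAGC
```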
In the future, DNA could allow digital archives to reliably store vast amounts of digitized data: a single gram of DNA has the potential to store 215 million gigabytes of data, allowing all the world’s information to be stored in a space the size of a couple of shipping containers.”
To store information in DNA, a digital file represented as 1s and 0s is converted to the letters A, C, G, and T, the four chemical bases that make up DNA.
“After the US government adopted the Data Encryption Standard (DES) in 1977, it quickly became the most widely used encryption algorithm in the world. But from the start, there were concerns about the algorithm’s security. DES had an encryption key of just 56 bits, which meant there were only 72,057,594,037,927,936 possible encryption keys, leaving experts to speculate whether anyone with the means had built special-purpose computers for cracking DES-encrypted messages.
DES had other problems. Because it was designed to be implemented in hardware, software implementations were surprisingly slow. As a result, many academic cryptographers proposed new ciphers in the 1980s and 1990s. These algorithms found increasing use—in web browsers, for instance—but none had the credence that came with having gone through the government’s standards-making process.
So, in 1997, the US National Institute of Standards and Technology (NIST) announced a multiyear competition to decide upon the nation’s next encryption standard. NIST invited cryptographers all over the world to submit not only their best algorithms, but their recommendations for how the algorithms should be evaluated.
Adding another nail to the DES coffin, in 1998 the Electronic Frontier Foundation (EFF), a tiny civil liberties organization, announced that it had built one of those mythical DES-cracking machines, and for less than $250,000. Called Deep Crack, the machine could try 90 billion DES keys a second, allowing it to crack, on average, a DES-encrypted message in just 4.6 days.
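Both of the numbers quoted above check out; a quick verification in Python:

```python
# Sanity-check the DES figures quoted in the text.
KEY_BITS = 56
keyspace = 2 ** KEY_BITS
print(f"{keyspace:,} keys")                # 72,057,594,037,927,936 keys

# Deep Crack tried ~90 billion keys per second; on average a brute-force
# search covers half the keyspace before finding the right key.
KEYS_PER_SECOND = 90e9
avg_seconds = (keyspace / 2) / KEYS_PER_SECOND
print(f"{avg_seconds / 86_400:.1f} days")  # ~4.6 days, as stated
```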
In total, there were 15 credible submissions from nine different countries to the NIST contest. After considerable public analysis and three public conferences, the winner was decided in 2001: an algorithm called Rijndael, developed by two Belgian cryptographers, Vincent Rijmen and Joan Daemen. Rijndael is now called the Advanced Encryption Standard (AES). It can be run with 128-bit, 192-bit, or 256-bit keys, allowing for unprecedented levels of security. It can run on tiny 8-bit microcontrollers, and nearly all modern microprocessors now have special AES instructions, allowing them to encrypt at blindingly fast speeds.”
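For readers who want to see AES in practice, here is a minimal sketch using the third-party Python `cryptography` package (one library choice among many; any mainstream AES implementation would do), running AES-256 in GCM mode:

```python
# Minimal AES-256-GCM example (pip install cryptography).
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # AES key sizes: 128, 192, or 256
aesgcm = AESGCM(key)
nonce = os.urandom(12)                     # 96-bit nonce; never reuse per key

ciphertext = aesgcm.encrypt(nonce, b"attack at dawn", None)
plaintext = aesgcm.decrypt(nonce, ciphertext, None)
assert plaintext == b"attack at dawn"
```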
ALGOL (/ˈælɡɒl, -ɡɔːl/; short for “Algorithmic Language”)[1] is a family of imperative computer programming languages originally developed in 1958. ALGOL heavily influenced many other languages, and for more than thirty years it was the standard method for algorithm description used by the Association for Computing Machinery (ACM) in textbooks and academic sources, until object-oriented languages came around.[2]
In the sense that the syntax of most modern languages is “Algol-like”,[3] it was arguably the most influential of the four high-level programming languages with which it was roughly contemporary: FORTRAN, Lisp, and COBOL.[4] It was designed to avoid some of the perceived problems with FORTRAN and eventually gave rise to many other programming languages, including PL/I, Simula, BCPL, B, Pascal, and C.
Tom Kilburn (1921–2001), Richard Grimsdale (1929–2005), Douglas Webb (b. 1929), Jean H. Felker (1919–1994)
“With the invention of the transistor in 1947, the next step was to use it as a replacement for the vacuum tube. Tubes had a significant advantage compared to relays—they were a thousand times faster—but tubes required an inordinate amount of electricity, produced huge amounts of heat, and failed constantly. Transistors used a fraction of the power, produced practically no heat at all, and were more reliable than tubes. And because transistors were smaller than tubes, a transistorized machine would run inherently faster, because electrons had a shorter distance to move.
The University of Manchester demonstrated its prototype transistorized computer on November 16, 1953. The machine made use of the “point-contact” transistor, a piece of germanium that was in contact with two wires held in very close proximity to each other—the two “points.” The Manchester machine had 92 point-contact transistors and 550 diodes. The system had a word size of 48 bits. (Many of today’s microprocessors can operate on words that are 8, 16, 32, or 64 bits.) A few months later, Jean H. Felker at Bell Labs created the TRADIC (transistor digital computer) for the US Air Force, with 700 point-contact transistors and more than 10,000 diodes.
This point-contact transistor was soon replaced by the bipolar junction transistor, so named because it is formed by a junction involving two kinds of semiconductors. Manchester updated its prototype in 1955 with a new design that used 250 of these junction transistors. Called the Metrovick 950, that computer was manufactured by Metropolitan-Vickers, a British electrical engineering company.
In 1956, the Advanced Development Group at MIT Lincoln Lab used more than 3,000 transistors to build the TX-0 (Transistorized eXperimental computer zero), a transistorized version of the Whirlwind and the forerunner to Digital Equipment Corporation’s (DEC) PDP-1.”
“By 1951, the basic structure of stored-program computers had been worked out: a central processing unit (CPU) that had registers for storing numbers, an arithmetic logic unit (ALU) for performing mathematical operations, and logic for moving data between the CPU and memory. But the internal design of these early CPUs was a mess. Each instruction was implemented with a different set of wires and circuits, some with components in common, and others with their own individual logic.
British computer scientist Maurice Wilkes realized that the design of the CPU could be made more regular after seeing the design of the Whirlwind, which was controlled by a crisscrossing matrix of wires. Some of the wires were connected by a diode where they crossed. Voltage was applied to each horizontal wire in sequence; if a diode was present, the corresponding vertical wire would be energized and activate different parts of the CPU.
Wilkes realized that each line of the diode matrix in the Whirlwind could be viewed as a set of micro-operations that the CPU followed, a kind of “microprogram.” He formalized this idea in a lecture at the 1951 Manchester University Computer Inaugural Conference, immodestly titled “The Best Way to Design an Automatic Calculating Machine.” In the lecture, later published by the university, Wilkes proposed that his idea might seem at once obvious, because it described nothing more than a formalized way of creating a CPU using the same basic wires, diodes, and electronic switches that were already in use, and extravagant, because it might use more components than would otherwise be needed. But, Wilkes argued, it resulted in a system that was easier to design, test, and extend.
Wilkes was right. Microprogramming dramatically simplified the creation of CPUs, allowing instruction sets to become more complex. It also created unexpected flexibility: when IBM released System/360 in 1964, its engineers used microprogramming to allow the new computers to emulate the instructions of the IBM 1401, making it easier for customers to make the transition.”
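Wilkes's idea is easiest to see in miniature. The following toy sketch (a hypothetical machine, not the Whirlwind or any real CPU) stores each instruction's behavior as a row of micro-operations in a control store, all built from the same few datapath primitives:

```python
# Toy illustration of microprogramming: machine instructions are just
# indexes into a control store of micro-operation sequences.

# Datapath state: an accumulator, a memory-data register, and RAM.
state = {"ACC": 0, "MDR": 0, "RAM": [7, 5, 0, 0]}

# Micro-operations: the shared "wires and switches" every instruction reuses.
def mem_read(addr):  state["MDR"] = state["RAM"][addr]
def mem_write(addr): state["RAM"][addr] = state["ACC"]
def acc_load(_):     state["ACC"] = state["MDR"]
def acc_add(_):      state["ACC"] += state["MDR"]

# Control store: one row of micro-ops per machine instruction, analogous
# to one row of diodes in the Whirlwind's matrix.
CONTROL_STORE = {
    "LOAD":  [mem_read, acc_load],
    "ADD":   [mem_read, acc_add],
    "STORE": [mem_write],
}

def execute(program):
    """Control unit: fetch an opcode, then step through its microprogram."""
    for opcode, operand in program:
        for micro_op in CONTROL_STORE[opcode]:
            micro_op(operand)

execute([("LOAD", 0), ("ADD", 1), ("STORE", 2)])
print(state["RAM"])  # [7, 5, 12, 0] -- 7 + 5 stored at address 2
```

Adding a new instruction means adding a row to the control store rather than wiring new circuits, which is exactly the regularity Wilkes was after.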
SEE ALSO Whirlwind (1949), IBM 1401 (1959), IBM System/360 (1964)
“Maurice Wilkes (front left), designer of the EDSAC, one of the earliest stored-program electronic computers.”
Frederic Calland Williams (1911–1977), Tom Kilburn (1921–2001)
“The defining characteristic of the digital computer is that it stores both program and data in a single memory bank. In a modern computer, this arrangement lets one program load a second program into memory and execute it. On the limited-memory machines of the 1950s, intermixing programs and data made it possible to squeeze out more functionality by writing programs that literally modified themselves, now called self-modifying code. Modern computers use this ability to load code into the computer’s memory and execute it—the fundamental capability that makes a computer a general-purpose machine. But none of the machines built before the Manchester Small-Scale Experimental Machine (SSEM) were actually digital computers, at least not in the modern sense. They were either hardwired to perform a particular calculation, like the Atanasoff-Berry Computer; read their instructions from some kind of punched tape, like the Konrad Zuse machines; or had their program set on wires and switches, like ENIAC. They were really calculators, not computers.
The SSEM, nicknamed Baby by its creators at the University of Manchester, was built for testing and demonstrating the storage tube that Frederic Williams had designed in 1946 (the Williams tube, an early form of random access memory). Baby filled a 20-foot-square room and consisted of eight racks of equipment, the Williams storage tube, many radio tubes, and meters that reported voltages. Each tube had 1,024 bits. As the program ran and changed what was stored in its memory, the arrangement of dots on the storage tube changed.
Because the program was stored in memory, and relied on self-modifying code, it was easy for Kilburn to make changes. The first program that Baby successfully ran, written by Kilburn, was designed to find the highest proper factor of 2^18 (262,144). The program ran for 52 minutes and found the right answer: 2^17 (131,072), averaging 1.5 milliseconds per instruction. The original program was just 17 instructions long.
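For a sense of why such a small question took 52 minutes: Baby had no divide instruction, so divisibility had to be tested by repeated subtraction. A modern reconstruction of the idea (a sketch of the approach, not the actual 17-instruction program):

```python
# Hypothetical reconstruction of Baby's first problem: find the highest
# proper factor of 2**18, testing divisibility by repeated subtraction
# because the machine could not divide.

def divides(d: int, n: int) -> bool:
    """Test whether d divides n using only subtraction."""
    while n >= d:
        n -= d
    return n == 0

def highest_proper_factor(n: int) -> int:
    """Search downward from n - 1 for the first divisor."""
    candidate = n - 1
    while not divides(candidate, n):
        candidate -= 1
    return candidate

print(highest_proper_factor(2 ** 18))  # 131072, i.e. 2**17
```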
Arriving at the correct answer was no easy feat. As Williams reportedly stated, “The spots on the display tube entered a mad dance. In early trials, it was a dance of death leading to no useful result . . . But one day it stopped, and there, shining brightly in the expected place, was the expected answer.””
“Recreation of the Manchester Small-Scale Experimental Machine (a.k.a., the Manchester “Baby”) at the Museum of Science and Industry in Manchester, UK.”
“The Curta is perhaps the most elegant, compact, and functional mechanical calculator ever manufactured. Designed by Austrian engineer Curt Herzstark, it is the only digital mechanical pocket calculator ever invented. Handheld and powered by a crank on the top, the Curta can add, subtract, multiply, and divide.
Curt Herzstark’s father, Samuel Jacob Herzstark, was a highly regarded Austrian importer and manufacturer of mechanical calculators and other precision instruments. The younger Herzstark finished high school and apprenticed at his father’s company, which he took over when his father died in 1937.
At the time, mechanical calculators were big and heavy desktop affairs. After one of Herzstark’s customers complained that he didn’t want to go back to the office just to add up a column of numbers, Herzstark started designing a handheld calculator. He had an early prototype working in January 1938, just two months before Germany invaded and annexed Austria. Despite Herzstark being half-Jewish, the Nazis let him continue to operate the factory, provided that it cease all civilian production and devote itself to creating devices for the Reich.
In 1943, two of Herzstark’s employees were arrested for distributing transcripts of English radio broadcasts; Herzstark was subsequently arrested for aiding the employees and for “indecent contact with Aryan women.” He was sent to the Buchenwald concentration camp, where he was recognized by one of his former employees, who was now a guard. The guard told the head of the camp’s factory about the mechanical calculator. The Germans then instructed Herzstark to finish his project, so that the camp could give the device to Hitler as a present after Germany won the war. That never happened: Buchenwald was liberated on April 11, 1945, and Hitler killed himself 19 days later.
After liberation, Herzstark took the drawings he had done at the camp to a machine shop and had three working prototypes eight weeks later. The first calculators were produced commercially in the fall of 1948.”