San Francisco Bay Area Culture and Society – 1849 to 1919 AD

Return to Timeline of the History of Computers or History

Culture and Society

“In the 1890s most of California was still unexplored. A vast area of the state, the Sierra Nevada, was virtually inaccessible. The fascination of finding out what lay inside drew men from all backgrounds but especially scientists. In 1860 (just a few years after becoming a state of the USA) California had created the Office of State Geologist and had hired Josiah Whitney, professor of geology at Harvard University, to lead it. Between 1863 and 1864 Whitney had led expeditions that had included botanists, zoologists, paleontologists and topographers to explore the High Sierra, “discovering” what today are known as Yosemite and Kings Canyon national parks. Another geologist, Clarence King, who had traveled overland to California in 1863 from Yale, had become the first white man to spot Mount Whitney, the highest mountain in the USA outside of Alaska. The mountains were largely explored by the least documented of all explorers: the European shepherds, who probably created many of the trails used today by mountaineers. The High Sierra was an ideal terrain for sheep, thanks to its many meadows and relatively mild climate. One such shepherd was John Muir, originally from Scotland, a nomadic sawyer who had reached San Francisco in 1868, having traveled by steamship from Florida via Cuba and Panama. He settled in Yosemite for a few years and eventually became influential enough to convince the USA to create Yosemite and Sequoia National Parks in 1890, but never actually hiked what is today North America’s most famous trail, the John Muir Trail from Yosemite to Mount Whitney. The idea for that trail must be credited to Theodore Solomons, born and raised in San Francisco, who in 1892 set out to independently explore regions of the Sierra Nevada that no white man had seen before. The 1890s were the golden age of Sierra exploration.”

“San Francisco was still living with the legacy of the “Gold Rush” of 1849. The “Barbary Coast,” as the red-light district was known, was a haven for brothels and nightclubs. The thousands of Chinese immigrants who had been lured to California to build railways, mine gold and grow food had fathered a new generation that settled in “Chinatown,” the largest Chinese community outside Asia. The port served steamers bound for the coast or Asia as well as ferry traffic to the bay and the Sacramento River, and supported a large community of teamsters and longshoremen that also made it the most unionized city in the US.”

“Dubious characters still roamed the landscape: the career of despised media tycoon William Randolph Hearst got its start in 1887 when his father handed him the San Francisco Examiner. But there were also honest enterprising men, such as Amadeo Giannini, who founded the Bank of Italy in 1904 to serve the agricultural economy of the Santa Clara Valley (it was later renamed Bank of America). At the turn of the century one could already sense San Francisco’s predisposition towards rebellion: John Muir’s Sierra Club (formed in 1892) led the first environmental protest when the state planned a dam in Yosemite; the American Anti-Imperialist League (formed in 1898) organized the first anti-war movement when the US went to war against Spain (a war largely architected by Hearst to sell more copies of his newspapers); and the Union Labor Party (formed in 1901) became the first pseudo-socialist party to win a mayoral election in a US city. In 1871 Susan Mills and her husband Cyrus founded Mills College in Oakland, the first women’s college in the western states.”

“Most of this was irrelevant to the rest of the nation. San Francisco made the national news in 1906 because of the earthquake and fire that leveled most of it.”

“California was also blessed with some of the most reformist governors in the country, notably Hiram Johnson (1911-1917), who democratized California and reduced the power of the political barons, and William Stephens (1917-1923), who did something similar to curb the political power of unions. Their policies focused on raising the living standards of the middle class, and therefore of the suburbs.”

“Immigration made San Francisco a cosmopolitan city. There had already been Italians when California was still under Mexican rule. They were fishermen and farmers. By the turn of the century, old and new Italians had created an Italian quarter in North Beach. Then came the Japanese, who replaced the Chinese in agriculture. At the beginning of the century San Francisco boasted two Japanese-language newspapers: “The New World” and the “Japanese American.” Mexicans immigrated from 1910 to 1930, following the Mexican revolution and the construction of a railway.”

“San Francisco was also becoming friendly toward the arts. In 1902 the California Society of Artists was founded by a cosmopolitan group that included the Mexican-born painter Xavier Martinez and the Swiss-born painter and muralist Gottardo Piazzoni. At the California School of Design, many students were influenced by muralist and painter Arthur Mathews, one of the founders of the American Arts and Crafts Movement that tried to reconcile craftsmanship with industrial consumerism (a major national trend after the success of Boston’s 1897 American Arts and Crafts Exhibition). A symbolic event took place after the 1906 earthquake when Mathews opened his own shop (both as a craftsman and a painter) and started publishing one of the earliest art magazines in town, the Philopolis. Another by-product of the American Arts and Crafts Movement was Oakland’s California College of the Arts and Crafts founded in 1907 by one of the movement’s protagonists, Frederick Meyer.”

“More and more artists were moving to San Francisco. They created the equivalent of Paris’ Montmartre artistic quarter at the four-story building called “Montgomery Block” (also nicknamed “Monkey Block”), the epicenter of San Francisco’s bohemian life. Another art colony was born in the coastal city of Carmel, about two hours south of San Francisco. Armin Hansen opened his studio there in 1913, impressionist master William Merritt Chase taught there in 1914, and Percy Gray settled there in 1922.”

“Architects were in high demand both because of the fortunes created by the railway and because of the reconstruction of San Francisco after the earthquake (for example, Willis Polk). Mary Colter studied in San Francisco before venturing into her vernacular architecture for the Southwest’s desert landscape. The Panama-Pacific International Exposition of 1915, held in San Francisco, for which Bernard Maybeck built the exquisite Palace of Fine Arts, symbolized the transformation that had taken place in the area: from emperors and gold diggers to inventors and investors (and, soon, defense contractors). A major sculptor was Ralph Stackpole, who in 1913 founded the California Society of Etchers and in 1915 provided sculptures for the Panama-Pacific International Exposition, notably the Palace of Varied Industries (demolished after the exposition). Influenced by the Panama-Pacific International Exposition, during the 1920s Maynard Dixon created an original Western style of painting. It was at the Panama-Pacific International Exposition that the painters of the “Society of Six” (August Gay, Bernard von Eichman, Maurice Logan, Louis Siegriest, and William Clapp) were seduced by impressionism. Last but not least, in 1921 Ansel Adams began to publish his photographs of Yosemite. It was another small contribution to changing the reputation of that part of California, and to the birth of one of the most vibrant schools of photography in the world. Literature, on the other hand, lagged behind, represented by Frank Pixley’s literary magazine the “Argonaut,” located at Montgomery Block.”

“Classical music was represented by its own school of iconoclasts. From 1912 to 1916, Charles Seeger taught unorthodox techniques such as dissonant counterpoint at UC Berkeley. Starting with “The Tides of Manaunaun” (1912), pianist Henry Cowell, a pupil of Seeger, began exploring the tone-cluster technique. That piece was based on poems by John Osborne Varian, the father of Russell and Sigurd Varian, who had moved to Halcyon, a utopian community founded in 1903 halfway between San Francisco and Los Angeles by the theosophists William Dower and Francia LaDue. Varian’s sons Russell and Sigurd later became friends with Ansel Adams through their mutual affiliation with the Sierra Club.”

Fair Use Sources:


Data Science - Big Data History

DNA Data Storage – 2012 AD



DNA Data Storage

George Church (b. 1954), Yuan Gao (dates unavailable), Sriram Kosuri (dates unavailable), Mikhail Neiman (1905–1975)

“In 2012, George Church, Yuan Gao, and Sriram Kosuri, all with the Harvard Medical School’s Department of Genetics, announced that they had successfully stored 5.27 megabits of digitized information in strands of deoxyribonucleic acid (DNA), the biological molecule that is the carrier of genetic information. The stored information included a 53,400-word book, 11 JPEG images, and a JavaScript program. The following year, scientists at the European Bioinformatics Institute (EMBL-EBI) successfully stored and retrieved an even larger amount of data in DNA, including a 26-second audio clip of Martin Luther King’s “I Have a Dream” speech, 154 Shakespeare sonnets, the famous Watson and Crick paper on DNA structure, a picture of EMBL-EBI headquarters, and a document that described the methods the team used to accomplish the experiment.

Although first demonstrated in 2012, the concept of using DNA as a recording, storage, and retrieval mechanism goes back to 1964, when a physicist named Mikhail Neiman published the idea in the Soviet journal Radiotekhnika.

To accomplish this storage and retrieval, first a digital file represented as 1s and 0s is converted to the letters A, C, G, and T. These letters are the four chemical bases that make up DNA. The resulting long string of letters is then used to manufacture synthetic DNA molecules, with the sequence of the original bits corresponding to the sequence of nucleic acids. To decode the DNA and reconstitute the digital file, the DNA is put through a sequencing machine that translates the letters back into the 1s and 0s of the original digital files. Those files can then be displayed on a screen, played through a speaker, or even run on a computer’s CPU.
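The bit-to-base conversion described above can be sketched in a few lines. This is an illustrative two-bits-per-base mapping, not the scheme Church’s team actually used (real encodings are more elaborate, in part to avoid long runs of identical bases that sequencing machines misread):

```python
# Illustrative mapping: two bits per nucleotide. Real schemes (including
# the 2012 Harvard experiment) use more robust encodings.
ENCODE = {"00": "A", "01": "C", "10": "G", "11": "T"}
DECODE = {base: bits for bits, base in ENCODE.items()}

def bits_to_dna(bits: str) -> str:
    """Convert an even-length bit string into a strand of bases."""
    return "".join(ENCODE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def dna_to_bits(dna: str) -> str:
    """Recover the original bit string from a strand."""
    return "".join(DECODE[base] for base in dna)

data = format(ord("H"), "08b")   # "01001000", one byte of a file
strand = bits_to_dna(data)       # "CAGA"
assert dna_to_bits(strand) == data
```

Round-tripping every byte of a file through such a mapping is the encode-synthesize-sequence-decode cycle described above, minus the biochemistry.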

In the future, DNA could allow digital archives to reliably store vast amounts of digitized data: a single gram of DNA has the potential to store 215 million gigabytes of data, allowing all the world’s information to be stored in a space the size of a couple of shipping containers.”

SEE ALSO Magnetic Tape Used for Computers (1951), DVD (1995)

To store information in DNA, a digital file represented as 1s and 0s is converted to the letters A, C, G, and T, the four chemical bases that make up DNA.

Fair Use Sources: B07C2NQSPV

History Software Engineering

Core Memory – 1951 AD



Core Memory

An Wang (1920–1990), Jay Forrester (1918–2016)

“The first computers didn’t have rewritable memory. Instead, they were “hardwired” to read inputs, perform calculations, and output the results. But it soon became apparent that a rewritable main memory could be used to hold programs, making them easier to develop and debug, as well as data, making it possible for a computer to perform calculations on much larger problems.

Core memory worked by inducing a magnetic field into a tiny magnetic ring, or core. Each core was magnetized in the clockwise or counterclockwise direction, naturally allowing the core to store a single bit. Bits were stored by sending an electrical pulse along a pair of horizontal and vertical wires that crossed at a particular core, with one direction of magnetic flow storing a 0 and the other a 1. A third wire running through each core was used to read what had previously been stored. Core memory had the advantage of remembering its contents even when power was removed. The big disadvantage was that the cores had to be strung by hand into memory systems, which is why core memory was so expensive to produce.
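The coincident-current addressing just described can be illustrated with a toy software model (the class and method names are hypothetical; real core planes were analog circuits, and reads were destructive, requiring a rewrite cycle):

```python
# Toy model of a coincident-current core plane. A bit is addressed by the
# crossing of one horizontal and one vertical drive wire; only the core at
# the intersection sees enough combined current to flip.
class CorePlane:
    def __init__(self, rows: int, cols: int) -> None:
        self.cores = [[0] * cols for _ in range(rows)]

    def write(self, x: int, y: int, bit: int) -> None:
        # Half-select currents on wires x and y sum only at core (x, y).
        self.cores[x][y] = bit

    def read(self, x: int, y: int) -> int:
        # Reads were destructive: the electronics drove the core toward 0,
        # the sense wire reported whether it flipped, and a rewrite cycle
        # restored the stored value.
        bit = self.cores[x][y]
        self.cores[x][y] = 0
        if bit:
            self.write(x, y, 1)
        return bit

plane = CorePlane(32, 32)   # 1,024 bits, the size of Forrester's array
plane.write(3, 7, 1)
assert plane.read(3, 7) == 1
assert plane.read(3, 7) == 1  # the rewrite cycle preserved the bit
```

The x/y addressing is why an n × n plane needed only 2n drive wires plus one sense wire, rather than one wire per core.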

Computer engineer An Wang came up with the basics of how to make core memory work while collaborating with Howard Aiken on the Harvard Mark IV computer; he filed for a patent on the invention in 1949. But Harvard lost interest in computing, so in 1951, Wang left and started his own company, Wang Laboratories. IBM bought the patent from Wang Laboratories for $500,000 in 1956.

Meanwhile at MIT, professor Jay Forrester saw an advertisement for a new magnetic material, realized it could be used to store data, and built a prototype system that stored 32 bits of data. At the time, MIT was building the Whirlwind computer to create the first computerized flight simulator. Whirlwind was designed to use an electrostatic memory system based on storage tubes, but MIT’s engineers hadn’t been able to get the tubes to work. Working with a graduate student, Forrester spent two years creating the first practical core memory system, storing 1,024 bits of data (1 kibibit) in an array of 32 × 32 cores. The memory was installed in Whirlwind in April 1951. That same year, Forrester filed for a patent on an even more efficient technique of arranging cores in three-dimensional arrays. IBM bought that patent from MIT for $13 million in 1964.”

SEE ALSO Delay Line Memory (1944), Williams Tube (1946), Whirlwind (1949), Dynamic RAM (1966)

“Core memory was developed for MIT’s groundbreaking Whirlwind computer in April 1951.”

Fair Use Source: B07C2NQSPV

History Software Engineering

Actual Bug Found – First “Debugging” – 1947 A.D.



Actual Bug Found

Howard Aiken (1900–1973), William “Bill” Burke (dates unavailable), Grace Murray Hopper (1906–1992)

“Harvard professor Howard Aiken completed the Mark II computer in 1947 for the Naval Proving Ground in Dahlgren, Virginia. With 13,000 high-speed electromechanical relays, the Mark II processed 10-digit decimal numbers, performed floating-point operations, and read its instructions from punched paper tape. Today we still use the phrase “Harvard architecture” to describe computers that store their programs separately from their data, unlike the “von Neumann” machines that store code and data in the same memory.

But what makes the Mark II memorable is not the way it was built or its paper tape, but what happened on September 9, 1947. On that day at 10:00 a.m., the computer failed a test, producing the number 2.130476415 instead of 2.130676415. The operators ran another test at 11:00 a.m., and then another at 3:25 p.m. Finally, at 3:45 p.m., the computer’s operators, including William “Bill” Burke, traced the problem to a moth that was lodged inside Relay #70, Panel F. The operators carefully removed the bug and affixed it to the laboratory notebook, with the notation “First actual case of bug being found.”

Burke ended up following the computer to Dahlgren, where he worked for several years. One of the other operators was the charismatic pioneer Grace Murray Hopper, who had volunteered for the US Navy in 1943, joined the Harvard staff as a research fellow in 1946, and then moved to the Eckert-Mauchly Computer Corporation in 1949 as a senior mathematician, where she helped the company to develop high-level computer languages. Grace Hopper didn’t actually find the bug, but she told the story so well, and so many times, that many histories now erroneously credit her with the discovery. As for the word bug, it had been used to describe faults in machines as far back as 1875; according to the Oxford English Dictionary, in 1889, Thomas Edison told a journalist that he had stayed up two nights in a row discovering, and fixing, a bug in his phonograph.”

SEE ALSO COBOL Computer Language (1960)

“The moth found trapped between points at Relay #70, Panel F, of the Mark II Aiken Relay Calculator while it was being tested at Harvard University. The operators affixed the moth to the computer log with the entry “First actual case of bug being found.””

Fair Use Source: B07C2NQSPV

History Software Engineering

Binary-Coded Decimal – 1944 A.D.



Binary-Coded Decimal

Howard Aiken (1900–1973)

“There are essentially three ways to represent numbers inside a digital computer. The most obvious is to use base 10, representing each of the numbers 0–9 with its own bit, wire, punch-card hole, or printed symbol (e.g., 0123456789). This mirrors the way people learn and perform arithmetic, but it’s extremely inefficient.

The most efficient way to represent numbers is to use pure binary notation: with binary, n bits represent 2^n possible values. This means that 10 wires can represent any number from 0 to 1023 (2^10 − 1). Unfortunately, it’s complex to convert between decimal notation and binary.

The third alternative is called binary-coded decimal (BCD). Each decimal digit becomes a set of four binary digits with the bit weights 1, 2, 4, and 8, so the digits 0 through 9 are encoded as 0000, 0001, 0010, 0011, 0100, 0101, 0110, 0111, 1000, and 1001. BCD packs each digit into 4 bits rather than the 10 wires of one-wire-per-digit base 10, yet it’s remarkably straightforward to convert between decimal numbers and BCD. Further, BCD has the profound advantage of allowing programs to exactly represent the numeric value 0.01, something that’s important when performing monetary computations.
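A minimal sketch of that digit-by-digit conversion (the function names here are illustrative, not from any historical machine):

```python
def decimal_to_bcd(n: int) -> str:
    # Each decimal digit maps independently to its own 4-bit group,
    # which is exactly why BCD conversion is so straightforward.
    return " ".join(format(int(digit), "04b") for digit in str(n))

def bcd_to_decimal(bcd: str) -> int:
    # Decode each 4-bit group back to one decimal digit.
    return int("".join(str(int(group, 2)) for group in bcd.split()))

assert decimal_to_bcd(1947) == "0001 1001 0100 0111"
assert bcd_to_decimal("0001 1001 0100 0111") == 1947
```

Compare with pure binary, where 1947 is 11110011011: one bit shorter, but producing it requires repeated division by 2 across the whole number rather than a per-digit table lookup.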

Early computer pioneers experimented with all three systems. The ENIAC computer built in 1943 was a base 10 machine. At Harvard University, Howard Aiken designed the Mark 1 computer to use a modified form of BCD. And in Germany, Konrad Zuse’s Z1, Z2, Z3, and Z4 machines used binary floating-point arithmetic.

After World War II, IBM went on to design, build, and sell two distinct lines of computers: scientific machines that used binary numbers, and business computers that used BCD. Later, IBM introduced System/360, which used both methods. On modern computers, BCD is typically supported with software, rather than hardware.

In 1972, the US Supreme Court ruled that computer programs could not be patented. In Gottschalk v. Benson, the court ruled that converting binary-coded decimal numerals into pure binary was “merely a series of mathematical calculations or mental steps, and does not constitute a patentable ‘process’ within the meaning of the Patent Act.”

SEE ALSO Binary Arithmetic (1703), Floating-Point Numbers (1914), IBM System/360 (1964)

Howard Aiken inspects one of the four paper-tape readers of the Mark 1 computer.

Fair Use Source: B07C2NQSPV