
Text-to-Speech (TTS) – 1984 AD

Return to Timeline of the History of Computers

1984

Text-to-Speech

Dennis H. Klatt (1938–1988)

Text-to-speech (TTS) systems read typewritten text and speak it aloud. The first English text-to-speech system was developed in Japan in 1968, but it was DECtalk, a standalone appliance for turning text into speech, that commoditized the technology. The invention helped many people, including those unable to talk because of medical conditions or disabilities. While much basic research fails to transition into practical applications, DECtalk was a success story.

Much of DECtalk’s core capability was built on the text-to-speech algorithms developed by Dennis Klatt, who had joined MIT as an assistant professor in 1965. Packaged into a hardware appliance by the Digital Equipment Corporation, DECtalk had asynchronous serial ports that could connect to virtually any computer with an RS-232 interface. DECtalk was kind of like a printer, but for voice! Two telephone jacks let users hook DECtalk up to a telephone line, allowing DECtalk to make and receive calls, speak to the person at the other end of the phone line, and decode the touch tones of their responses.

TTS systems such as DECtalk work by first converting text to phonemic symbols and then converting those symbols into analog waveforms that humans hear as sound.
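That two-stage pipeline can be sketched in a few lines of Python. To be clear, this is an illustrative caricature, not DECtalk's actual method: Klatt's synthesizer used letter-to-sound rules, a large pronunciation dictionary, and a sophisticated cascade/parallel formant synthesizer, whereas the lexicon and formant frequencies below are tiny, hard-coded stand-ins.

```python
import math

# Toy lexicon mapping whole words to phoneme symbols. A real system uses
# letter-to-sound rules plus a large pronunciation dictionary.
LEXICON = {"hello": ["HH", "AH", "L", "OW"], "world": ["W", "ER", "L", "D"]}

# Rough first/second formant frequencies (Hz) per phoneme.
# Illustrative values only, not measured data.
FORMANTS = {
    "HH": (500, 1500), "AH": (700, 1220), "L": (360, 1300),
    "OW": (450, 800), "W": (300, 610), "ER": (490, 1350), "D": (400, 1700),
}

def text_to_phonemes(text):
    """Stage 1: text -> phonemic symbols (via table lookup here)."""
    phonemes = []
    for word in text.lower().split():
        phonemes.extend(LEXICON.get(word, []))
    return phonemes

def phonemes_to_waveform(phonemes, sr=8000, dur=0.1):
    """Stage 2: phonemes -> waveform. Each phoneme becomes a short
    sum of sinusoids at its formant frequencies."""
    samples = []
    for ph in phonemes:
        f1, f2 = FORMANTS[ph]
        for i in range(int(sr * dur)):
            t = i / sr
            samples.append(0.5 * math.sin(2 * math.pi * f1 * t)
                           + 0.25 * math.sin(2 * math.pi * f2 * t))
    return samples

wave = phonemes_to_waveform(text_to_phonemes("hello world"))
```

Fed to a sound card, `wave` would produce vowel-like tones rather than intelligible speech; what it shows is only the shape of the pipeline, text becoming symbols becoming a waveform.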

When it launched, DECtalk cost approximately $4,000 and came equipped with a variety of speaking voices. Over time the number of voices expanded, with names such as Perfect Paul, Beautiful Betty, Huge Harry, Frail Frank, Kit the Kid, Rough Rita, Uppity Ursula, Doctor Dennis, and Whispering Wendy. An early user of the DECtalk algorithm was the world-famous British physicist Stephen Hawking, who lived with amyotrophic lateral sclerosis (ALS). Unable to talk, Hawking became best recognized by the “Perfect Paul” voice. The National Weather Service also used DECtalk for its NOAA Weather Radio broadcasts.

SEE ALSO Electronic Speech Synthesis (1928)

After a degenerative nerve disease left him unable to speak, physicist Stephen Hawking used a text-to-speech device as his voice.

Fair Use Source: B07C2NQSPV


Electronic Speech Synthesis – 1928 AD


1928

Electronic Speech Synthesis

Homer Dudley (1896–1980)

Long before Siri®, Alexa, Cortana, and other synthetic voices were reading emails, telling people the time, and giving driving directions, research scientists were exploring approaches to make a person’s voice take up less bandwidth as it moved through the phone system.

In 1928, Homer Dudley, an engineer at Bell Telephone Labs, developed the vocoder, a process that compresses human speech into intelligible electronic transmissions and creates synthetic speech from scratch at the other end by imitating the sounds of the human vocal cords. The vocoder analyzes real speech and reassembles it as a simplified electronic impression of the original waveform. To recreate the sound of human speech, it uses an oscillator, a gas discharge tube (for the hissing sounds), filters, and other components.
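Dudley’s analyze-then-reassemble loop can be sketched as a toy channel vocoder: measure the speech’s amplitude envelope in each of a handful of frequency bands, then excite those same bands with a buzz carrier scaled by the envelopes. Everything below is illustrative rather than Bell Labs’ design: the band centers, pitch, and filter settings are arbitrary choices, the “buzz” is a naive sawtooth standing in for the oscillator, and the hiss source (the gas discharge tube) is omitted for brevity.

```python
import math

def bandpass(x, fc, sr, q=2.0):
    """Biquad band-pass filter (standard audio-cookbook coefficients)."""
    w = 2 * math.pi * fc / sr
    alpha = math.sin(w) / (2 * q)
    a0 = 1 + alpha
    b0, b2 = alpha / a0, -alpha / a0
    a1, a2 = -2 * math.cos(w) / a0, (1 - alpha) / a0
    y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for xn in x:
        yn = b0 * xn + b2 * x2 - a1 * y1 - a2 * y2
        x1, x2, y1, y2 = xn, x1, yn, y1
        y.append(yn)
    return y

def envelope(x, sr, cutoff=30.0):
    """Rectify and smooth: a crude amplitude-envelope follower."""
    k = 1 - math.exp(-2 * math.pi * cutoff / sr)
    e, out = 0.0, []
    for xn in x:
        e += k * (abs(xn) - e)
        out.append(e)
    return out

def vocode(speech, sr=8000, f0=110.0, centers=(300, 600, 1200, 2400)):
    """Channel vocoder: per-band envelopes of the speech modulate a
    buzz carrier filtered through the same bands."""
    n = len(speech)
    # Naive sawtooth at pitch f0 stands in for Dudley's oscillator.
    carrier = [2 * ((i * f0 / sr) % 1.0) - 1 for i in range(n)]
    out = [0.0] * n
    for fc in centers:
        env = envelope(bandpass(speech, fc, sr), sr)   # analysis
        car = bandpass(carrier, fc, sr)                # synthesis source
        for i in range(n):
            out[i] += env[i] * car[i]
    return out
```

The bandwidth saving follows from the structure: instead of the full waveform, only the slowly varying band envelopes (plus pitch) need to cross the channel, and the receiver rebuilds an impression of the voice from its own carrier.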

In 1939, the renamed Bell Labs unveiled the speech synthesizer at the New York World’s Fair. Called the Voder, it was manually operated by a human, who used a series of keys and foot pedals to generate the hisses, tones, and buzzes, forming vowels, consonants, and ultimately recognizable speech.

The vocoder followed a different path of technology development than the Voder. In 1939, with war having already broken out in Europe, Bell Labs and the US government became increasingly interested in developing some kind of secure voice communication. After additional research, the vocoder was modified and used in World War II as the encoder component of a highly sensitive secure voice system called SIGSALY that Winston Churchill used to speak with Franklin Roosevelt.

Then, taking a sharp turn in the 1960s, the vocoder made the leap into music and pop culture. It was and continues to be used for a variety of sounds, including electronic melodies and talking robots, as well as voice-distortion effects in traditional music. In 1961, the first computer to sing was the International Business Machines Corporation (IBM®) 7094, using a vocoder to warble the tune “Daisy Bell.” (This was the same tune that would be used seven years later by the HAL 9000 computer in Stanley Kubrick’s 2001: A Space Odyssey.) In 1995, 2Pac, Dr. Dre, and Roger Troutman used a vocoder to distort their voices in the song “California Love,” and in 1998 the Beastie Boys used a vocoded vocal in their song “Intergalactic.”

SEE ALSO “As We May Think” (1945), HAL 9000 Computer (1968)

The Voder, exhibited by Bell Telephone at the New York World’s Fair.

Fair Use Source: B07C2NQSPV