Connection Machine – Parallelization – 1985 AD



Danny Hillis (b. 1956)

Processing more data faster is one of the fundamental challenges in computing. From the 1940s until the 1990s, most improvements came from faster components, higher clock speeds, and faster storage systems. But another approach to processing more data faster is to break up a problem into many smaller problems and solve them in parallel, with a fleet of machines.

Parallelization is possible because many computing problems are similar to knitting sweaters. If you needed four sweaters in a month, you could hire the world’s fastest knitter to churn them out, knitting one each week. Or you could hire 12 fast knitters to work simultaneously on the sleeves and the bodies and then join them all together on Friday. And if you needed 10,000 sweaters? In that case, you could hire 50,000 knitters of average skill: they don’t have to be fast, and it doesn’t really matter if some of them fail, provided there is a clever approach for properly organizing the effort.
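The knitting analogy can be sketched in a few lines of code: split a big job into independent pieces, hand them to a pool of workers, and join the results at the end. This is an illustrative sketch only; the names (`knit_piece`, `knit_sweaters`, `PIECES_PER_SWEATER`) are invented for the example, not drawn from any real system.

```python
# Minimal sketch of the knitting analogy: many workers each handle one
# independent piece, and the pieces are joined back together at the end.
from concurrent.futures import ThreadPoolExecutor

PIECES_PER_SWEATER = 3  # two sleeves and a body (illustrative assumption)

def knit_piece(piece_id):
    # Stand-in for real work; each piece is independent of the others.
    return f"piece-{piece_id}"

def knit_sweaters(n_sweaters, n_workers):
    piece_ids = range(n_sweaters * PIECES_PER_SWEATER)
    # The "12 fast knitters": a pool of workers processing pieces in parallel.
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        pieces = list(pool.map(knit_piece, piece_ids))
    # "Join them all together on Friday": regroup the pieces into sweaters.
    return [pieces[i:i + PIECES_PER_SWEATER]
            for i in range(0, len(pieces), PIECES_PER_SWEATER)]

sweaters = knit_sweaters(4, 12)
```

The key property that makes this work, for sweaters and for data alike, is that the pieces can be produced in any order by any worker, because none of them depends on another.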

This is the kind of parallelism behind Thinking Machines. Based on the MIT PhD thesis of its cofounder Danny Hillis, the company's first supercomputer combined 65,536 puny 1-bit microprocessors, each "connected" to the others through a massively parallel network.

Called the CM-1, it proved difficult to program, because few programmers could visualize algorithms that run efficiently on massively connected 1-bit processors. In 1991, the company released the CM-5, which used 1,024 standard 32-bit microprocessors. Programming was easier, and with so many processors, the CM-5 was one of the fastest computers in existence.

Although the idea of combining thousands of computers together in a single machine worked well in the 1990s, a decade later the dominant approach for solving big problems was grid computing—connecting thousands (or millions) of conventional computer systems over Ethernet with specialized software. Using conventional systems that were not very fast and subject to failure was the electronic equivalent of hiring “average knitters” in the sweater example. Instead of designing complicated parallelizing hardware, like the CM series, the challenge shifted to designing clever software.
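The "clever software" point above can be made concrete with a small sketch: instead of demanding reliable hardware, the scheduler simply reschedules any task whose worker fails. Everything here is illustrative; the failure rate, the function names, and the retry loop are assumptions for the example, not a description of any real grid-computing system.

```python
# Hedged sketch of fault-tolerant scheduling: conventional, unreliable
# workers are fine as long as the software retries whatever fails.
import random

random.seed(0)  # make the simulated failures reproducible

def unreliable_worker(task):
    # Simulate an "average knitter": a cheap machine that sometimes fails.
    if random.random() < 0.2:  # assumed 20% failure rate, per attempt
        raise RuntimeError("worker failed")
    return task * task

def run_with_retries(tasks, max_attempts=10):
    results = {}
    pending = list(tasks)
    for _ in range(max_attempts):
        still_pending = []
        for task in pending:
            try:
                results[task] = unreliable_worker(task)
            except RuntimeError:
                still_pending.append(task)  # reschedule on another machine
        pending = still_pending
        if not pending:
            break
    return results

results = run_with_retries(range(10))
```

The design choice is the one the text describes: failure is treated as routine and handled by rescheduling, so no individual machine needs to be fast or dependable.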

SEE ALSO Hadoop Makes Big Data Possible (2006)

Removing a panel of the CM-2, exposing one of the "subcubes" containing 16 printed circuit boards, each board holding 32 central processing chips, each chip containing 16 processors, for a total of 8,192 processors in the subcube, or 65,536 processors in the full machine.

Fair Use Source: B07C2NQSPV