Appendix 3

 

 

A Brief History of Computers

 

 

The Abacus, the Pascaline, and the Arithmomètre

 

The roots of modern computers reach much deeper than those of microelectronics, going back some 4,600 years to the abacus, the first known arithmetic calculation tool. The next innovation was the Pascaline, invented by Blaise Pascal in 1642. It was a mechanical calculator that could perform only additions and subtractions, and its utility was therefore rather limited.

In 1673, Gottfried Wilhelm von Leibniz devised a method for performing multiplications and divisions mechanically, but the technology of his time was not precise enough to manufacture such a device. It took another 178 years before Leibniz’s method could be realized, giving birth to the first four-function mechanical calculator. This calculator was invented by the Frenchman Thomas de Colmar and was first commercialized in 1851 under the name Arithmomètre, exactly 100 years before the UNIVAC I, the first commercial electronic computer, was sold.

The Arithmomètre was robust and practical enough for routine use. It launched an industry that lasted until the 1970s, when electronic calculators based on microchips quickly replaced electromechanical ones.

 

 

The Jacquard loom and the electromechanical tabulator

 

The first known programmable machine was the Jacquard loom, invented in 1801 by the Frenchman Joseph Marie Jacquard to automatically weave fabrics with complex designs. The loom was controlled by punched cards that stored the program, directing its step-by-step operation.

This was not only a successful invention; it was also the first example of an automatic system controlled by a program, i.e., by a procedure that could be changed without changing the machine itself. The programmable loom was thus the true precursor of modern computers, even though at first blush it appears to have little in common with them.

The idea of punched cards was later adopted by Herman Hollerith to produce an electromechanical tabulator able to process data quickly. This device was used for the 1890 census in the United States, demonstrating major improvements over manual tabulation. The company founded by Hollerith, the Tabulating Machine Company, later became IBM.

 

 

The relay

 

 

Another important thread intertwined in the history of modern computers is associated with automatic telephone exchanges based on relay switches. The relay, invented in 1835 by Joseph Henry, was used as an electrically operated switch. Between 1935 and 1937, Victor Shestakov in Russia and Claude Shannon at Bell Labs in the United States independently discovered that Boolean logic, the binary logic introduced by George Boole in 1847, was the perfect mathematical formalism for describing switching systems and relay-based machines.
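
The correspondence they exploited is easy to see today: relay contacts in series conduct only when both are closed (the logical AND), contacts in parallel conduct when either is closed (OR), and a normally closed contact inverts its control signal (NOT). A minimal sketch in Python, purely as a modern illustration:

```python
# Modeling relay circuits with Boolean logic (modern illustration, not
# period notation). A closed contact is True, an open contact is False.

def series(a: bool, b: bool) -> bool:
    return a and b    # current flows only if both contacts are closed: AND

def parallel(a: bool, b: bool) -> bool:
    return a or b     # current flows if at least one contact is closed: OR

def normally_closed(a: bool) -> bool:
    return not a      # contact opens when its relay coil is energized: NOT

# Truth table for the series circuit, matching the Boolean AND:
for a in (False, True):
    for b in (False, True):
        print(f"{a!s:5} {b!s:5} -> {series(a, b)}")
```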

 

 

The Turing machine

 

 

In an unrelated development, Alan Turing in England invented the Turing machine in 1936 as part of a thought experiment that allowed him to answer Hilbert’s decision problem in the negative, a problem famously posed by the mathematician David Hilbert in 1928. The universal Turing machine provided an abstract model of a class of universal machines capable of executing any general algorithm, thus giving rise to information science, another fundamental branch of the information revolution.
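
To make the abstraction concrete, here is a minimal Turing-style machine sketched in Python (an illustration of the concept, not Turing's own 1936 formulation). A table of rules tells the machine, for each state and tape symbol, what to write, which way to move the head, and which state to enter next; the example machine adds 1 to a binary number:

```python
# A minimal Turing-machine sketch (illustrative, not Turing's 1936 notation).
# Rules map (state, symbol) -> (symbol to write, head move, next state).

def run(rules, tape, state="carry"):
    cells = dict(enumerate(tape))   # sparse tape; unwritten cells read as blank
    head = len(tape) - 1            # start at the least significant bit
    while state != "halt":
        write, move, state = rules[(state, cells.get(head, " "))]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip()

# Example machine: increment a binary number by propagating a carry leftward.
increment = {
    ("carry", "1"): ("0", "L", "carry"),  # 1 plus carry gives 0, keep carrying
    ("carry", "0"): ("1", "L", "halt"),   # 0 plus carry gives 1, done
    ("carry", " "): ("1", "L", "halt"),   # carry ran past the leftmost digit
}

print(run(increment, "1011"))  # prints 1100, i.e., 11 + 1 = 12
```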

 

 

The first electromechanical programmable computer: Zuse’s Z3

 

 

All these strands were combined in 1941 with the design and construction of the first fully functioning Turing-complete electromechanical computer: the Z3, designed and built by Konrad Zuse in Germany. This machine used a binary floating-point architecture based on 22-bit words and a CPU made with about 2,300 relays—as many relays as there were transistors in the 4004!

The program was ingeniously stored on 35 mm punched film, the same film stock then used in movie cameras, naturally without the emulsion. The clock frequency was about 5 Hz. To put that in perspective, today's microchips can have clock frequencies exceeding 5 GHz, one billion times faster.

The Z3 brought us to the threshold of the electronic computer era. That threshold was crossed in 1943 with a secret project funded by the US Army to develop a computer that could quickly calculate ballistic trajectories. The key idea was to replace the relay with a vacuum tube that, acting as a switch, could turn electric current on and off 10,000 to 100,000 times faster than a relay. This was not obvious in 1943, because vacuum tubes were generally used as signal amplifiers, not as switches.

 

 

ENIAC, the first electronic computer

 

 

The result was ENIAC, the first fully functioning electronic computer, designed and built by John Mauchly and J. Presper Eckert and completed in 1946. ENIAC had a 200-microsecond instruction cycle, with the program set up through mechanical plugs and switches, a rudimentary and laborious method. It employed 17,468 vacuum tubes, occupied an area of 167 m², dissipated 150 kW of power, and weighed 30 tons. The average time between failures was a few hours, owing to the poor reliability of the vacuum tubes. ENIAC was nevertheless much faster than the Z3, although conceptually inferior, because it could not yet store a program.

The first electronic computer to have all the essential characteristics of modern computers was the EDSAC, the first practical stored-program computer. It was developed at the University of Cambridge by Maurice Wilkes, building on the stored-program concept described by the famous mathematician John von Neumann, to whom credit is given for the idea of using the same memory to store both data and programs. The EDSAC was completed in 1949; it had a serial memory of 1,024 words of 17 bits each, implemented with mercury delay lines. Random-access memory (RAM) based on magnetic cores had not yet been invented.

 

 

UNIVAC I, the first commercial electronic computer

 

 

All the first computers were research machines of which only a single exemplar was built. It took a few more years before the introduction of the first commercial electronic computer, the UNIVAC I, in 1951. The UNIVAC I was a stored-program computer with a serial main memory of 1,000 words of 12 characters each, and it was the first computer to use magnetic tape as secondary memory to increase the overall storage.

The UNIVAC I used 5,200 vacuum tubes and dissipated 125 kW. It could perform 500 multiplications per second. At a cost of more than one million dollars per unit, 46 units were sold, demonstrating for the first time the existence of a market for computers. Twenty years later, a computer made with the Intel 4004 microprocessor offered similar performance to the UNIVAC I on a single 25 × 25 cm printed circuit board, dissipating 10 W and costing a few hundred dollars. Ten years after that, the same single-board computer could be integrated into a single silicon chip that dissipated less than 1 W, cost less than $10, and ran more than ten times faster than the UNIVAC I.

All early computers used vacuum tubes until 1957, when the Philco Transac S-2000, the first commercial transistorized computer, was introduced. Two years later, the transistorized IBM 7090 and Olivetti Elea 9003 followed. From that point on, all new computer models used transistors, which drastically reduced the physical dimensions and power dissipation of computers while improving their speed and, above all, their reliability.

 

 

The minicomputer

 

 

In the 1960s, thanks to the progress of microelectronics (the move from germanium to faster silicon transistors and the commercialization of integrated circuits), computers evolved rapidly, with powerful supercomputers at the high end and minicomputers at the low end. For example, in 1963 the SAGE system became fully operational. Designed by IBM in collaboration with the US Air Force to coordinate data from radar stations across North America through its 24 direction centers, SAGE was the largest computer system ever built (floor area: 2,000 m²; weight: 275 tons; power consumption: 3 MW). It was also the world's first real-time computer network.

In 1964 IBM introduced the System/360, a large family of compatible and scalable computers capable of covering a wide range of applications. It was equipped with one of the first sophisticated operating systems and enjoyed great market success. In the same year, Control Data Corporation introduced the CDC 6600, the world's first supercomputer. The CDC 6600 was designed by Seymour Cray, who later founded Cray Research, the leading supercomputer company for many years. It cost more than $8 million and was ten times faster than its nearest rival.

In 1965, the use of integrated circuits made it possible to shrink a computer to the size of a small piece of furniture, giving rise to the minicomputer. The minicomputer was a smaller version of a mainframe, intended for applications where mainframes could not be used on account of their bulk and cost.

Introduced by Digital Equipment Corporation (DEC) with the PDP-8 model, minicomputers opened new application areas for computers, especially in control and telecommunications systems, further expanding the reach and impact of computers.

 

 

The versatility of computers

 

 

The computer is a universal symbol manipulator, and its versatility is due to its programmability. The hardware alone is therefore necessary but not sufficient to solve any specific problem. The other essential ingredient is the software, the program that makes the hardware-software combination capable of performing the desired function. With the availability of increasingly powerful computers, i.e., computers with more memory and higher execution speed, new application areas became possible. And as computers became smaller, faster, less power-hungry, and less expensive, the number of applications increased exponentially. In parallel with hardware development, the creation of software gave rise to an independent software industry.

Among the early computer applications, some appeared incongruous with what was expected of a computer, prompting reactions like: "But what does voice recognition have to do with computers? Computers should only calculate." Computers can indeed solve many problems that at first sight appear outside their domain. The art of programming rests on the ability to break a problem or a process down into algorithms, i.e., into sequences of automatic procedures that a computer can perform efficiently, as the sketch below illustrates.
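
As a toy illustration of this decomposition (the task, the four-step split, and the function names are all chosen here only to make the point), consider the problem of finding the most frequent words in a text, broken into a sequence of small procedures:

```python
# Decomposing a problem into a sequence of procedures (toy illustration):
# find the most frequent words in a text.

import re
from collections import Counter

def normalize(text: str) -> str:
    return text.lower()                 # step 1: remove case distinctions

def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z]+", text)  # step 2: split the text into words

def count(words: list[str]) -> Counter:
    return Counter(words)               # step 3: tally word occurrences

def top(frequencies: Counter, n: int = 3) -> list[tuple[str, int]]:
    return frequencies.most_common(n)   # step 4: rank the results

text = "The cat saw the dog, and the dog saw the cat."
print(top(count(tokenize(normalize(text)))))
# [('the', 4), ('cat', 2), ('saw', 2)]
```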

The importance of programming has increased over time: while the cost of hardware has steadily decreased, the cost of software has steadily increased, to the point of becoming the dominant cost of information processing today. The software is like the mind and the hardware is like the body. Both are needed, and they must work in perfect collaboration to solve any problem.

In recent years it has finally become possible to create computers that learn almost by themselves, instead of being programmed with explicit algorithms, with the help of artificial neural networks, structures that imitate the essential information processing performed by the biological neural networks in our nervous system. Neural networks find the implicit rules (statistical correlations) contained in their training data through a bottom-up learning process, achieving far better results in pattern recognition than had been possible with traditional top-down programming.
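
A minimal sketch of this bottom-up process, using nothing beyond plain Python: a single artificial neuron that learns the logical AND rule from labeled examples instead of having the rule programmed in. Real networks use many such neurons arranged in layers, but the principle of nudging weights to reduce the error on examples is the same.

```python
# Bottom-up learning in miniature: a single neuron (perceptron) discovers
# the logical AND rule from examples instead of being given it explicitly.

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1 = w2 = bias = 0.0   # the "implicit rule" lives in these numbers
rate = 0.1             # learning rate: how strongly each error nudges them

for epoch in range(20):                  # sweep the training data repeatedly
    for (x1, x2), target in examples:
        output = 1 if w1 * x1 + w2 * x2 + bias > 0 else 0
        error = target - output          # compare prediction with the example
        w1 += rate * error * x1          # nudge the weights toward the rule
        w2 += rate * error * x2
        bias += rate * error

for (x1, x2), target in examples:
    prediction = 1 if w1 * x1 + w2 * x2 + bias > 0 else 0
    print(f"{x1} AND {x2} -> {prediction} (expected {target})")
```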

The information revolution that is transforming society has brought to the fore the profound and completely unsuspected relationship between the nature of information and the nature of reality. This is a particularly fascinating topic that was unknown to physicists and philosophers until the 1950s, a subject that will profoundly change our understanding of the nature of reality, the nature of life, and the nature of consciousness.

 

 

The mother of the computer

 

While there are many fathers of the computer, there is only one certain mother: Ada Byron Lovelace (1815-1852). Long before the computer was invented, she was the first to envision the modern computer. In her notes to Luigi Federico Menabrea's "Notions sur la machine analytique de Charles Babbage," which she translated into English, Lovelace explains her idea of a "mechanism capable of combining general symbols, in unlimited sequences for variety and extension." In the notes she also presented an algorithm for Babbage's Analytical Engine to generate Bernoulli numbers, the first example of computer programming in history.
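
To give a flavor of the calculation her program performed, here is a modern Python sketch that generates Bernoulli numbers from the standard recurrence. It illustrates the mathematics only; Lovelace's Note G expressed the computation as a hand-optimized sequence of Analytical Engine operations, not as code like this.

```python
# Generating Bernoulli numbers (modern illustration of the calculation in
# Lovelace's Note G, not a transcription of her Analytical Engine program).
# Uses the recurrence sum_{k=0}^{m} C(m+1, k) * B_k = 0 for m >= 1.

from fractions import Fraction
from math import comb

def bernoulli(n: int) -> list[Fraction]:
    b = [Fraction(1)]                    # B_0 = 1
    for m in range(1, n + 1):
        s = sum(comb(m + 1, k) * b[k] for k in range(m))
        b.append(-s / (m + 1))           # solve the recurrence for B_m
    return b

for i, value in enumerate(bernoulli(8)):
    print(f"B_{i} = {value}")
# B_1 = -1/2, B_2 = 1/6, B_4 = -1/30, ... (B_n = 0 for odd n >= 3)
```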

Lovelace also anticipated the ability of machines to learn and to develop increasingly complex solutions, but she firmly rejected the possibility of creating conscious machines. It was quite clear to her that "the Analytical Engine has no pretensions whatever to originate anything."