CPU stands for Central Processing Unit. The term “central processing unit” is, broadly speaking, a description of a certain class of logic machines that can execute complex computer programs. This broad definition can easily be applied to many of the early computers that existed long before the term “CPU” came into wide use. The term itself and its acronym have been in use in the computer industry since at least the early 1960s. The shape, design, and implementation of CPUs have changed dramatically since the earliest examples, but their fundamental operation has remained fairly similar.
The first CPUs were custom designed as part of a larger, usually one-of-a-kind, computer. However, this costly method of customizing CPUs for a particular application has largely given way to the development of cheap, standardized processor classes suited to one or many purposes. This trend toward standardization generally began in the era of discrete transistor mainframes and minicomputers, and accelerated rapidly with the popularization of the integrated circuit (IC), which has allowed ever more complex CPUs to be designed and manufactured in very small spaces (on the order of millimeters). Both the miniaturization and standardization of CPUs have increased the presence of these digital devices in modern life far beyond the limited applications of dedicated computing machines. Modern microprocessors appear in everything from automobiles, televisions, refrigerators, and calculators to airplanes, cell phones, and toys.
Almost all CPUs deal with discrete states, and therefore require some class of switching elements to differentiate between and change those states. Before the commercial acceptance of the transistor, electrical relays and vacuum tubes (thermionic valves) were commonly used as switching elements. Although these had distinct speed advantages over earlier, purely mechanical designs, they were unreliable for a number of reasons. For example, building direct-current sequential logic circuits out of relays required additional hardware to cope with the problem of contact bounce.
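The contact-bounce problem can be made concrete with a small simulation. This is a hypothetical software sketch of the same idea the extra relay hardware implemented: a new contact state is accepted only after it has stayed stable for several consecutive samples, so the brief chatter of a closing contact does not register as multiple state transitions.

```python
def debounce(samples, stable_count=3):
    """Return the debounced state for each raw contact sample.

    A candidate state becomes the accepted state only after it has
    been observed `stable_count` times in a row.
    """
    state = samples[0]          # last accepted (debounced) state
    candidate, run = state, 0   # state currently being observed, and its streak
    out = []
    for s in samples:
        if s == candidate:
            run += 1
        else:
            candidate, run = s, 1
        if run >= stable_count:
            state = candidate   # stable long enough: accept it
        out.append(state)
    return out

# A relay contact "bouncing" as it closes: 0 -> 1 with spurious flips.
raw = [0, 0, 1, 0, 1, 1, 1, 1]
print(debounce(raw))   # [0, 0, 0, 0, 0, 0, 1, 1] - a single clean transition
```

Hardware debouncers achieve the same effect with latches or RC timing circuits; the sampling loop here is only an illustration of the logic.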
On the other hand, while vacuum tubes do not suffer from contact bounce, they must heat up before becoming fully operational, and they eventually fail and stop working altogether. Generally, when a tube failed, the CPU had to be diagnosed to locate the faulty component so it could be replaced. Therefore, the first electronic computers (based on vacuum tubes) were generally faster but less reliable than electromechanical computers (based on relays).
Tube computers like the EDVAC tended to average eight hours between failures, while relay computers (older and slower), such as the Harvard Mark I, failed very rarely. In the end, tube-based CPUs became dominant because their significant speed advantages generally outweighed the reliability issues. Most of these early synchronous CPUs ran at low clock rates compared to modern microelectronic designs (see below for a discussion of clock rates). Clock signal frequencies ranging from 100 kHz to 4 MHz were very common at the time, largely limited by the speed of the switching devices with which they were built.
Discrete Transistor and IC CPUs
The complexity of CPU design increased as various technologies made it easier to build smaller, more reliable electronic devices. The first of these improvements came with the advent of the transistor. Transistorized CPUs of the 1950s and 1960s no longer had to be built with bulky, unreliable, and fragile switching elements like vacuum tubes and electrical relays. With this improvement, more complex and more reliable CPUs were built on one or more printed circuit boards containing discrete (individual) components.
During this period, a method of manufacturing many interconnected transistors in a compact space gained popularity. The integrated circuit (IC) allowed a large number of transistors to be manufactured on a single semiconductor-based wafer, or “chip.” At first, only very basic, unspecialized digital circuits, such as NOR gates, were miniaturized into ICs.
CPUs based on these “building block” ICs are generally referred to as “small-scale integration” (SSI) devices. SSI ICs, such as those used in the Apollo Guidance Computer, usually contained transistor counts numbering in multiples of ten.
Building a complete CPU out of SSI ICs required thousands of individual chips, but still consumed much less space and power than earlier discrete transistor designs. As microelectronic technology advanced, an increasing number of transistors were placed on each IC, decreasing the number of individual ICs needed for a complete CPU. MSI and LSI (medium- and large-scale integration) ICs increased transistor counts to hundreds, and then thousands.
In 1964, IBM introduced its System/360 computer architecture, which was used in a series of computers that could run the same programs at different speeds and performance levels. This was significant at a time when most electronic computers were incompatible with one another, even those made by the same manufacturer. To facilitate this improvement, IBM used the concept of the microprogram, often called “microcode,” which still sees widespread use in modern CPUs.
The System/360 architecture was so popular that it dominated the mainframe market for the next several decades and left a legacy that is still carried on by similar modern computers like the IBM zSeries. In the same year, 1964, Digital Equipment Corporation (DEC) introduced another influential computer aimed at the scientific and research markets: the PDP-8. DEC would later introduce the extremely popular PDP-11 line, which was originally built with SSI ICs but was eventually implemented with LSI components once these became practical. In stark contrast to its SSI and MSI predecessors, the first LSI implementation of the PDP-11 contained a CPU composed of only four LSI ICs.
Transistor-based computers had several distinct advantages over their predecessors. Aside from facilitating increased reliability and lower power consumption, transistors also allowed the CPU to operate at much higher speeds because of the short switching time of a transistor compared to a tube or relay. Thanks to both the increased reliability and the dramatically increased speed of the switching elements, which by this time were almost exclusively transistors, CPU clock frequencies of tens of megahertz were achieved. In addition, while discrete transistor and IC CPUs were in heavy use, new high-performance designs such as SIMD (single instruction, multiple data) vector processors began to appear. These early experimental designs later gave rise to the era of specialized supercomputers, such as those made by Cray Inc.
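The SIMD idea, one instruction applied to many data elements at once, can be illustrated with a small, hypothetical sketch in plain Python. Real vector processors do this in hardware across the lanes of a vector register; here the contrast with the scalar (one instruction, one datum) style is only conceptual.

```python
def simd_add(vec_a, vec_b):
    """One conceptual ADD instruction applied across all lanes at once."""
    assert len(vec_a) == len(vec_b)
    return [a + b for a, b in zip(vec_a, vec_b)]

def scalar_add(xs, ys):
    """Scalar (SISD) style: one instruction issued per data element."""
    out = []
    for x, y in zip(xs, ys):   # each iteration models one separate instruction
        out.append(x + y)
    return out

a = [1.0, 2.0, 3.0, 4.0]      # a 4-lane "vector register"
b = [10.0, 20.0, 30.0, 40.0]
print(simd_add(a, b))          # [11.0, 22.0, 33.0, 44.0]
```

The hardware payoff is that the vector form issues and decodes one instruction where the scalar loop issues four.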
Since the introduction of the first microprocessor, the Intel 4004, in 1971, and of the first widely used microprocessor, the Intel 8080, in 1974, this class of CPUs has almost completely displaced all other CPU implementation methods. The mainframe and minicomputer manufacturers of the time launched proprietary IC development programs to upgrade their older computer architectures, and eventually produced microprocessors with instruction sets that were backward compatible with their older hardware and software. Combined with the advent and eventual vast success of the now ubiquitous personal computer, the term “CPU” is now applied almost exclusively to microprocessors.
Previous generations of CPUs were implemented as discrete components and numerous small-scale ICs on one or more circuit boards. Microprocessors, by contrast, are CPUs manufactured on a very small number of ICs, usually just one. The smaller overall size of the CPU, a result of being implemented on a single die, means faster switching times because of physical factors such as the reduction of gate parasitic capacitance. This has allowed synchronous microprocessors to have clock rates ranging from tens of megahertz to several gigahertz. Additionally, as the ability to construct exceedingly small transistors on an IC has increased, the complexity and number of transistors in a single CPU have grown dramatically. This widely observed trend is described by Moore’s law, which has proven to be a fairly accurate predictor of the growth in complexity of CPUs and other ICs.
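Moore’s law is often paraphrased as transistor counts doubling roughly every two years. A rough sketch of that exponential, anchored, as an illustrative assumption, to the Intel 4004’s approximately 2,300 transistors in 1971:

```python
def moores_law_estimate(year, base_year=1971, base_count=2300, doubling_years=2):
    """Estimate transistor count for `year`, doubling every `doubling_years`."""
    doublings = (year - base_year) / doubling_years
    return base_count * 2 ** doublings

for year in (1971, 1981, 1991, 2001):
    print(year, round(moores_law_estimate(year)))
# 1971 -> 2300; 1981 -> 73600; 1991 -> 2355200; 2001 -> 75366400
```

These rough figures track the historical trend within an order of magnitude; the law is an empirical observation about manufacturing economics, not a physical guarantee.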
While the complexity, size, construction, and general shape of the CPU have changed dramatically in the past sixty years, it is remarkable that the design and basic operation have not changed much. Almost all common CPUs today can be accurately described as von Neumann stored program machines.
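The stored-program idea can be sketched with a toy machine (a hypothetical minimal instruction set, not any real ISA): instructions and data share a single memory, and the CPU repeatedly fetches the instruction at the program counter, decodes it, and executes it.

```python
def run(memory):
    """Execute a toy stored program held in `memory` until HALT."""
    acc = 0                 # single accumulator register
    pc = 0                  # program counter
    while True:
        opcode, operand = memory[pc]    # fetch
        pc += 1
        if opcode == "LOAD":            # decode + execute
            acc = memory[operand]
        elif opcode == "ADD":
            acc += memory[operand]
        elif opcode == "STORE":
            memory[operand] = acc
        elif opcode == "HALT":
            return memory

# Program and data share one memory: the defining von Neumann trait.
memory = [
    ("LOAD", 5),     # 0: acc = mem[5]
    ("ADD", 6),      # 1: acc += mem[6]
    ("STORE", 7),    # 2: mem[7] = acc
    ("HALT", None),  # 3: stop
    None,            # 4: unused
    2,               # 5: data
    3,               # 6: data
    0,               # 7: result goes here
]
print(run(memory)[7])   # prints 5
```

Because programs live in the same writable memory as data, the same machine can run any program simply by loading different memory contents, which is exactly what makes it a stored-program computer.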
As the aforementioned Moore’s law continues to hold true, concerns have arisen about the limits of integrated circuit transistor technology. Extreme miniaturization of electronic gates is causing the effects of phenomena such as electromigration and subthreshold leakage to become much more significant. These newer concerns are among the many factors causing researchers to investigate new methods of computing, such as the quantum computer, as well as to expand the use of parallelism and other methods that extend the usefulness of the classical von Neumann model.