PC buyers usually rely on the clock speed (megahertz) of a PC’s microprocessor to guide their purchasing decision. Because the industry lacks a simple, universally accepted way to judge performance, users have become conditioned to using clock speed as a proxy for how fast their applications will run.
This practice has grown common over many years because:
• The popularization of the PC among general consumers has increased the available pool of buyers unfamiliar with factors in PC performance.
• The growth of the direct model of PC purchases has made it more likely that the actual end user will buy a PC for himself or herself without the help of a third party familiar with factors that influence PC performance.
• The increasing sophistication of the PC exposes the buyer to a growing number of often arcane technical specifications, from which clock speed promises a convenient escape.
The clock speed of PC processors has surpassed 3,000MHz, 600 times the 5MHz of the first PC processors. Despite that advance in clock speed, applications do not run 600 times faster. The fact that PC performance does not scale directly with clock speed indicates that clock speed does not tell PC buyers everything they need to know to gauge PC performance.
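The point that clock speed alone does not determine performance can be made concrete with the textbook relation performance ≈ instructions per cycle (IPC) × clock frequency. A minimal sketch in Python (all IPC and clock figures are illustrative assumptions, not measurements of any real processor):

```python
# Sketch of the classic processor performance relation:
#   performance ~ IPC (instructions per cycle) * clock frequency
# The IPC and clock figures below are illustrative assumptions only.

def exec_time(instructions, ipc, freq_mhz):
    """Seconds to execute a workload at a given IPC and clock (in MHz)."""
    return instructions / (ipc * freq_mhz * 1e6)

workload = 1e9  # a hypothetical one-billion-instruction workload

# Processor A: lower clock, but more work done per cycle.
time_a = exec_time(workload, ipc=1.8, freq_mhz=2000)
# Processor B: higher clock, but less work done per cycle.
time_b = exec_time(workload, ipc=1.0, freq_mhz=3000)

print(f"A: {time_a:.3f}s  B: {time_b:.3f}s")
if time_a < time_b:
    print("A finishes first despite a 33% lower clock speed")
```

Under these assumed figures, the 2,000MHz processor finishes the workload sooner than the 3,000MHz one, which is exactly the kind of outcome a buyer comparing clock speeds alone would not anticipate.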
IDC forecasts that the 135.5 million PCs sold in 2002 will represent an approximately $172 billion industry; an unreliable performance measure influencing such a staggering amount of revenue is a significant problem for both PC buyers and the PC industry.
At the start of the PC industry, PC buyers used clock speed to gauge performance because different PC processors were on par in efficiency. IBM chose x86 processors for the original IBM PC in part to ensure multiple sources of compatible components.
In ensuring x86 compatibility, processor vendors developed processors that were similar in their internal designs, or architectures. As a result, clock speed became the distinguishing characteristic of the PC processor.
The situation changed, however, in the middle and late 1980s as the
PC began to serve a greater variety of tasks and applications. More than word processors, PCs became drawing tools, entertainment appliances, and communication devices. They also began going outside the office and into homes, on the road, and into the back office where only mainframes and mini-computers used to reside. From the increasingly varied uses and locales evolved a greater number of user profiles, or usage models, with special requirements for PCs to fulfill.
Accordingly, processor designs evolved, but not merely by scaling the clock speed. In October 1985, for example, Intel introduced the
80386 processor, which doubled the number of transistors of the prior generation’s 80286 processor and introduced 32-bit computing to the PC. These advances were far more beneficial to PC performance than the jump from the 80286’s 12MHz to the 80386’s 16MHz.
Advances in subsequent designs made the processor’s job easier by integrating small memory caches that stored data closer to the processor core. Designers also improved efficiency by integrating other components, such as floating-point units, and introduced more efficient data-processing techniques, including adding more processing lines (pipelines) and reordering data and instructions so that critical tasks run first. Designers also found ways to make processors work better in the context of the entire system by introducing new instructions optimized for richer data processing (e.g., MMX and 3DNow! Professional for multimedia) and by giving the processor a faster front-side bus to the rest of the system.
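The benefit of the on-chip caches described above can be quantified with the standard average memory access time (AMAT) formula, AMAT = hit time + miss rate × miss penalty. A minimal sketch, in which all cycle counts and the miss rate are illustrative assumptions rather than figures for any specific processor:

```python
# Average memory access time (AMAT), a standard way to quantify
# why a small on-chip cache speeds up a processor:
#   AMAT = hit_time + miss_rate * miss_penalty
# All cycle counts and the miss rate below are illustrative assumptions.

def amat(hit_time, miss_rate, miss_penalty):
    """Average cycles per memory access."""
    return hit_time + miss_rate * miss_penalty

# Without a cache, every access pays the full trip to main memory.
no_cache = amat(hit_time=0, miss_rate=1.0, miss_penalty=50)
# With a cache, 95% of accesses are served in 1 cycle close to the core.
with_cache = amat(hit_time=1, miss_rate=0.05, miss_penalty=50)

print(f"no cache:   {no_cache:.1f} cycles/access")
print(f"with cache: {with_cache:.1f} cycles/access")
```

Under these assumptions the cache cuts the average access from 50 cycles to 3.5, a gain that appears nowhere in the processor’s clock-speed rating.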
Due to the different requirements of specific form factors and segments, different processors from even the same vendor began to deliver varied performance at the same clock rate. The issue became more apparent when vendors like AMD moved to develop their own architectures that, while still compatible with the x86 instruction set, took different approaches to maximize performance.
These approaches included changes that impacted both the processor and the system (e.g., improved front-side buses).
Why haven’t we replaced clock speed yet?
IDC believes that, despite increasingly visible awareness within the industry of the inadequacies of clock speed, PC buyers continue to use it because no adequate industrywide metrics are available. They like clock speed because it is simple (the perception is that a higher number means better performance) and universally understood. For its part, the PC industry struggles against the sheer difficulty of replacing such an entrenched measurement. One attempt to supply a new measurement, the PR Rating introduced in 1996, failed because it never achieved widespread acceptance. It lost credibility with PC buyers because it was confusing and did not reflect individual usage models, and it lost credibility in the industry because each vendor assigned the rating itself, without third-party verification and without full disclosure about the details of the underlying tests. As a result, the PR Rating never had the weight of the entire industry behind it.
However, we also believe that new forces of change are emerging.
- Increasing recognition that performance does not scale directly with clock speed
- Mounting disparity in the underlying architectures and clock speeds of processors, both from the same vendor and from different vendors, that defies easy comparison
- Acknowledgement that other components, such as graphics processors and memory, could have as much impact on overall PC performance as processor speed
- Rise of other factors in the overall PC purchase decision, such as cost, features, upgradeability, portability, battery life, and connectivity
- Diminishing returns from scaling clock speed alone within a given architecture, compounded by rising system costs from measures to counteract transistor current leakage, energy consumption, and heat output
- Unequal valuation of processors whose performance is comparable to that of processors with higher clock speeds
In part two, IDC and AMD examine how processor vendors increasingly reflect the need for a more balanced approach in the way they convey performance.