Instruction-Level Parallel Processing


Science  13 Sep 1991:
Vol. 253, Issue 5025, pp. 1233-1241
DOI: 10.1126/science.253.5025.1233


The performance of microprocessors has increased steadily over the past 20 years at a rate of about 50% per year. This is the cumulative result of architectural improvements as well as increases in circuit speed. Moreover, this improvement has been obtained in a transparent fashion, that is, without requiring programmers to rethink their algorithms and programs, thereby enabling the tremendous proliferation of computers that we see today. To continue this performance growth, microprocessor designers have incorporated instruction-level parallelism (ILP) into new designs. ILP utilizes the parallel execution of the lowest-level computer operations—adds, multiplies, loads, and so on—to increase performance transparently. The use of ILP promises to make possible, within the next few years, microprocessors whose performance is many times that of a CRAY-1S. This article provides an overview of ILP, with an emphasis on ILP architectures—superscalar, VLIW, and dataflow processors—and the compiler techniques necessary to make ILP work well.