CS 473 - Introduction
History of RISC and CISC
The ``semantic gap'' drove computer architecture for a long time. The
idea was that a gap in level of abstraction exists between the various
levels of a computer system (e.g., raw hardware, programmer-visible
instruction set, high-level language), which is obviously true, and
that this gap is a bad thing, which seemed obviously true as well at
the time.
- 1970s (CISC): make the instruction set as high-level as possible.
- Examples
- Basic Notions
- Complex instructions mimicking high level language statements
- Complex addressing modes mimicking high level language operations
- Variable length instructions reducing code size
- Advantages
- Denser code
- Fewer instructions required to execute program => faster
- Disadvantages
- Hard to write compilers to take advantage of features (surprise!)
- Hardware complexity => bugs
- Hardware complexity => longer data paths, slower
- Hardware complexity => slower design cycle
- 1980s (RISC): make the instruction set as low-level as possible, exposing the hardware
- Basic Notions
- Simple instructions designed so stereotyped sequences could
mimic CISC instructions
    - Limited addressing modes, again permitting instruction
      sequences to mimic those of CISC machines
- Same-size instructions facilitating pipelining
- Many registers, putting most local variables in registers
instead of memory.
- Load-Store architecture
- Examples
- Advantages
- Easier to write compilers
- Reduced hardware complexity => fewer bugs, faster development, shorter
data paths
- Disadvantages
- Less dense code
- More instructions => program runs slower
- Ties instruction set to particular implementation
- 1990s: Try to find a middle road
- RISC-Like Simplicity - load/store, fixed-size instructions,
restricted addressing modes
- CISC-Like Specialized Instructions (but not bizarre
instructions)
- Attempt at clean HW interface with little ``hidden state''
- Examples
- IBM notion of ``Reduced Instruction Set Cycles''
Question: how long does it take to run a program?
Time = (Clock Cycles)/(Clock Rate)
Clock Cycles = (Number of Instructions) * (Cycles Per Instruction)
- Current: Explicit Support for Instruction-Level Parallelism (Itanium)
So: both of the earlier extremes were fallacies. CISC attempts to
optimize the Number of Instructions at the expense of CPI and clock
rate; RISC attempts to optimize clock rate at the expense of the
Number of Instructions. More current approaches attempt to optimize
time itself, and are willing to trade off all the other parameters to
do it.
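A minimal worked sketch of this equation in C, using made-up
parameters for a hypothetical CISC machine (fewer instructions, higher
CPI, slower clock) and a hypothetical RISC machine (more instructions,
lower CPI, faster clock); the numbers are illustrative only, not
measurements of any real hardware.

    #include <stdio.h>

    /* Time = (Number of Instructions) * (Cycles Per Instruction) / (Clock Rate) */
    static double exec_time(double instructions, double cpi, double clock_hz)
    {
        return instructions * cpi / clock_hz;
    }

    int main(void)
    {
        /* Hypothetical CISC: 50M instructions, CPI 6, 50 MHz clock => 6.0 s  */
        double cisc = exec_time(50e6, 6.0, 50e6);
        /* Hypothetical RISC: 80M instructions, CPI 1.5, 100 MHz clock => 1.2 s */
        double risc = exec_time(80e6, 1.5, 100e6);
        printf("CISC: %.2f s   RISC: %.2f s\n", cisc, risc);
        return 0;
    }

The point is not the particular numbers but that only the product
matters: improving one term while hurting the others can still lose.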
Note on DEC's view of CPI: when the Alpha was introduced, they
considered the VAX's lifespan (~25 years) and saw a factor-of-1000
improvement over that time. Assumption: there will be another factor
of 1000 in performance over the next 25 years. How?
- Clock rate: only see a factor of ten there
- Multiprocessors: see another factor of ten there
- Instruction-level parallelism: the last factor of ten (10 * 10 * 10 = 1000).
  They want to execute 10 instructions/cycle!