SOC Design

Monday, July 11, 2005

Rumors of RISC's demise somewhat premature

This week in EE Times, Ron Wilson writes about a keynote given by MIPS CEO John Bourgoin lamenting reports of the demise of RISC processors (MIPS' main product). When RISC's one-instruction-per-clock concept originated at IBM in the 1980s, processor clock cycles and memory access times were at near parity, so the RISC concept made sense for minicomputer and mainframe processor design. RISC processor design eschews complex, multi-clock processor instructions and essentially replaces microcode ROM with single-cycle instruction RAM caches and optimizing compilers.

Even the most successful CISC instruction set ever invented, that of the 8086 microprocessor and its descendants, became "RISC-ified" in the 1990s. Inside modern x86 processors, an instruction chopper/shredder (see the end of the movie "Galaxy Quest" for a vivid visualization of this device) finely slices the CISC x86 instructions into simpler operations that are then distributed to one or more RISC engines hidden deep inside the machine.
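The chopper/shredder idea can be sketched in a few lines. This is a toy illustration, not a real x86 decoder; the instruction tuples, the `tmp0` scratch register, and the `crack` function are all hypothetical, but they show the basic move: a register-memory CISC instruction becomes a load micro-op followed by a register-to-register RISC micro-op.

```python
# Toy sketch (NOT a real x86 decoder): cracking a CISC-style
# register-memory instruction into simpler load/store-style micro-ops.
# Instruction format and names here are hypothetical.

def crack(instruction):
    """Split a hypothetical CISC instruction into RISC-like micro-ops."""
    op, dst, src = instruction
    micro_ops = []
    if src.startswith("["):                # memory operand: needs a load first
        micro_ops.append(("load", "tmp0", src))
        src = "tmp0"
    micro_ops.append((op, dst, dst, src))  # register-to-register ALU micro-op
    return micro_ops

# A CISC-style "add eax, [0x1000]" becomes a load plus a register add:
print(crack(("add", "eax", "[0x1000]")))
# -> [('load', 'tmp0', '[0x1000]'), ('add', 'eax', 'eax', 'tmp0')]
```

The RISC engines downstream only ever see the simple micro-ops, which is why they can stay pipelined at one operation per clock.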

Today's problem with the RISC concept, which Wilson addresses, is that memory access time is now much slower than processor clock cycle times, at least for DRAMs. The result is a heavy reliance on increasingly large SRAM caches that barely keep up with processor clock rates.

However, this situation is not strictly the fault of RISC's pipelined one-instruction/clock approach. The problem is caused by another RISC trait, the reduction in the number and complexity of instructions to a basic set of fewer than 100 instructions. Compilers can indeed create instruction streams that perform complex tasks from these simple instructions, but it takes a lot of instructions to do so. If complex programs require many instructions to function, then they need larger caches and higher clock rates to meet performance goals. Larger caches and higher clock rates ultimately increase product cost.

Enter post-RISC configurable processors. With such processors, design teams can add specialized, task-specific instructions to the processor that function like CISC instructions (by doing complex things) but adhere to RISC's pipelined, one-instruction/clock approach. These processors work well as deeply embedded task engines inside of SOCs where task specificity is easy to define and appropriate to use. In such applications, programs are typically small and do not require large caches. In addition, specialized instructions reduce the number of instructions needed to perform the target tasks, which relieves the pressure to constantly boost clock rate.
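The trade-off above can be sketched with a toy example. Suppose a byte-swap operation is hot in the target task: on a plain RISC core it takes roughly a dozen shift/mask/OR instructions, while a configurable processor could collapse it into one task-specific instruction. The function names and the "custom instruction" here are hypothetical; Python's `int.from_bytes` simply stands in for single-cycle fused hardware.

```python
# Toy sketch: a 32-bit byte swap as a sequence of generic RISC-style
# operations versus one task-specific instruction. Names are hypothetical.

MASK32 = 0xFFFFFFFF

def bswap_generic(x):
    """Byte swap built from basic shifts, masks, and ORs (~11 simple ops)."""
    b0 = (x >> 24) & 0xFF        # shift, mask
    b1 = (x >> 8) & 0xFF00       # shift, mask
    b2 = (x << 8) & 0xFF0000     # shift, mask
    b3 = (x << 24) & MASK32      # shift, mask
    return b0 | b1 | b2 | b3     # three ORs

def bswap_custom(x):
    """The same task as if it were one fused instruction on a configurable core."""
    return int.from_bytes((x & MASK32).to_bytes(4, "big"), "little")

assert bswap_generic(0x12345678) == bswap_custom(0x12345678) == 0x78563412
```

One instruction instead of eleven means a smaller instruction stream for the same work, which is exactly what shrinks the cache and clock-rate pressure in a deeply embedded task engine.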

In short, RISC isn't dead; like the dinosaurs, it has evolved.

1 Comment:

  • I'd like to see Bourgoin's original keynote. Does he really think the problem with RISC is that it cannot handle memory latency? Load/store architectures deal with memory latency better than the memory-to-memory instructions of previous processors.

    By Blogger Nuth, at 10:53 AM  
