SOC Design

Wednesday, December 14, 2005

SOC Design: Just what do you optimize?

A recent article and an unrelated analyst presentation give excellent advice to SOC designers and managers contemplating the plunge below 100nm. The rules of system design change below this lithography threshold, and the article and the presentation provide some partial roadmaps to success.

EE Times' EDA editor Richard Goering wrote a recent column on Design for Inefficiency that questions how SOC design teams trade off transistor budgets for time to market. Sound like heresy? I remind you, oh gentle reader, that precisely the same discussions about using C for embedded systems software were occurring 20 years ago. If you haven't heard, the relatively inefficient C language won out over efficient assembly code precisely because of time-to-market issues. Most of today's systems would never get to market if they were solely or even largely based on software written in assembly language.

Last week at Gartner's Semiconductor Industry Briefing, held at the Doubletree in San Jose, Research VP and Chief Analyst Bryan Lewis discussed "second-generation SOCs" in his presentation titled Charting the Course for Second-Generation SOC Devices, in which he described second-generation SOCs as high-gate-count devices using mixed process technologies, multiple processors, and multiple software layers. In Lewis' vision of a second-generation SOC, the multifunctional chip is built with multiple processor cores, each driving its own subsystem with its own operating system and application firmware. This design approach is unlike today's most common one of loading up one main processor with as many tasks as possible, and then some.

Lewis' second-generation vision encompasses a divide-and-conquer approach to complex system design, and it closely relates to Goering's theme of asking, "Just what do you optimize?" The more you burden one processor with an increasing number of tasks, the more complex the software gets and the faster the processor must run. The result: exponentially increasing software complexity (think lost time to market and bug-riddled code) and exponentially increasing power dissipation and energy consumption (think less battery life or more expensive power supplies; noisy, expensive, and relatively unreliable cooling fans; and larger, more costly product enclosures).

Once again, the question of the decade is: "What do you optimize?" Do you optimize transistor count to absolutely minimize chip cost while greatly increasing design time and cost, possibly missing market windows in the process? Or do you spend truly cheap transistors to buy back some of that time?

I think the answer's pretty clear: 90nm and 65nm transistors are cheap and engineering time is expensive. Lost time to market is virtually priceless. What do you think?
