If it’s broke, fix it
Bad news for SOC designers, as reported in EETimes last week in an article by David Lammers quoting Cadence's Senior VP of R&D Ted Vucurevich. While the bulk of the article covered the industry trend toward adopting 65nm design rules sooner rather than later (more on that topic in a later blog), the article's last paragraph discussed the dismal state of SOC design success today:
"The industry has been weighed down by relatively poor first-time design success rates, he said, quoting data from analysis firm Collett and Associates. In 2003, only about one-third of the 0.13-micron designs achieved first-time success. After the third iteration, only 60 percent of the designs worked, he said, attributing hard-to-detect bugs in the designs for the low rate of improvement. After three failures, many designs afflicted with "really hard problems" are declared disasters and abandoned altogether, he said."
A 66% initial failure rate is troubling and says something about the EDA industry’s current inability to support designs of deep-submicron complexity. Not to sound like a broken record here, but the current approach to system design, which is based on techniques developed more than 10 years ago, is now well and truly broken. The statistics prove it! This trend will only worsen as 90nm and then 65nm design rules become more common.
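It's worth pausing on what those quoted numbers actually imply. The cumulative figures (one-third first-silicon success, 60% success after three spins) are from the Collett survey as quoted; the per-respin arithmetic below is my own back-of-the-envelope derivation, not survey data:

```python
# Cumulative success rates as quoted from the Collett and Associates data
# (2003, 0.13-micron designs). The derived figures below are my own
# arithmetic on those two numbers, not additional survey results.

first_spin_success = 1 / 3   # ~one-third work on first silicon
after_three_spins = 0.60     # only 60% working after the third iteration

# Share of all designs still broken at each stage.
initial_failure_rate = 1 - first_spin_success        # ~67%
still_failing_after_three = 1 - after_three_spins    # 40%

# Of the designs that missed first silicon, what fraction did the
# next respins actually rescue?
rescued = (after_three_spins - first_spin_success) / initial_failure_rate

print(f"initial failure rate: {initial_failure_rate:.0%}")
print(f"still failing after three spins: {still_failing_after_three:.0%}")
print(f"failed first-spin designs rescued by respins: {rescued:.0%}")
```

The striking part is the last number: respins rescued only about 40% of the designs that failed first silicon, which is what makes the "hard-to-detect bugs" explanation so plausible, since easy bugs would have been caught by the second or third spin.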
I believe that the solution to this design problem is to engineer systems at much higher abstraction levels. That means that engineers need to spend much less time hacking RTL, far less time verifying new blocks of custom logic, and much more time thinking about, tinkering with, and simulating systems at the block-diagram level. To do this, design teams must use building blocks larger than gates, flip-flops, registers, and ALUs. They must also cease and desist from manually translating algorithms from high-level languages into hardware-description languages.
Moore’s Law isn’t dead. The International Technology Roadmap for Semiconductors (ITRS) has codified this law and ensures that we will have more transistors per chip every year. Our system-design styles must now use those transistors far more effectively to overcome the barriers to complex system design and to substitute what’s in surplus (transistors) for what’s scarce (engineering time and project cycle time).
I work for a configurable microprocessor core vendor, so it's no secret that I think processor cores are part of the solution. They are pre-verified, correct-by-construction blocks of RTL that need relatively little verification. Processor cores can run software directly, eliminating manual translation of C or C++ to RTL. Configurable cores can run HLL programs at speeds approaching those of hand-built RTL blocks, but they're far easier to design into a system in large numbers. Processor cores and memories are clearly part of the solution to the high failure rate of today's SOC designs.