SOC Design

Thursday, June 12, 2008

This blog has moved

A year ago, I moved this blog to get a wider audience and to sidestep blogspam. See you there!


Tuesday, February 20, 2007

Quantum Conundrum

Last week, I attended the debut of what may become the first commercial quantum computer. Or not. The EDN article I wrote about this demonstration is here.

D-Wave, the company that has been developing this quantum hardware for eight years, used the Computer History Museum as its introduction venue. However, the computer itself was located in Burnaby, British Columbia, and was operated via the Internet. So we have to take D-Wave's word that we were watching an actual quantum computer solve problems. That's not to say I disbelieve D-Wave, only that I cannot say with 100% confidence that I indeed saw a quantum computer in action.

Other press outlets have published quotes that scientists are "dubious" about D-Wave's claims. I think that's the wrong word. D-Wave hasn't been forthcoming about key technical details (but says it will be in the future), so I'd say that the community is presently "unconvinced." We'd like more information before passing judgement. In the meantime, I consider quantum computing to be "spooky information processing at a distance," to paraphrase Einstein.

D-Wave's Orion, the company's proof-of-concept machine, solves NP-complete problems. These are the sort of problems that require a full solution search in conventional computers, which is a very slow process for problems with large solution sets. As it is today, Orion is about 100x slower than today's computers because it's only a 16-qubit (quantum bit) machine. By the end of 2008, D-Wave believes it can have a 1024-qubit machine running that would be 10x faster than conventional binary computers at solving NP-complete problems.
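To see why "full solution search" bogs down conventional machines, consider subset-sum, a classic NP-complete problem. A brute-force solver (this sketch and its function name are mine, not anything from D-Wave) must in the worst case examine all 2^n subsets of an n-element set:

```python
from itertools import combinations

def subset_sum(values, target):
    """Exhaustively check every subset of values for one summing to target.

    Worst case examines all 2**n subsets; each additional element
    doubles the search space, which is why large instances crawl
    on conventional computers.
    """
    for r in range(len(values) + 1):
        for combo in combinations(values, r):
            if sum(combo) == target:
                return combo
    return None

print(subset_sum([3, 34, 4, 12, 5, 2], 9))  # finds (4, 5)
```

At 16 elements the worst case is 65,536 subsets; at 1,024 elements it is 2^1024, which no amount of conventional hardware can enumerate. That scaling gap is what a large quantum machine would aim to exploit.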


Saturday, January 07, 2006

CES Update: The Revolution Will be Televised

The revolution will not be televised, will not be televised,
will not be televised, will not be televised.
The revolution will be no re-run brothers;
The revolution will be live.
- Gil Scott-Heron

I have just returned from the 2006 CES in Las Vegas. It was packed with people. As the volume driver in the electronics industry has switched from personal computers to consumer electronics, CES has taken from Comdex the mantle of the industry's leading light into the murky future. That mantle had previously passed from the National Computer Conference (NCC) to Comdex around 1981, when PCs started to dwarf mainframes in market dominance and NCC refused to heed the change.

The big news at Comdex, er CES, this year was a 100-year-old idea called television. New-millennium television is becoming a when-you-want-it, where-you-want-it, how-you-want-it affair. Dick Tracy had this capability in his wristwatch exactly 60 years ago. Now it seems that it's time for everyone else to have it too.

The "when you want it" phase started with VCRs in the 1970s and it has evolved into today's DVD recorders and PVRs (personal video recorders). However, all of these devices are tethered to coaxial cables tied to stuck-in-the-wall cable sockets and immobile satellite dish antennas. Also, these consumer products are only time-shifting devices; they don't jimmy with the image format and resolution. How-you-want-it and where-you-want-it boxes such as Apple's video iPod and other personal media players are just starting to appear.

As CES 2006 demonstrated, the industry is full of companies working on place-shifting and format-shifting video products. Two new classes of video place shifters I saw at CES are mobile phone handsets capable of receiving video broadcasts and boxes that cram video into IP packets and unleash them onto the Internet. LG seems to be way ahead on phone handsets that receive terrestrial and satellite video. The company was showing several video-capable handsets at CES. They just wouldn't let me shoot photos of them. So only the 140,000 other people at CES got to see them.

The other place-shifting product is epitomized by the Sling box from Sling Media. This oddly shaped box (it looks like a large silver-colored bar of candy or a silver-colored gold bar to me) takes in video and spits packets out of an Ethernet port. What you do with those packets is your business. Receive them on your computer at work, your laptop at Starbucks, or your Treo wherever you happen to be.

Both the mobile handsets and the Sling box need to reformat video to fit a target playback device that clearly isn't a conventional television receiver. Their ability to reformat images must satisfy three conflicting goals.

  • The video should look good.
  • The compression format used to send the video should consume very little bandwidth.
  • The amount of power required to encode and decode the compressed video should be small.
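A little arithmetic shows why the bandwidth goal dominates. The screen size, frame rate, and target bitrate below are illustrative assumptions of mine, not figures from Sling Media, LG, or any other vendor:

```python
def raw_bitrate(width, height, fps, bits_per_pixel=24):
    """Bits per second for uncompressed video at the given resolution."""
    return width * height * bits_per_pixel * fps

# A modest 320x240 handset screen at 15 frames/s:
raw = raw_bitrate(320, 240, 15)   # 27,648,000 bits/s, about 27.6 Mbps

# A plausible compressed stream for that screen might target ~300 kbps:
compressed = 300_000

print(f"compression ratio needed: {raw / compressed:.0f}:1")  # ~92:1
```

Even a small screen at a low frame rate needs roughly two orders of magnitude of compression before it fits down a cellular link or a home broadband uplink, and every bit of that compression work costs encoding and decoding power, which is the third goal fighting the second.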

Companies that master video-compression algorithms supporting these goals will be in high demand.

Gil Scott-Heron clearly got it right in the 1970s. But in the 21st century, the revolution will be televised.

Thursday, December 15, 2005

ST backs NOC for SOC design productivity

From EE Times:

"ST says an effective NoC architecture will be a crucial precondition for cost-effective SoCs targeted at convergence devices and, in particular, NoC technology will play a major role in improving design productivity."

Networks on Chip

Earlier this week, an article I wrote about networks on chip (NOCs), called "NOC, NOC, NOCing on Heaven's Door" (another song reference, this time to Bob Dylan), was published. The article's based on some really great presentations I saw last month at the SOC conference held in Tampere, Finland. Cool place. Literally.

Pun of the day

The irresistible "fish with chips" from a Reuters story as published online by MSNBC. The chips are tracking devices, of course, and are made of silicon, not Idaho potatoes.

Dare to be stupid, Dare to be stupid

Want to get confused? Really confused? Then take a look at this blog entry discussing multiple processor cores in PC-processor land. Be sure to read the comments made by the informed, the uninformed, the partially informed, and the intentionally lame.

Hopefully, SOC designers aren't nearly this confused. I also hope the advice delivered in the SOC design community isn't this, er, diffuse.

Thanks to Tensilica's Lee Vick for the pointer to the blog.

(BTW: The title of this blog entry is a reference to a Weird Al song that apes the music of Devo.)

Wednesday, December 14, 2005

SOC Design: Just what do you optimize?

A recent article and an unrelated analyst presentation give excellent advice to SOC designers and managers contemplating the plunge below 100nm. The rules of system design change below this lithography threshold, and the article and the presentation provide some partial roadmaps to success.

EE Times' EDA editor Richard Goering wrote a recent column on Design for Inefficiency that questions how SOC design teams trade off transistor budgets for time to market. Sound like heresy? I remind you, oh gentle reader, that precisely the same discussions about using C for embedded systems software were occurring 20 years ago. If you haven't heard, the relatively inefficient C language won over efficient assembly code precisely because of time-to-market issues. Most of today's systems would never get to market if they were solely or even largely based on software written in assembly language.

Last week at Gartner's Semiconductor Industry Briefing, held at the Doubletree in San Jose, Research VP and Chief Analyst Bryan Lewis discussed "second-generation SOCs" in his presentation titled Charting the Course for Second-Generation SOC Devices. He described second-generation SOCs as high-gate-count devices using mixed process technologies, multiple processors, and multiple software layers. In Lewis' vision of a second-generation SOC, the multifunctional chip is built with multiple processor cores, each driving its own subsystem with its own operating system and application firmware. This design approach is unlike today's most common approach of loading up one main processor with as many tasks as possible, and then some.

Lewis' second-generation vision encompasses a divide-and-conquer approach to complex system design, and it closely relates to Goering's theme of asking, "Just what do you optimize?" The more you burden one processor with an increasing number of tasks, the more complex the software gets and the faster the processor must run. The result: exponentially increasing software complexity (think lost time to market and bug-riddled code) and exponentially increasing power dissipation and energy consumption (think less battery life or more expensive power supplies; noisy, expensive, and relatively unreliable cooling fans; and larger, more costly product enclosures).
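A first-order power model shows why the divide-and-conquer approach pays off. Dynamic CMOS power goes as P = C·V²·f, and because supply voltage must rise roughly in step with clock frequency, power grows roughly with the cube of frequency. The linear voltage-frequency scaling below is a textbook simplification, not a measurement of any particular process:

```python
def dynamic_power(freq, capacitance=1.0):
    """First-order CMOS dynamic power: P = C * V^2 * f,
    assuming supply voltage scales linearly with clock frequency
    (so power grows roughly as f cubed)."""
    voltage = freq  # normalized: V proportional to f
    return capacitance * voltage**2 * freq

# One fast core doing all the work at normalized frequency 2.0:
one_core = dynamic_power(2.0)       # 8.0 units

# Two cores splitting the same work, each at frequency 1.0:
two_cores = 2 * dynamic_power(1.0)  # 2.0 units

print(one_core / two_cores)  # the single fast core burns 4x the power
```

Under this simple model, halving the clock and doubling the core count cuts power by 4x for the same nominal throughput, which is exactly the argument for giving each subsystem its own modest processor instead of piling every task onto one hot, fast one.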

Once again, the question of the decade is: "What do you optimize?" Do you optimize transistor count to absolutely minimize chip cost while greatly increasing design time and cost and possibly missing market windows, or do you waste truly cheap transistors to buy back some of that time?

I think the answer's pretty clear: 90nm and 65nm transistors are cheap and engineering time is expensive. Lost time to market is virtually priceless. What do you think?

More wisdom from Jack

Jack Ganssle writes for Embedded Systems Design and almost always has interesting things to say. His latest column, on NRE versus cost of goods sold, is no exception. See it here. Although Jack is writing about purchased software in his column, his arguments are equally applicable to IP blocks for SOC designs.

Thursday, December 01, 2005

The Lessons of History

Lessons of history from Leslie Berlin’s “The Man Behind the Microchip: Robert Noyce and the Invention of Silicon Valley”

Everyone knows, and no one remembers, that history repeats itself. This maxim is true even in the short history of the electronics industry. Here are a few excerpts from Dr. Leslie Berlin’s Bob Noyce biography, “The Man Behind the Microchip,” that serve as gentle reminders:

1. The time is late in 1949, two years after Bell Labs announces the creation of the transistor. Bob Noyce has just started his first year of graduate studies at MIT:

“… [Wayne Nottingham’s] Physical Electronics seminar might well have been Noyce’s only direct instruction on the topic [transistors] that year, for MIT had yet to incorporate the transistor into its formal curriculum. Nottingham’s Electronics class, for example, did not mention the device at all in 1949. The transistor was a new technology, and it had very real problems. It was hard to build a functional point-contact transistor; indeed, simply replicating the Bell team’s results was difficult. Vacuum tubes, by contrast, were entering their heyday: they were far cheaper and more stable than ever before. No one—certainly not Nottingham—saw any evidence to indicate that the point-contact transistor would be in a position to replace tubes for a long, long time.”

Within the next 10 years, Bob Noyce would join the transistor research group at Philco; in 1956 he would then leave Philco and join Shockley Transistor Labs in Palo Alto; and then less than two years later he would found Fairchild Semiconductor with seven other Shockley refugees/traitors. In 1959, only 10 years after Noyce started his MIT graduate work, Fairchild’s Jean Hoerni would develop the planar process with its protective coating of silicon dioxide, which tremendously boosted transistor ruggedness and reliability and gave Noyce the missing piece of the IC puzzle. Hoerni’s development of the planar process enabled the invention of the integrated circuit and is the bedrock foundation of all semiconductor manufacturing more than 40 years later. Today, we take transistors for granted, but in 1949 they were weak, unreliable laboratory curiosities with no hope of competing against five decades of vacuum tube R&D.

2. The year is 1961. In March of this year, Fairchild Semiconductor introduced the first integrated circuits, dubbed Micrologic:

“The reaction was gratifying but did not translate into widespread adoption. By the end of 1961, Fairchild had sold fewer than $500,000 of its Micrologic devices, which were priced at about $100 apiece. Texas Instruments, the only other major supplier, was having such problems selling integrated circuits that it cut prices from $435 to $76 in 90 days. The move had little effect.

Customers’ objections to integrated circuit technology abounded. The devices were extremely expensive relative to discrete components—up to 50 times the cost for comparable performance, albeit in a smaller package. Many engineers, designers, and purchasing agents working for Fairchild’s customers feared that integrated circuits would put them out of work. For decades, these customers had designed the circuits they needed from off-the-shelf transistors [and vacuum tubes before transistors], resistors, and capacitors that they bought from manufacturers like Fairchild. Now Noyce wanted to move the Fairchild integrated circuit team into designing and building standard circuits that would be sold to customers as a fait accompli. If the integrated circuit manufacturers designed and built the circuits themselves, what would the engineers at the customer companies do? Moreover, why would a design engineer with a quarter century’s experience want to buy a circuit designed by [a] 30-year-old employee of a semiconductor manufacturing firm? And furthermore, while silicon was ideal for transistors, there were better materials for making the resistors and capacitors that would be built into the integrated circuit. Making these other components out of silicon might degrade the overall performance of the circuits.

As late as the spring of 1963, most manufacturers believed that integrated circuits would not be commercially viable for some time, telling visitors to their booths at an industry trade show that ‘these items would remain on the R&D level until a breakthrough occurs in technology and until designs are vastly perfected.’”

In the spring of 1964, Bob Noyce started selling Micrologic flip-flops for less than the cost of the discrete components needed to build an equivalent flip-flop, and for less than the manufacturing cost of the IC:

“Less than a year after the dramatic price cuts, the market [for ICs] had so expanded that Fairchild received a single order (for half-a-million circuits) that was the equivalent of 20 percent of the entire industry’s output of circuits for the previous year. One year later, in 1966, computer manufacturer Burroughs placed an order with Fairchild for 20 million integrated circuits.”

“By the middle of the 1960s, Fairchild was one of the fastest-growing companies in the United States.”

Today, we take the integrated circuit and Moore’s Law for granted, but in the early 1960s, their future was anything but sure.

3. By 1968, Bob Noyce was fed up with the management at Fairchild Camera and Instrument, the parent company of Fairchild Semiconductor. He and Gordon Moore incorporated a company called NM Electronics in July, 1968. NM Electronics would become Intel by the end of the year.

“Just as the industry’s high hopes for integrated circuits had launched the earlier gaggle of startup companies, the 1968-1969 generation was inspired by a belief that semiconductors were on the cusp of another dramatic technological breakthrough… Already, Moore’s own R&D group at Fairchild had fit a once-unthinkable 1,024 transistors onto a single circuit. In 1968, this circuit was little more than a lab curiosity, but the general consensus held that circuits with more than 1,000 components integrated together—so-called Large Scale Integrated circuits—should be physically possible to mass produce by 1970.”

However, large-scale integration (LSI) was certainly not considered a slam-dunk technological leap in 1968:

“… Noyce and Moore, in fact, had settled on computer memories as a first product not primarily because the computer market was growing—although that was a welcome reality—but because memories would be the easiest types of LSI circuits to build. … In Moore’s words, LSI was a “technology looking for applications” in 1968. Which is to say: if LSI technology was going to work anywhere, it would work first in memories. If it worked in memories, Noyce and Moore could anticipate an ever-growing market of computer makers ready to buy.”

The plan, therefore, was to replace core memories with semiconductor memory. Core memory, developed in the early 1950s, pervaded computer design in the late 1960s. Mainframes and minicomputers used core memories because there simply was no alternative memory technology that could compete with core memory on the basis of cost/bit, energy consumption, or size. Even so, core memory was vulnerable to attack if a superior, more cost-effective technology could be developed:

“Magnetic cores had their shortcomings, however, and in these Noyce and Moore had seen a potential foothold for Intel. Cores were not a particularly fast means of storing data. … Moreover, the core memories were built by hand. Every one of those iron donuts was individually strung on a wire by a woman in a factory, most likely in Asia. Noyce and Moore knew that this labor-intensive means of production was not sustainable for a computer market growing exponentially, just as they had known a decade earlier that hand-wired discrete components could not serve the exploding market for space-age electronics.”

Even with these shortcomings, Noyce and Moore knew by now that not all technological improvements are immediately hailed by design engineers:

“Moore and Noyce knew that the problems with cores were irrelevant to most computer engineers, who did not spend their time thinking about how they would build their machines ten years in the future. These engineers cared about how their computers work now, and so the cost advantages of semiconductor memory would have to be overwhelming before engineers would consider abandoning the clunky, but reliable, magnetic cores. A sense of déjà vu may again have struck Noyce and Moore, who faced a similar obstacle when they initially brought the integrated circuit to market. Noyce, the architect of Fairchild’s decision to sell integrated circuits below cost to get a foothold in the discrete components market, was betting a similar strategy would work for semiconductor memories.”

These first memories weren’t easy to manufacture, and the low yields didn’t allow them to be sold cheaply. Intel’s first MOS memory, the 256-bit 1101 static RAM (SRAM), was too slow and expensive when it was introduced in September, 1969. At 20-60 cents/bit, it was 5x to 12x more expensive than core memory per bit. Even reducing the price of the 1101 by 75% didn’t help sales.

However, Intel pressed on and introduced the 1-Kbit 1103 dynamic RAM (DRAM) in October, 1970:

“To be sure, the device was far from perfect. Among the 1103’s many failings known to Intel was the fact that, in Andy Grove’s words, ‘under certain adverse conditions, the thing just couldn’t remember’—a problem for a memory. Some 1103s failed when they were shaken. A few developed moisture under the glass used to seal them. Often no one knew why the devices would stop working. The problems inspired Ted Hoff to write a 28-page memo explaining the 1103’s operation and quirks.

Andy Grove had nightmares that boxes and boxes of 1103s would be returned to the company for defects—and would ruin Intel entirely. Gordon Moore, on the other hand, wondered if, in some perverse way, the 1103’s problems made it easier to convince customers to use the device. Engineers who specialized in core memories recognized analogs in the 1103. Both suffered from voltage and pattern sensitivity. … ‘All these things made the 1103 more challenging and less threatening to engineers [at customer companies],’ Moore explains. ‘We did not plan it to happen this way, but I think that if [the 1103] had been perfect out of the box, we would have had a lot more resistance [to it] from our customers.’”

By 1972, Intel was building 100,000 1103 DRAMs per year and was still unable to meet demand for the device, even with a Canadian second source that had paid Intel millions of dollars for the second-source rights to the device. Essentially, all of Intel’s 1972 revenue, $23.4 million, came from sales of the 1103.

4. Today, Intel’s no longer in the memory business—microprocessors are now the company’s bread and butter. Intel entered the microprocessor business completely by accident. It then took a lot of missionary work before the microprocessor became a successful product category:

“Intel’s microprocessor story opens in the spring of 1969, around the time that [Gordon] Moore called [Bob] Noyce in Aspen to tell him that the MOS team had a working silicon-gate memory. A manager from a Japanese calculator company called Busicom, which was planning to build a family of high-performance calculators, contacted either [marketing manager] Bob Graham or Noyce to ask if Intel, which had a small business building custom chips designed by customers, would like to manufacture a chip set that would run the calculator. Calculator companies around the world were seeking out semiconductor companies to build chips for their machines, and Noyce said that Intel was nearly the only manufacturer left who had not already agreed to work with a calculator company. … Busicom, which was designing a particularly complex calculator, wanted a set of a dozen specialized chips with 3,000 to 5,000 transistors each. Busicom planned to send a team of engineers to Intel to design the chips on-site and would pay Intel $100,000 to manufacture its calculator chip sets. Busicom expected to pay Intel about $50 for each set manufactured and promised to buy at least 60,000 of them. Intel agreed to this arrangement.”

Noyce made Ted Hoff, Intel’s resident computer expert, the company liaison to the Busicom design team. Hoff did more than he was assigned. He took a technical interest in the Busicom design and soon concluded that disaster was on the horizon. The projected transistor count for each chip was beyond the state of the art and the large number of chips in the set was going to drive the component cost well above the $50 target.

Hoff developed an alternative scheme based on a general-purpose programmable chip that would act like a small computer processor. This device could then be programmed using memory devices (Intel’s primary intended product line at the time) and the whole Busicom system could then be built using far fewer device types. The programmable device, of course, was a microprocessor. Hoff tried to convince the Busicom engineers to shift their direction, but he failed to convince them.

Consequently, Hoff went to Noyce and convinced him. Noyce told Hoff to go off and develop his idea, just in case the predicted disaster materialized. By August, 1969, Noyce wrote to the president of Busicom and took Hoff’s position. There was no way that Intel would be able to manufacture the chip set currently under development and sell it to Busicom for $50/set. Noyce estimated that the price would be more like $300/set. He asked if Busicom still wanted to continue the project.

In September, Bob Graham sent a similar letter but also suggested that Intel had developed an in-house alternative that might better meet Busicom’s cost target. Busicom sent two executives to Intel in October. They considered the alternatives and chose Intel’s approach, with a projected cost of $155/chip set. Then, Hoff and Intel did nothing. The agreement wasn’t signed until February, 1970. In March, 1970, Busicom sent a “How’s it going?” letter to Intel. Only then did Intel hire Federico Faggin to work on the microprocessor’s design. In nine short months, he designed and then produced working samples of the four chips in the Intel calculator chip set.

However, the calculator market had gotten competitive and Busicom indicated that it wanted to negotiate a price reduction, even before volume production started. Noyce asked Hoff for advice on the contract renegotiation. Hoff said to place highest priority on the right to sell the chips to other customers. The renegotiation was stalled until August, 1971 and finalized in September. Intel had gotten the right to sell the microprocessor, which it introduced as the 4004 in November, 1971, a month after Intel’s IPO.

However, the microprocessor was not an overnight sensation. For example, Noyce foresaw the microprocessor's use in automobiles and went to General Motors in 1971 to talk about adopting microprocessors for automotive applications. GM already had an automotive electronics program underway, but the GM execs were skeptical that something as advanced as Intel's computer-on-a-chip would be controlling vehicle brakes or anything else inside of a car soon.

“… Noyce almost certainly told them, their skepticism was well grounded. No one would want the 4004 controlling brakes in production cars; the device was too slow and too rudimentary for general use. And its successor, the 8008 (introduced in April 1972) was not much better.

But Noyce was not trying to sell 4004s or 8008s to General Motors. He was starting conversations that he expected would only bear fruit years later. He knew he was contending with entrenched ways of thinking and years-long design cycles. He felt confident that by the time these customers were prepared to experiment with microprocessors, the technology would have caught up with his visions for it. And indeed it did.”

Thursday, October 06, 2005

The Silicon Steamroller

In an October 5 article, EE Times' editor Dylan McGrath writes: "There is a widespread misconception about the current size and strength of the Chinese fabless semiconductor industry, according to Lung Chu, president of the Asia Pacific region for Cadence Design Systems Inc...

Chu said total revenue for Chinese fabless companies in 2004 was less than $1 billion and that most of the companies' designs are 0.18 micron or 0.25 micron."

So, things look pretty good still for the rest of the world, which seems to hold the high ground of advanced semiconductor design. That is, until you couple this with an October 4 story about IC mask making, written by Richard Goering, also for EE Times. Goering writes about this year's version of an annual mask-usage study sponsored by Sematech and conducted by Shelton Consulting:

"Only 5 percent of IC photomasks are below 100 nm...according to a 'mask industry assessment' study presented at the BACUS Photomask Technology symposium... According to the study results, just under 50 percent of masks use 350 nm or greater ground rules, 12 percent are below 130 nm, 5 percent are below 100 nm and just 0.8 percent are below 70 nm. The study looked at volumes, not revenues or IC transistor counts."

Using these numbers, by my count Chinese fabless design companies can already handle well over 50%, and perhaps as much as 80%, of the designs being created today. That fraction will increase rapidly over the next few years as the design houses in China climb the design learning curve.
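The count works out like this. The study doesn't break out the share at exactly 180 nm and above, so the bounds below interpolate between the buckets it does report:

```python
# Mask volume buckets from the Sematech/Shelton study:
at_or_above_350nm = 0.50   # "just under 50 percent" of masks, rounded up
below_130nm       = 0.12   # 12 percent of masks

# Chinese fabless houses work at 0.18/0.25 micron, so they can handle
# every design at 180 nm or coarser. That share lies somewhere between
# the >=350 nm bucket and everything not below 130 nm:
lower_bound = at_or_above_350nm   # 50 percent
upper_bound = 1.0 - below_130nm   # 88 percent

print(f"between {lower_bound:.0%} and {upper_bound:.0%} of mask volume")
```

The true 180-nm-and-above share sits somewhere inside that 50-to-88 percent band, which is how a conservative read lands at "well over 50%, and perhaps as much as 80%."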

As a country, China has proven many times over that it can steamroller any learning curve it wishes. The only way to avoid being crushed by a steamroller is to find a way to run faster than the steamroller or find a faster vehicle to escape. It's foolish and dangerous to think that the steamroller will run out of fuel before it can reach you.