Lessons of history from Leslie Berlin’s “The Man Behind the Microchip: Robert Noyce and the Invention of Silicon Valley”
Everyone knows, and no one remembers, that history repeats itself. This maxim is true even in the short history of the electronics industry. Here are a few excerpts from Dr. Leslie Berlin’s Bob Noyce biography, “The Man Behind the Microchip,” that serve as gentle reminders:
1. The time is late in 1949, two years after Bell Labs announces the creation of the transistor. Bob Noyce has just started his first year of graduate studies at MIT:
“… [Wayne Nottingham’s] Physical Electronics seminar might well have been Noyce’s only direct instruction on the topic [transistors] that year, for MIT had yet to incorporate the transistor into its formal curriculum. Nottingham’s Electronics class, for example, did not mention the device at all in 1949. The transistor was a new technology, and it had very real problems. It was hard to build a functional point-contact transistor; indeed, simply replicating the Bell team’s results was difficult. Vacuum tubes, by contrast, were entering their heyday: they were far cheaper and more stable than ever before. No one—certainly not Nottingham—saw any evidence to indicate that the point-contact transistor would be in a position to replace tubes for a long, long time.”
Within the next 10 years, Bob Noyce would join the transistor research group at Philco; in 1956 he would leave Philco to join Shockley Semiconductor Laboratory in Mountain View; and less than two years later he would found Fairchild Semiconductor with seven other Shockley refugees/traitors. In 1959, only 10 years after Noyce started his MIT graduate work, Fairchild’s Jean Hoerni would develop the planar process, with its protective coating of silicon dioxide, which tremendously boosted transistor ruggedness and reliability and gave Noyce the missing piece of the IC puzzle. The planar process enabled the invention of the integrated circuit and remains the bedrock of all semiconductor manufacturing more than 40 years later. Today, we take transistors for granted, but in 1949 they were weak, unreliable laboratory curiosities with no hope of competing against more than four decades of vacuum-tube R&D.
2. The year is 1961. In March of that year, Fairchild Semiconductor introduced its first integrated circuits, dubbed Micrologic:
“The reaction was gratifying but did not translate into widespread adoption. By the end of 1961, Fairchild had sold fewer than $500,000 of its Micrologic devices, which were priced at about $100 apiece. Texas Instruments, the only other major supplier, was having such problems selling integrated circuits that it cut prices from $435 to $76 in 90 days. The move had little effect.
Customers’ objections to integrated circuit technology abounded. The devices were extremely expensive relative to discrete components—up to 50 times the cost for comparable performance, albeit in a smaller package. Many engineers, designers, and purchasing agents working for Fairchild’s customers feared that integrated circuits would put them out of work. For decades, these customers had designed the circuits they needed from off-the-shelf transistors [and vacuum tubes before transistors], resistors, and capacitors that they bought from manufacturers like Fairchild. Now Noyce wanted to move the Fairchild integrated circuit team into designing and building standard circuits that would be sold to customers as a fait accompli. If the integrated circuit manufacturers designed and built the circuits themselves, what would the engineers at the customer companies do? Moreover, why would a design engineer with a quarter century’s experience want to buy a circuit designed by [a] 30-year-old employee of a semiconductor manufacturing firm? And furthermore, while silicon was ideal for transistors, there were better materials for making the resistors and capacitors that would be built into the integrated circuit. Making these other components out of silicon might degrade the overall performance of the circuits.
As late as the spring of 1963, most manufacturers believed that integrated circuits would not be commercially viable for some time, telling visitors to their booths at an industry trade show that ‘these items would remain on the R&D level until a breakthrough occurs in technology and until designs are vastly perfected.’”
In the spring of 1964, Bob Noyce started selling Micrologic flip-flops for less than the cost of the discrete components needed to build an equivalent flip-flop, and for less than the manufacturing cost of the IC:
“Less than a year after the dramatic price cuts, the market [for ICs] had so expanded that Fairchild received a single order (for half-a-million circuits) that was the equivalent of 20 percent of the entire industry’s output of circuits for the previous year. One year later, in 1966, computer manufacturer Burroughs placed an order with Fairchild for 20 million integrated circuits.”
“By the middle of the 1960s, Fairchild was one of the fastest-growing companies in the United States.”
Today, we take the integrated circuit and Moore’s Law for granted, but in the early 1960s, their future was anything but certain.
3. By 1968, Bob Noyce was fed up with the management at Fairchild Camera and Instrument, the parent company of Fairchild Semiconductor. He and Gordon Moore incorporated a company called NM Electronics in July, 1968. NM Electronics would become Intel by the end of the year.
“Just as the industry’s high hopes for integrated circuits had launched the earlier gaggle of startup companies, the 1968-1969 generation was inspired by a belief that semiconductors were on the cusp of another dramatic technological breakthrough… Already, Moore’s own R&D group at Fairchild had fit a once-unthinkable 1,024 transistors onto a single circuit. In 1968, this circuit was little more than a lab curiosity, but the general consensus held that circuits with more than 1,000 components integrated together—so-called Large Scale Integrated circuits—should be physically possible to mass produce by 1970.”
However, large-scale integration (LSI) was certainly not considered a slam-dunk technological leap in 1968:
“… Noyce and Moore, in fact, had settled on computer memories as a first product not primarily because the computer market was growing—although that was a welcome reality—but because memories would be the easiest types of LSI circuits to build. … In Moore’s words, LSI was a ‘technology looking for applications’ in 1968. Which is to say: if LSI technology was going to work anywhere, it would work first in memories. If it worked in memories, Noyce and Moore could anticipate an ever-growing market of computer makers ready to buy.”
The plan, therefore, was to replace core memories with semiconductor memory. Core memory, developed in the early 1950s, pervaded computer design in the late 1960s. Mainframes and minicomputers used core memories because there simply was no alternative memory technology that could compete on the basis of cost/bit, energy consumption, or size. Even so, core memory was vulnerable to attack if a superior, more cost-effective technology could be developed:
“Magnetic cores had their shortcomings, however, and in these Noyce and Moore had seen a potential foothold for Intel. Cores were not a particularly fast means of storing data. … Moreover, the core memories were built by hand. Every one of those iron donuts was individually strung on a wire by a woman in a factory, most likely in Asia. Noyce and Moore knew that this labor-intensive means of production was not sustainable for a computer market growing exponentially, just as they had known a decade earlier that hand-wired discrete components could not serve the exploding market for space-age electronics.”
Even with these shortcomings in core memory, Noyce and Moore knew by then that not all technological improvements are immediately embraced by design engineers:
“Moore and Noyce knew that the problems with cores were irrelevant to most computer engineers, who did not spend their time thinking about how they would build their machines ten years in the future. These engineers cared about how their computers worked now, and so the cost advantages of semiconductor memory would have to be overwhelming before engineers would consider abandoning the clunky, but reliable, magnetic cores. A sense of déjà vu may again have struck Noyce and Moore, who faced a similar obstacle when they initially brought the integrated circuit to market. Noyce, the architect of Fairchild’s decision to sell integrated circuits below cost to get a foothold in the discrete components market, was betting a similar strategy would work for semiconductor memories.”
These first memories weren’t easy to manufacture, and low yields kept them from being sold cheaply. Intel’s first MOS memory, the 256-bit 1101 static RAM (SRAM), was too slow and too expensive when it was introduced in September, 1969. At 20 to 60 cents per bit, it cost 5x to 12x more than core memory. Even cutting the 1101’s price by 75% didn’t help sales.
However, Intel pressed on and introduced the 1-Kbit 1103 dynamic RAM (DRAM) in October, 1970:
“To be sure, the device was far from perfect. Among the 1103’s many failings known to Intel was the fact that, in Andy Grove’s words, ‘under certain adverse conditions, the thing just couldn’t remember’—a problem for a memory. Some 1103s failed when they were shaken. A few developed moisture under the glass used to seal them. Often no one knew why the devices would stop working. The problems inspired Ted Hoff to write a 28-page memo explaining the 1103’s operation and quirks.
Andy Grove had nightmares that boxes and boxes of 1103s would be returned to the company for defects—and would ruin Intel entirely. Gordon Moore, on the other hand, wondered if, in some perverse way, the 1103’s problems made it easier to convince customers to use the device. Engineers who specialized in core memories recognized analogs in the 1103. Both suffered from voltage and pattern sensitivity. … ‘All these things made the 1103 more challenging and less threatening to engineers [at customer companies],’ Moore explains. ‘We did not plan it to happen this way, but I think that if [the 1103] had been perfect out of the box, we would have had a lot more resistance [to it] from our customers.’”
By 1972, Intel was building 100,000 1103 DRAMs per month and was still unable to meet demand for the device, even with a Canadian second source that had paid Intel millions of dollars for second-source rights. Essentially all of Intel’s 1972 revenue, $23.4 million, came from sales of the 1103.
4. Today, Intel’s no longer in the memory business—microprocessors are now the company’s bread and butter. Intel entered the microprocessor business completely by accident. It then took a lot of missionary work before the microprocessor became a successful product category:
“Intel’s microprocessor story opens in the spring of 1969, around the time that [Gordon] Moore called [Bob] Noyce in Aspen to tell him that the MOS team had a working silicon-gate memory. A manager from a Japanese calculator company called Busicom, which was planning to build a family of high-performance calculators, contacted either [marketing manager] Bob Graham or Noyce to ask if Intel, which had a small business building custom chips designed by customers, would like to manufacture a chip set that would run the calculator. Calculator companies around the world were seeking out semiconductor companies to build chips for their machines, and Noyce said that Intel was nearly the only manufacturer left who had not already agreed to work with a calculator company. … Busicom, which was designing a particularly complex calculator, wanted a set of a dozen specialized chips with 3,000 to 5,000 transistors each. Busicom planned to send a team of engineers to Intel to design the chips on-site and would pay Intel $100,000 to manufacture its calculator chip sets. Busicom expected to pay Intel about $50 for each set manufactured and promised to buy at least 60,000 of them. Intel agreed to this arrangement.”
Noyce made Ted Hoff, Intel’s resident computer expert, the company liaison to the Busicom design team. Hoff did more than he was assigned. He took a technical interest in the Busicom design and soon concluded that disaster was on the horizon. The projected transistor count for each chip was beyond the state of the art and the large number of chips in the set was going to drive the component cost well above the $50 target.
Hoff developed an alternative scheme based on a general-purpose programmable chip that would act like a small computer processor. This device could be programmed using software stored in memory devices (Intel’s primary intended product line at the time), and the whole Busicom system could then be built using far fewer device types. The programmable device, of course, was a microprocessor. Hoff tried to convince the Busicom engineers to change direction, but he failed.
Consequently, Hoff went to Noyce and convinced him. Noyce told Hoff to go off and develop his idea, just in case the predicted disaster materialized. In August, 1969, Noyce wrote to the president of Busicom and took Hoff’s position: there was no way that Intel would be able to manufacture the chip set then under development and sell it to Busicom for $50/set. Noyce estimated that the price would be more like $300/set. He asked if Busicom still wanted to continue the project.
In September, Bob Graham sent a similar letter but also suggested that Intel had developed an in-house alternative that might better meet Busicom’s cost target. Busicom sent two executives to Intel in October. They considered the alternatives and chose Intel’s approach, with a projected cost of $155/chip set. Then, Hoff and Intel did nothing. The agreement wasn’t signed until February, 1970. In March, 1970, Busicom sent a “How’s it going?” letter to Intel. Only then did Intel hire Federico Faggin to work on the microprocessor’s design. In nine short months, he designed and then produced working samples of the four chips in the Intel calculator chip set.
However, the calculator market had grown fiercely competitive, and Busicom indicated that it wanted to negotiate a price reduction even before volume production started. Noyce asked Hoff for advice on the contract renegotiation. Hoff said to place the highest priority on the right to sell the chips to other customers. The renegotiation stalled until August, 1971 and was finalized in September. Intel had won the right to sell the microprocessor, which it introduced as the 4004 in November, 1971, a month after Intel’s IPO.
However, the microprocessor was not an overnight sensation. For example, Noyce foresaw the microprocessor’s use in automobiles and went to General Motors in 1971 to talk about adopting microprocessors for automotive applications. GM already had an automotive electronics program underway, but the GM execs were skeptical that something as advanced as Intel’s computer-on-a-chip would be controlling vehicle brakes or anything else inside a car anytime soon.
“… Noyce almost certainly told them, their skepticism was well grounded. No one would want the 4004 controlling brakes in production cars; the device was too slow and too rudimentary for general use. And its successor, the 8008 (introduced in April 1972), was not much better.
But Noyce was not trying to sell 4004s or 8008s to General Motors. He was starting conversations that he expected would only bear fruit years later. He knew he was contending with entrenched ways of thinking and years-long design cycles. He felt confident that by the time these customers were prepared to experiment with microprocessors, the technology would have caught up with his visions for it. And indeed it did.”