The Chip Shortage, Giant Chips, and the Future of Moore’s Law
With COVID-19 shaking the global supply chain like an angry toddler with a box of jelly beans, the average person had to take a crash course in the semiconductor industry. And many of them didn't like what they learned. Want a new car? Tough luck, not enough chips. A new gaming system? Same. But you are not the average person, dear reader. So, in addition to learning why there was a chip shortage in the first place, you also discovered that you can, with considerable effort, fit more than two trillion transistors on a single chip. You also found that the future of Moore's Law depends as much on where you put the wires as on how small the transistors are, among many other things.
So to recap the semiconductor stories you read most this year, we've put together this set of highlights:
This year you learned the same thing that some car makers did: Even if you think you've hedged your bets by having a diverse set of suppliers, those suppliers (or the suppliers of those suppliers) might all be using the output of the same small set of semiconductor fabs.
To recap: Car makers panicked and cancelled orders at the outset of the pandemic. Then when it seemed people still wanted cars, they discovered that all of the display drivers, power management chips, and other low-margin stuff they needed had already been sucked up into the work/learn/live-from-home consumer frenzy. By the time they got back in line to buy chips, that line was nearly a year long, and it was time to panic again.
Chipmakers worked flat out to meet demand and have unleashed a blitz of expansion, though most of that is aimed at higher-margin chips than those that clogged the engine of the automotive sector. The latest numbers from SEMI, the chip-manufacturing-equipment industry association, show equipment sales set to cross US $100 billion in 2021, a mark never before reached.
As for car makers, they may have learned their lesson. At a gathering of stakeholders in the automotive electronics supply chain this summer at GlobalFoundries Fab 8 in Malta, N.Y., there was enthusiastic agreement that car makers and chip makers needed to get cozy with each other. The result? GlobalFoundries has already inked agreements with both Ford and BMW.
Next Gen Chips Will Be Powered From Below Transistors
You can make transistors as small as you want, but if you can't connect them up to each other, there's no point. So Arm and the Belgian research institute imec spent a few years finding room for those connections. The best scheme they found was to take the interconnects that carry power to logic circuits (as opposed to data) and bury them under the surface of the silicon, linking them to a power-delivery network built on the backside of the chip. This research trend suddenly became news when Intel said what sounded like: "Oh yeah. We're definitely doing that in 2025."
Cerebras' New Monster AI Chip Adds 1.4 Trillion Transistors
What has 2.6 trillion transistors, consumes 20 kilowatts, and carries enough internal bandwidth to stream a billion Netflix movies? It's generation 2 of the biggest chip ever made, of course! (And yes, I know that's not how streaming works, but how else do you describe 220 petabits per second of bandwidth?) Last April, Cerebras Systems topped its original, history-making AI processor with a version built using a more advanced chip-making technology. The result was a more-than-doubling of the on-chip memory to an impressive 40 gigabytes, an increase in the number of processor cores from the previous 400,000 to a speech-stopping 850,000, and a mind-boggling boost of 1.4 trillion additional transistors.
Gob-smacking as all that is, what you can do with it is really what's important. And later in the year, Cerebras showed a way for the computer that houses its Wafer Scale Engine 2 to train neural networks with as many as 120 trillion parameters. For reference, the massive (and occasionally foul-mouthed) GPT-3 natural language processor has 175 billion. What's more, you can now link up to 192 of these computers together.
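If you want to check the napkin math yourself, here's a quick sanity check of both claims. The 5-megabit-per-second figure for an HD stream is my assumption, not Cerebras's, so treat this as a back-of-envelope sketch:

```python
# Back-of-envelope check on the Wafer Scale Engine 2 numbers above.

FABRIC_BANDWIDTH_BITS = 220e15  # 220 petabits/s of on-chip bandwidth
STREAM_BITRATE_BITS = 5e6       # ~5 Mb/s per HD stream (my assumption)

streams = FABRIC_BANDWIDTH_BITS / STREAM_BITRATE_BITS
print(f"Concurrent HD streams: {streams:.1e}")  # ~4.4e10, so "a billion movies" is conservative

# And the model-size comparison from the text:
wse2_params = 120e12  # up to 120 trillion parameters
gpt3_params = 175e9   # GPT-3's 175 billion
print(f"WSE-2 parameter ceiling vs. GPT-3: {wse2_params / gpt3_params:.0f}x")  # ~686x
```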
Of course, Cerebras' computers aren't the only ones meant to tackle absolutely huge AI training jobs. SambaNova is after the same title, and clearly Google has its eye on some awfully big neural networks, too.
IBM Introduces the World's First 2-nm Node Chip
IBM claimed to have developed what it called a 2-nanometer node chip and expects to see it in production in 2024. To put that in context, leading chipmakers TSMC and Samsung are going full-bore on 5 nm, with a possible cautious start for 3 nm in 2022. As we reminded you last year, what you call a technology process node has absolutely no relation to the size of any part of the transistors it constructs. So whether IBM's process is any better than its rivals' will really come down to the combination of density, power consumption, and performance.
The real significance is that IBM's process is another endorsement of nanosheet transistors as the future of silicon. While each big chipmaker is moving from today's FinFET design to nanosheets at its own pace, nanosheets are inevitable.
RISC-V Star Rises Among Chip Developers Worldwide
The news hasn't all been about transistors. Processor architecture is increasingly important. Your smartphone's brain is probably based on an Arm architecture; your laptop and the servers it's so attached to are likely based on the x86 architecture. But a fast-growing cadre of companies, particularly in Asia, is looking to an open-source chip architecture called RISC-V. The attraction is that it allows startups to design custom chips without paying the costly licensing fees for proprietary architectures.
Even big companies like Nvidia are incorporating it, and Intel expects RISC-V to boost its foundry business. Seeing RISC-V as a possible path to independence in an increasingly polarized technology landscape, Chinese firms are particularly bullish on it. Only last month, Alibaba said it would make the source code for its RISC-V core available.
New Optical Switch up to 1000x Faster Than Transistors
Although certain types of optical computing are getting closer, the switch researchers in Russia and at IBM described in October is likely destined for a far-future computer. Relying on exotic stuff like exciton-polaritons and Bose-Einstein condensates, the device switched about 1 trillion times per second. That's so fast that light would travel only about one-third of a millimeter before the device switches again.
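That one-third-of-a-millimeter figure is easy to check yourself; it's just the speed of light divided by the switching rate:

```python
# How far does light get between switching events?

C = 3.0e8            # speed of light in vacuum, m/s
SWITCH_RATE = 1e12   # ~1 trillion switching events per second

distance_m = C / SWITCH_RATE
print(f"Light travels {distance_m * 1e3:.2f} mm per switch")  # 0.30 mm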
New Type of DRAM Could Accelerate AI
One of AI's big problems is that its data is so far away. Sure, that distance is measured in millimeters, but these days that's a long way. (Somewhere there's an Intel 4004 saying, "Back in my day, data had to go 30 centimeters, uphill, in a snowstorm.") Engineers are coming up with lots of ways to shorten that distance. But this one really caught your attention:
Instead of building DRAM from silicon transistors and a metal capacitor built above it, use a second transistor as the capacitor and build them both above the silicon from oxide semiconductors. Two research groups showed that these transistors could keep their data way longer than ordinary DRAM, and they could be stacked in layers above the silicon, giving a much shorter path between the processor and its precious data.
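Just how long is "a long way"? Here's a rough sketch, assuming signals move at about half the speed of light in an interconnect; the specific distances are my illustrative guesses, not figures from either research group:

```python
# Rough signal propagation times: an old-school board trace versus
# memory sitting millimeters (or micrometers) from the processor.

C = 3.0e8           # speed of light, m/s
V_SIGNAL = 0.5 * C  # assumed propagation speed in an interconnect

for label, meters in [("30 cm board trace", 0.30),
                      ("5 mm to nearby DRAM", 5e-3),
                      ("10 um to stacked oxide DRAM", 10e-6)]:
    t = meters / V_SIGNAL
    print(f"{label}: {t * 1e12:,.1f} ps")  # ~2,000 ps vs ~33 ps vs ~0.1 ps
```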
Intel Unveils Big Processor Architecture Changes
In August Intel unveiled what it called the company's biggest processor architecture advances in a decade. They included two new x86 CPU core architectures: the straightforwardly named Performance-core (P-core) and Efficient-core (E-core). The cores are integrated into Alder Lake, a "performance hybrid" family of processors that includes new technology to let the upcoming Windows 11 OS run the CPUs more efficiently.
"This is an awesome time to be a computer architect," senior vice president and general manager Raja Koduri said at the time. The new architectures and SoCs Intel unveiled "demonstrate how architecture will satisfy the crushing demand for more compute performance as workloads from the desktop to the data center become larger, more complex, and more diverse than ever."
If you want, you could translate that as: "In your face, process technology and device scaling! It's all about the architecture now!" But I don't think Koduri would take it that far.
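For the curious, software can already see that P-core/E-core split. On Linux, my understanding is that recent kernels expose the two core types as separate performance-monitoring units in sysfs; treat the paths below as an assumption about your particular kernel, not gospel, and expect them to be absent on non-hybrid machines:

```python
# Minimal sketch: list which logical CPUs are P-cores vs. E-cores on a
# hybrid Alder Lake system under Linux. Assumes the kernel exposes the
# cpu_core / cpu_atom PMUs at these sysfs paths.

from pathlib import Path

def read_cpu_list(path):
    p = Path(path)
    return p.read_text().strip() if p.exists() else "none found"

print("P-cores:", read_cpu_list("/sys/devices/cpu_core/cpus"))
print("E-cores:", read_cpu_list("/sys/devices/cpu_atom/cpus"))
```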
U.S. Takes Strategic Step to Onshore Electronics Manufacturing
A bit alarmed by just how geographically close China is to Taiwan and South Korea, home to TSMC and Samsung, the only two companies capable of making the most advanced logic chips, U.S. lawmakers got the ball rolling on an effort to boost cutting-edge chipmaking in the United States. Some of that has already started, with TSMC, Samsung, and Intel making major fab investments. Of course, Taiwan and South Korea are also making major domestic investments, as are Europe and Japan.
It's all part of a broader economic and technological nationalism playing out across the world, notes geopolitical futurist Abishur Prakash of the Center for Innovating the Future, in Toronto. "Some see these shifts in geopolitics as short term, as if they're byproducts of the pandemic and that things on a certain timeline will calm down if not return to normal," he told IEEE Spectrum in May. "That's wrong. The direction that nations are moving now is the new permanent north star."
Event-Based Camera Chips Are Here, What's Next?
Hey, remember all that brain-based processing stuff we've been banging on about for decades? Well, it's here now, in the form of a camera chip made by French startup Prophesee and major imager manufacturer Sony. Unlike a regular imager, this chip doesn't capture frame after frame with each tick of the clock. Instead it only notes the changes in a scene. That means both much lower power (when there's nothing happening, there's nothing to see) and faster response times.
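If you want a feel for what an event pixel actually computes, here's a toy model: the standard event-camera idealization, where a pixel fires only when its log brightness changes by more than a threshold. This is a sketch of the concept, not Prophesee's actual (analog, asynchronous) circuit, and the threshold value is my own placeholder:

```python
# Toy event-camera model: emit +1/-1 events only where log intensity
# changes by more than a threshold; a static scene produces nothing.

import numpy as np

def events_from_frames(prev_frame, new_frame, threshold=0.2):
    """Return +1/-1 events where log brightness moved past the threshold."""
    eps = 1e-6  # avoid log(0)
    diff = np.log(new_frame + eps) - np.log(prev_frame + eps)
    events = np.zeros_like(diff, dtype=np.int8)
    events[diff > threshold] = 1    # brightness went up
    events[diff < -threshold] = -1  # brightness went down
    return events

# Static scene: no events, hence (almost) no data and no power.
scene = np.full((4, 4), 0.5)
print(np.count_nonzero(events_from_frames(scene, scene)))  # 0 events

# One pixel brightens: exactly one event fires.
moved = scene.copy()
moved[2, 2] = 1.0
print(np.count_nonzero(events_from_frames(scene, moved)))  # 1 event
```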