
What is CMOS 2.0?

by Samuel K. Moore, from IEEE Spectrum

CMOS, the silicon logic technology behind decades and decades of smaller transistors and faster computers, is entering a new phase. CMOS uses two types of transistors in pairs to limit a circuit's power consumption. In this new phase, "CMOS 2.0," that part's not going to change, but how processors and other complex CMOS chips are made will. Julien Ryckaert, vice president of logic technologies at Imec, the Belgium-based nanotechnology research center, told IEEE Spectrum where things are headed.

Julien Ryckaert

Julien Ryckaert is vice president of logic technologies at Imec, in Belgium, where he's been involved in exploring new technologies for 3D chips, among other topics.

Why is CMOS entering a new phase?

Julien Ryckaert: CMOS was the technology answer to build microprocessors in the 1960s. Making things smaller, both transistors and interconnects, to make them better worked for 60, 70 years. But that has started to break down.

Why has CMOS scaling been breaking down?

Ryckaert: Over the years, people have made systems-on-chips (SoCs), such as CPUs and GPUs, more and more complex. That is, they have integrated more and more operations onto the same silicon die. That makes sense, because it is so much more efficient to move data on a silicon die than to move it from chip to chip in a computer.

For a long time, the scaling down of CMOS transistors and interconnects made all those operations work better. But now, it's starting to be difficult to build the whole SoC, to make all of it better by just scaling the device and the interconnect. For example, SRAM [the system's cache memory] no longer scales as well as logic.

What's the solution?

Ryckaert: Seeing that something different needs to happen, we at Imec asked: Why do we scale? At the end of the day, Moore's law is not about delivering smaller transistors and interconnects, it's about achieving more functionality per unit area.

So what you are starting to see is breaking out certain functions, such as logic and SRAM, building them on separate chiplets using technologies that give each the best advantage, and then reintegrating them using advanced 3D packaging technologies. You can connect two functions that are built on different substrates and achieve an efficiency in communication between those two functions that is competitive with how efficient they were when the two functions were on the same substrate. This is an evolution toward what we call smart disintegration, or system technology co-optimization.

So is that CMOS 2.0?

Ryckaert: What we're doing in CMOS 2.0 is pushing that idea further, with much finer-grained disintegration of functions and stacking of many more dies. A first sign of CMOS 2.0 is the imminent arrival of backside-power-delivery networks. On chips today, all interconnects, both those carrying data and those delivering power, are on the front side of the silicon [above the transistors]. Those two types of interconnect have different functions and different requirements, but they have had to exist in a compromise until now. Backside power moves the power-delivery interconnects to beneath the silicon, essentially turning the die into an active transistor layer that is sandwiched between two interconnect stacks, each stack having a different functionality.

Will transistors and interconnects still have to keep scaling in CMOS 2.0?

Ryckaert: Yes, because somewhere in that stack, you will still have a layer that still needs more transistors per unit area. But now, because you have removed all the other constraints that it once had, you are letting that layer nicely scale with the technology that is perfectly suited for it. I see fascinating times ahead.

This article appears in the March print issue as "5 Questions for Julien Ryckaert."
