
Can Software Performance Engineering Save Us From the End of Moore’s Law?

by
Charles E. Leiserson
from IEEE Spectrum

This is a guest post. The views expressed here are solely those of the authors and do not represent positions of IEEE Spectrum or the IEEE.

In the early years of aviation, one might have been forgiven for envisioning a future of ever-faster planes. Speeds had grown from 50 kilometers per hour for the Wright brothers in 1903 to roughly 1,000 kilometers per hour for a Boeing 707 in the 1960s. But since then, commercial aircraft speeds have stagnated because higher speeds make planes so energy-inefficient.

Today's computers suffer from a similar issue. For decades, our ability to miniaturize components allowed us to double the number of transistors on a silicon chip every two years or so. This phenomenon, known as Moore's Law (named after Intel co-founder Gordon Moore), made computing exponentially cheaper and more powerful. But we're now reaching the limits of miniaturization, and so computing performance is stagnating.

This is a problem. Had Moore's Law ended 20 years ago, the processors in today's computers would be roughly 1,000 times less powerful, and we wouldn't have iPhones, Alexa, or movie streaming. What innovations might we miss out on 20 years from now if we can't continue to improve computing performance?

In recent years, researchers like us have been scratching our heads about what to do next. Some hope that the answer is new technologies like quantum computing, carbon nanotubes, or photonic computing. But after several years studying the situation with other experts at MIT, we believe those solutions are uncertain and could be many years in the making. In the interim, we shouldn't count on a complete reinvention of the computer chip; we should re-code the software that runs on it.

As we outline in an article this week in Science, for years programmers haven't had to worry about making code run faster, because Moore's Law did that for them. And so they took shortcuts, prioritizing their ability to write code quickly over the ability of computers to run that code as fast as possible.

For example, many developers use techniques like "reduction": taking code that worked on problem A and using it to solve problem B, even if that is an inefficient way to do it. Suppose you want to build a Siri-like system to recognize yes-or-no voice commands. Instead of building a custom program to do that, you might be tempted to use an existing program that recognizes a wide range of words and tweak it to respond only to yes-or-no answers.

The good news is that this approach helps you write code faster. The bad news: it sometimes yields a staggering amount of inefficiency. And inefficiencies compound quickly. If a single reduction is 80 percent as efficient as a custom solution, and you write a program with twenty layers of reduction, the code will be roughly 100 times less efficient than it could be.
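To see how quickly that compounding bites, here is a back-of-the-envelope calculation. The 80 percent per-layer efficiency and the twenty layers are the article's illustrative numbers, not measurements:

```python
# Back-of-the-envelope: how inefficiency compounds across layers of reduction.
# The 80 percent per-layer efficiency and the twenty layers are the article's
# illustrative numbers, not measured values.
per_layer_efficiency = 0.8
layers = 20

overall_efficiency = per_layer_efficiency ** layers  # about 0.0115
slowdown = 1 / overall_efficiency                    # about 87x

print(f"Overall efficiency: {overall_efficiency:.4f}")
print(f"Roughly {slowdown:.0f}x less efficient than a custom solution")
```

That works out to a factor of roughly 87, on the order of the hundredfold slowdown described above.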

This is no mere thought experiment. Further advances in fields like machine learning, robotics, and virtual reality will require huge amounts of computational power. If we want to harness the full potential of these technologies, we have to make changes. As our Science article suggests, there are opportunities in developing new algorithms and streamlining computer hardware. But for most companies, the most practical way to get more computing performance is through software performance engineering: making software more efficient.

One performance engineering strategy is to "parallelize" code. Most existing software has been designed using decades-old models that assume processors can only perform one operation at a time. That's inefficient because modern processors can do many calculations at the same time by using multiple cores on each chip, and there is parallelism built into each core as well. Strategies like parallel computing can allow some complex tasks to be completed hundreds of times faster and in a much more energy-efficient way.
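As a minimal sketch of the idea (not the authors' code), the snippet below splits an independent, CPU-bound computation across cores using Python's standard library; the prime-counting task is purely illustrative:

```python
# Minimal sketch: spreading an independent, CPU-bound computation across cores.
# The prime-counting task is illustrative only.
from concurrent.futures import ProcessPoolExecutor

def count_primes(bounds):
    """Count primes in [lo, hi) by trial division (deliberately CPU-bound)."""
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    # Split the range into chunks, one per worker process.
    chunks = [(i, i + 250_000) for i in range(0, 1_000_000, 250_000)]
    with ProcessPoolExecutor() as pool:  # defaults to one worker per CPU core
        total = sum(pool.map(count_primes, chunks))
    print(f"Primes below 1,000,000: {total}")  # prints 78498
```

Because the chunks run concurrently on separate cores, the wall-clock time can shrink roughly in proportion to the number of cores available.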

While software performance engineering may be the best path forward, it won't be an easy one. Updating existing programs to run more quickly is a huge undertaking, especially with a shortage of coders trained in parallel programming and other performance-engineering strategies. Moreover, leaders of forward-looking companies must fight the institutional inertia of doing things the way they've always been done.

Nimble tech giants like Google and Amazon have already gotten this memo. The massive scale of their data centers means that even small improvements in software performance can yield big financial returns. Where these companies have led, the rest of the world must follow. For application developers, efficiency can no longer be ignored when rolling out new features and functionality. For companies, it may mean replacing long-standing software systems that are just barely eking along.

Performance engineering will be riskier than Moore's Law ever was. Companies may not know the benefits of their efforts until after they've invested substantial programmer time. And speed-ups may be sporadic, uneven, and unpredictable. But as we reach the physical limits of microprocessors, focusing on software performance engineering seems like the best option for most programmers to get more out of their computers.

The end of Moore's Law doesn't mean your laptop is about to grind to a halt. But if we want to make real progress in fields like artificial intelligence and robotics, we must get more creative and spend the time needed to performance engineer our software.

About the Authors:

Charles E. Leiserson is a professor of computer science and engineering at MIT and an IEEE Fellow; Tao B. Schardl and Neil C. Thompson are research scientists at MIT.
