How AI is reinventing what computers are
Fall 2021: the season of pumpkins, pecan pies, and peachy new phones. Every year, right on cue, Apple, Samsung, Google, and others drop their latest releases. These fixtures in the consumer tech calendar no longer inspire the surprise and wonder of those heady early days. But behind all the marketing glitz, there's something remarkable going on.
Google's latest offering, the Pixel 6, is the first phone to have a separate chip dedicated to AI that sits alongside its standard processor. And the chip that runs the iPhone has for the last couple of years contained what Apple calls a "neural engine," also dedicated to AI. Both chips are better suited to the types of computations involved in training and running machine-learning models on our devices, such as the AI that powers your camera. Almost without our noticing, AI has become part of our day-to-day lives. And it's changing how we think about computing.
What does that mean? Well, computers haven't changed much in 40 or 50 years. They're smaller and faster, but they're still boxes with processors that run instructions from humans. AI changes that on at least three fronts: how computers are made, how they're programmed, and how they're used. Ultimately, it will change what they are for.
"The core of computing is changing from number-crunching to decision-making," says Pradeep Dubey, director of the parallel computing lab at Intel. Or, as MIT CSAIL director Daniela Rus puts it, AI is freeing computers from their boxes.
More haste, less speed
The first change concerns how computers, and the chips that control them, are made. Traditional computing gains came as machines got faster at carrying out one calculation after another. For decades the world benefited from chip speed-ups that came with metronomic regularity as chipmakers kept up with Moore's Law.
But the deep-learning models that make current AI applications work require a different approach: they need vast numbers of less precise calculations to be carried out all at the same time. That means a new type of chip is required: one that can move data around as quickly as possible, making sure it's available when and where it's needed. When deep learning exploded onto the scene a decade or so ago, there were already specialty computer chips available that were pretty good at this: graphics processing units, or GPUs, which were designed to display an entire screenful of pixels dozens of times a second.
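To make that contrast concrete, here is a minimal sketch in Python with NumPy; the pixel-brightening task and the array sizes are illustrative choices of mine, not any chipmaker's code. It writes the same simple operation two ways: one value at a time, and as a single data-parallel expression of the kind a GPU applies to millions of values at essentially the same moment.

```python
import numpy as np

rng = np.random.default_rng(0)
# A small stand-in image: 108 x 192 pixels with three color channels each.
pixels = rng.random((108, 192, 3), dtype=np.float32)

# Sequential style: one calculation after another, the way CPUs were
# traditionally made faster.
def brighten_one_by_one(image, factor):
    out = image.copy()
    height, width, channels = out.shape
    for i in range(height):
        for j in range(width):
            for k in range(channels):
                out[i, j, k] = min(out[i, j, k] * factor, 1.0)
    return out

# Data-parallel style: one expression over every value at once. This is the
# shape of work GPUs were built for, and it is also the shape of the
# arithmetic inside a neural network.
def brighten_all_at_once(image, factor):
    return np.minimum(image * factor, 1.0)

assert np.allclose(brighten_one_by_one(pixels, 1.2),
                   brighten_all_at_once(pixels, 1.2))
```

Both functions compute the same answer; the difference is that the second hands the hardware a huge batch of independent, identical calculations it can run in parallel.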
Now chipmakers such as Intel and Arm, along with Nvidia, which supplied many of the first GPUs, are pivoting to make hardware tailored specifically for AI. Google and Facebook are also forcing their way into this industry for the first time, in a race to find an AI edge through hardware.
For example, the chip inside the Pixel 6 is a new mobile version of Google's tensor processing unit, or TPU. Unlike traditional chips, which are geared toward ultrafast, precise calculations, TPUs are designed for the high-volume but low-precision calculations required by neural networks. Google has used these chips in-house since 2015: they process people's photos and natural-language search queries. Google's sister company DeepMind uses them to train its AIs.
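As a rough illustration of that trade-off, the sketch below runs the same matrix multiplication in 32-bit and in 16-bit floating point. It is plain NumPy on a CPU, not TPU code, and the matrix sizes are arbitrary; real TPUs use their own low-precision formats in hardware, but the principle is the same.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((256, 256)).astype(np.float32)
b = rng.standard_normal((256, 256)).astype(np.float32)

# The same matrix multiplication in full precision and in half precision.
exact = a @ b
approx = (a.astype(np.float16) @ b.astype(np.float16)).astype(np.float32)

relative_error = np.abs(exact - approx).mean() / np.abs(exact).mean()
print(f"average relative error from 16-bit arithmetic: {relative_error:.3%}")
```

The two answers differ only slightly, which a neural network can usually tolerate, while the 16-bit numbers take half the memory and half the bandwidth to move around; moving data is exactly the bottleneck these chips are designed around.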
In the last couple of years, Google has made TPUs available to other companies, and these chips, as well as similar ones being developed by others, are becoming the default inside the world's data centers.
AI is even helping to design its own computing infrastructure. In 2020, Google used a reinforcement-learning algorithm (a type of AI that learns how to solve a task through trial and error) to design the layout of a new TPU. The AI eventually came up with strange new designs that no human would think of, but they worked. This kind of AI could one day develop better, more efficient chips.
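The trial-and-error idea itself is simple enough to sketch. The toy below is not Google's chip-layout system; it is a hypothetical choice among three design options, invented for illustration, in which an agent learns which option pays off best purely from the rewards it happens to observe.

```python
import random

random.seed(0)

# Hidden payoff probabilities for three hypothetical design choices.
true_payoffs = [0.2, 0.5, 0.8]
estimates = [0.0, 0.0, 0.0]
counts = [0, 0, 0]

for step in range(2000):
    # Mostly exploit the best-looking option so far, but sometimes explore
    # at random: the "trial" in trial and error.
    if random.random() < 0.1:
        choice = random.randrange(len(true_payoffs))
    else:
        choice = max(range(len(estimates)), key=lambda i: estimates[i])

    # The environment answers with a reward; the agent never sees the
    # hidden probabilities directly.
    reward = 1.0 if random.random() < true_payoffs[choice] else 0.0

    # Update the running average reward for the option just tried.
    counts[choice] += 1
    estimates[choice] += (reward - estimates[choice]) / counts[choice]

print("learned payoff estimates:", [round(e, 2) for e in estimates])
```

With enough trials the estimates approach the hidden payoffs and the agent settles on the best option, having been told nothing about the problem except which attempts worked.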
Show, don't tell
The second change concerns how computers are told what to do. "For the past 40 years we have been programming computers; for the next 40 we will be training them," says Chris Bishop, head of Microsoft Research in the UK.
Traditionally, to get a computer to do something like recognize speech or identify objects in an image, programmers first had to come up with explicit rules for it to follow.
With machine learning, programmers no longer write rules. Instead, they create a neural network that learns those rules for itself. It's a fundamentally different way of thinking.
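A toy example makes the contrast concrete. In the sketch below, the task (classify a point by whether its two coordinates sum to more than 1) and the single-neuron model are my own illustrative choices, not anyone's production system: first the rule is written by hand, then a model recovers the same rule purely from labeled examples.

```python
import numpy as np

rng = np.random.default_rng(0)

# The old way: a programmer writes the rule explicitly.
def programmed_rule(x, y):
    return 1.0 if x + y > 1.0 else 0.0

# The new way: show a model labeled examples and let it find the rule itself.
points = rng.random((1000, 2))
labels = np.array([programmed_rule(x, y) for x, y in points])

weights = np.zeros(2)
bias = 0.0
for _ in range(500):  # plain gradient descent on a single logistic neuron
    predictions = 1.0 / (1.0 + np.exp(-(points @ weights + bias)))
    error = predictions - labels
    weights -= 0.5 * (points.T @ error) / len(points)
    bias -= 0.5 * error.mean()

learned = (1.0 / (1.0 + np.exp(-(points @ weights + bias))) > 0.5).astype(float)
print("agreement with the hand-written rule:", (learned == labels).mean())
```

The trained neuron ends up agreeing with the hand-written rule on nearly all of the points without anyone spelling that rule out; at the scale of speech or images, where nobody can write the rules down at all, that is the whole point.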
Examples of this are already commonplace: speech recognition and image identification are now standard features on smartphones. Other examples made headlines, as when AlphaZero taught itself to play Go better than humans. Similarly, AlphaFold cracked open a biology problem (working out how proteins fold) that people had struggled with for decades.
For Bishop, the next big breakthroughs are going to come in molecular simulation: training computers to manipulate the properties of matter, potentially making world-changing leaps in energy usage, food production, manufacturing, and medicine.
Breathless promises like this are made often. But deep learning does have a track record of surprising us. Two of its biggest leaps so far, getting computers to behave as if they understand language and to recognize what is in an image, are already changing how we use them.
Computer knows best
For decades, getting a computer to do something meant typing in a command, or at least clicking a button.
Machines no longer need a keyboard or a screen for people to interact with them. Anything can become a computer. Indeed, most household objects, from toothbrushes to light switches to doorbells, already come in a smart version. But as they proliferate, we are going to want to spend less time telling them what to do. They should be able to work out what we need without being told.
This is the shift from number-crunching to decision-making that Dubey sees as defining the new era of computing.
Rus wants us to embrace the cognitive and physical support on offer. She imagines computers that tell us things we need to know when we need to know them and intervene when we need a hand. "When I was a kid, one of my favorite movie [scenes] in the whole world was 'The Sorcerer's Apprentice,'" says Rus. "You know how Mickey summons the broom to help him tidy up? We won't need magic to make that happen."
We know how that scene ends. Mickey loses control of the broom and makes a big mess. Now that machines are interacting with people and integrating into the chaos of the wider world, everything becomes more uncertain. The computers are out of their boxes.