The economy is down, but AI is hot. Where do we go from here?
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
Oh man, it's brutal out there. One by one, the world's richest tech companies have announced massive layoffs. Just last week, Alphabet announced it was laying off 12,000 people. There have been bruising rounds of layoffs at Amazon, Meta, Microsoft, and Twitter, too, affecting not only individual AI researchers but entire AI teams.
It was heartbreaking to read over the weekend about how some Googlers in the US found out about the company's abrupt cull. Dan Russell, a research scientist who has worked on Google Search for over 17 years, wrote about how he went to the office at 4 a.m. to finish off some work, only to find that his entry badge no longer worked.
Economists predict the US economy may enter a recession this year amid a highly uncertain global economic outlook. Big tech companies have started to feel the squeeze.
In the past, economic downturns have shut off the funding taps for AI research. These periods are called "AI winters." But this time we're seeing something totally different. AI research is still extremely hot, and it's making big leaps in progress even as tech companies have started tightening their belts.
In fact, Big Tech is counting on AI to give it an edge.
AI research has swung violently in and out of fashion since the field was established in the late 1950s. There have been two AI winters: one in the 1970s and the other in the late 1980s to early 1990s. AI research has previously fallen victim to hype cycles of exaggerated expectations that it subsequently failed to live up to, says Peter Stone, a computer science professor at the University of Texas at Austin, who used to work on AI at AT&T Bell Labs (now known as Nokia Bell Labs) until 2002.
For decades, Bell Labs was considered the hot spot for innovation, and its researchers won several Nobel Prizes and Turing Awards, including awards for AI pioneers Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. The lab's resources were cut as management started pushing for more immediate returns based on incremental technological changes, and ultimately it failed to capitalize on the internet revolution of the early 2000s, Jon Gertner writes in his book The Idea Factory: Bell Labs and the Great Age of American Innovation.
The previous downturns happened after the hottest AI techniques of the day failed to show progress and were unreliable and difficult to run, says Stone. Government agencies in the US and the UK that had provided funding for AI research soon realized that this approach was a dead end and cut off funding.
Today, AI research is having its "main character" moment. There may be an economic downturn, but AI research is still exciting. "We are still continuing to see regular rollouts of systems which are pushing back the frontiers of what AI can do," says Michael Wooldridge, a computer science professor at the University of Oxford and author of the book A Brief History of AI.
This is a far cry from the field's reputation in the 1990s, when Wooldridge was finishing his PhD. AI was still seen as a weird, fringe pursuit; the wider tech sector viewed it in a similar way to how established medicine views homeopathy, he says.
Today's AI research boom has been fueled by neural networks, a technique that saw its big breakthrough in the 1980s and works by loosely simulating the way the human brain processes information. Back then, the technology hit a wall because the computers of the day weren't powerful enough to run the software. Today we have lots of data and extremely powerful computers, which makes the technique viable.
New breakthroughs, such as the chatbot ChatGPT and the text-to-image model Stable Diffusion, seem to come every few months. Technologies like ChatGPT are not fully explored yet, and both industry and academia are still working out how they can be useful, says Stone.
Instead of a full-blown AI winter, we are likely to see a drop in funding for longer-term AI research and more pressure to make money using the technology, says Wooldridge. Researchers in corporate labs will be under pressure to show that their research can be integrated into products and thus make money, he adds.
That's already happening. In light of the success of OpenAI's ChatGPT, Google has declared a "code red" threat situation for its core product, Search, and is looking to aggressively revamp Search with its own AI research.
Stone sees parallels to what happened at Bell Labs. If Big Tech's AI labs, which dominate the sector, turn away from deep, longer-term research and focus too much on shorter-term product development, exasperated AI researchers may leave for academia, and these big labs could lose their grip on innovation, he says.
That's not necessarily a bad thing. There are a lot of smart people looking for jobs at the moment. Venture capitalists are looking for new startups to invest in as crypto fizzles out, and generative AI has shown how the technology can be made into products.
This moment presents the AI sector with a once-in-a-generation opportunity to play around with the potential of new technology. Despite all the gloom around the layoffs, it's an exciting prospect.
Before you go... We've put together a brand new series of reports inspired by MIT Technology Review's marquee 10 Breakthrough Technologies. The first one, about how industrial design and engineering firms are using generative AI, is set to come out later this week. Sign up to get notified when it's available.
Deeper Learning
AI is bringing the internet to submerged Roman ruins
Over 2,000 years ago, Baiae was the most magnificent resort town on the Italian peninsula. Wealthy statesmen were drawn to its natural springs, building luxurious villas with heated spas and mosaic-tiled thermal pools. But over the centuries, volcanic activity submerged this playground for the Roman nobility, leaving half of it beneath the Mediterranean. Today it is a protected marine area and needs to be monitored for damage caused by divers and environmental factors. But communication underwater is extremely difficult.
Under the sea: Italian researchers think they've figured out a new way to bring the internet underwater: AI and algorithms, which adjust network protocols according to sea conditions and allow the signal to travel up to two kilometers. This could help researchers better study the effects of climate change on marine environments and monitor underwater volcanoes. AI research can be pretty abstract, but this is a nice, practical example of how the technology can be useful. Read more from Manuela Callari.
Bits and Bytes
How OpenAI used low-paid Kenyan workers to make ChatGPT less toxic
OpenAI used a Kenyan company called Sama to train its popular AI system, ChatGPT, to generate safer content. Low-paid workers sifted through endless amounts of graphic and violent content on topics such as child sexual abuse, bestiality, murder, suicide, torture, self-harm, and incest. This story is a good reminder of all the deeply unpleasant work humans have to do behind the scenes to make AI systems safe. (Time)
Inside CNET's AI-powered SEO money machine
Tech news site CNET has started using ChatGPT to write news articles. To absolutely nobody's surprise, the site has already had to issue corrections for factual errors in those articles. The Verge looked at why CNET decided to use AI to write stories, and it's a sad tale of what happens when private equity collides with journalism. (The Verge)
China could offer a model for deepfake regulation
Governments have been reluctant to regulate deepfakes over fears that such efforts may curtail free speech. The Chinese government, which isn't so troubled by that risk, thinks it has a solution. The country has adopted rules that require deepfakes to have the subject's consent and bear watermarks, for example. Other countries will be watching and taking notes. (The New York Times)
Nick Cave thinks a song written by ChatGPT in his style sucks
Perfection. No comments. Chef's kiss. (The Guardian)