
AI Wrapped: The 14 AI terms you couldn’t avoid in 2025

by
Caiwei Chen, Will Douglas Heaven, Michelle Kim, James O'Donnell, and Rhiannon Williams
from MIT Technology Review

If the past 12 months have taught us anything, it's that the AI hype train is showing no signs of slowing. It's hard to believe that at the beginning of the year, DeepSeek had yet to turn the entire industry on its head, Meta was better known for trying (and failing) to make the metaverse cool than for its relentless quest to dominate superintelligence, and vibe coding wasn't a thing.

If that's left you feeling a little confused, fear not. As we near the end of 2025, our writers have taken a look back over the AI terms that dominated the year, for better or worse.

Make sure you take the time to brace yourself for what promises to be another bonkers year.

-Rhiannon Williams

1. Superintelligence

As long as people have been hyping AI, they have been coming up with names for a future, ultra-powerful form of the technology that could bring about utopian or dystopian consequences for humanity. "Superintelligence" is the latest hot term. Meta announced in July that it would form an AI team to pursue superintelligence, and it was reportedly offering nine-figure compensation packages to lure AI experts away from its competitors.

In December, Microsoft's head of AI followed suit, saying the company would spend big sums, perhaps hundreds of billions, on the pursuit of superintelligence. If you think superintelligence is as vaguely defined as artificial general intelligence, or AGI, you'd be right! While it's conceivable that these sorts of technologies will be feasible in humanity's long run, the question is really when, and whether today's AI is good enough to be treated as a stepping stone toward something like superintelligence. Not that that will stop the hype kings. -James O'Donnell

2. Vibe coding

Thirty years ago, Steve Jobs said everyone in America should learn how to program a computer. Today, people with zero knowledge of how to code can knock up an app, game, or website in no time at all thanks to vibe coding, a catch-all phrase coined by OpenAI cofounder Andrej Karpathy. To vibe-code, you simply prompt a generative AI coding assistant to create the digital object of your desire and accept pretty much everything it spits out. Will the result work? Possibly not. Will it be secure? Almost definitely not. But the technique's biggest champions aren't letting those minor details stand in their way. Also, it sounds fun! -Rhiannon Williams
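
For the curious, here is what the workflow boils down to: a minimal, deliberately reckless Python sketch, where generate_code is a hypothetical stand-in for whatever coding assistant you use. Running unreviewed model output like this is exactly the security problem skeptics point to.

```python
# Vibe coding, distilled: ask a model for code, then run whatever comes back.
# generate_code is a placeholder; swap in a real coding-assistant client.
def generate_code(prompt: str) -> str:
    raise NotImplementedError("wire up your coding assistant of choice here")

code = generate_code("Build me a to-do list web app. Make it pretty.")
print(code)   # the closest thing to a review step in pure vibe coding
exec(code)    # accept everything it spits out; never do this outside a sandbox
```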

3. Chatbot psychosis

One of the biggest AI stories of the past year has been how prolonged interactions with chatbots can cause vulnerable people to experience delusions and, in some extreme cases, can either cause or worsen psychosis. Although "chatbot psychosis" is not a recognized medical term, researchers are paying close attention to the growing anecdotal evidence from users who say it has happened to them or someone they know. Sadly, the increasing number of lawsuits filed against AI companies by the families of people who died following their conversations with chatbots demonstrates the technology's potentially deadly consequences. -Rhiannon Williams

4. Reasoning

Few things kept the AI hype train going this year more than so-called reasoning models, LLMs that can break down a problem into multiple steps and work through them one by one. OpenAI released its first reasoning models, o1 and o3, a year ago.

A month later, the Chinese firm DeepSeek took everyone by surprise with a very fast follow, putting out R1, the first open-source reasoning model. In no time, reasoning models became the industry standard: All major mass-market chatbots now come in flavors backed by this tech. Reasoning models have pushed the envelope of what LLMs can do, matching top human performances in prestigious math and coding competitions. On the flip side, all the buzz about LLMs that could "reason" reignited old debates about how smart LLMs really are and how they really work. Like "artificial intelligence" itself, "reasoning" is technical jargon dressed up with marketing sparkle. Choo choo! -Will Douglas Heaven
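
To make the "multiple steps" idea concrete, here is a rough Python sketch of chain-of-thought-style prompting. It only approximates from the outside what reasoning models now do natively, and call_llm is a hypothetical placeholder for your model client of choice.

```python
# A sketch of step-by-step "reasoning" via prompting. Dedicated reasoning
# models generate intermediate steps on their own; here we simply ask for
# them. call_llm is a placeholder for any chat-model client.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("swap in your model client here")

def solve_step_by_step(problem: str) -> str:
    prompt = (
        "Work through the problem in numbered steps, then give the result "
        "on a final line starting with 'Answer:'.\n\n"
        f"Problem: {problem}"
    )
    response = call_llm(prompt)
    # The numbered steps are the "reasoning"; we return only the conclusion.
    for line in reversed(response.splitlines()):
        if line.strip().startswith("Answer:"):
            return line.strip().removeprefix("Answer:").strip()
    return response  # no marker found: fall back to the raw response
```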

5. World models

For all their uncanny facility with language, LLMs have very little common sense. Put simply, they don't have any grounding in how the world works. Book learners in the most literal sense, LLMs can wax lyrical about everything under the sun and then fall flat with a howler about how many elephants you could fit into an Olympic swimming pool (exactly one, according to one of Google DeepMind's LLMs).

World models, a broad church encompassing various technologies, aim to give AI some basic common sense about how stuff in the world actually fits together. In their most vivid form, world models like Google DeepMind's Genie 3 and Marble, the much-anticipated new tech from Fei-Fei Li's startup World Labs, can generate detailed and realistic virtual worlds for robots to train in and more. Yann LeCun, Meta's former chief scientist, is also working on world models. He has spent years trying to give AI a sense of how the world works by training models to predict what happens next in videos. This year he quit Meta to focus on this approach at a new startup called Advanced Machine Intelligence Labs. If all goes well, world models could be the next big thing. -Will Douglas Heaven

6. Hyperscalers

Have you heard about all the people saying no thanks, we actually don't want a giant data center plopped in our backyard? The data centers in question, which tech companies want to build everywhere, including space, are often referred to as hyperscale data centers, or simply hyperscalers: massive buildings purpose-built for AI operations and used by the likes of OpenAI and Google to build bigger and more powerful AI models. Inside, the world's best chips hum away training and fine-tuning models, and the buildings themselves are modular, designed to grow as needs grow.

It's been a big year for hyperscalers. OpenAI announced, alongside President Donald Trump, its Stargate project, a $500 billion joint venture to pepper the country with the largest data centers ever built. But that leaves almost everyone else asking: What exactly do we get out of it? Consumers worry the new data centers will raise their power bills. Such buildings generally struggle to run on renewable energy. And they don't tend to create all that many jobs. But hey, maybe these massive, windowless buildings could at least give a moody, sci-fi vibe to your community. -James O'Donnell

7. Bubble

The lofty promises of AI are levitating the economy. AI companies are raising eye-popping sums of money and watching their valuations soar into the stratosphere. They're pouring hundreds of billions of dollars into chips and data centers, financed increasingly by debt and eyebrow-raising circular deals. Meanwhile, the companies leading the gold rush, like OpenAI and Anthropic, might not turn a profit for years, if ever. Investors are betting big that AI will usher in a new era of riches, yet no one knows how transformative the technology will actually be.

Most organizations using AI aren't yet seeing the payoff, and AI work slop is everywhere. There's scientific uncertainty about whether scaling LLMs will deliver superintelligence or whether new breakthroughs need to pave the way. But unlike their predecessors in the dot-com bubble, AI companies are showing strong revenue growth, and some are even deep-pocketed tech titans like Microsoft, Google, and Meta. Will the manic dream ever burst? -Michelle Kim

8. Agentic

This year, AI agents were everywhere. Every new feature announcement, model drop, or security report throughout 2025 was peppered with mentions of them, even though plenty of AI companies and experts disagree on exactly what counts as truly "agentic," a vague term if ever there was one. No matter that it's virtually impossible to guarantee that an AI acting on your behalf out on the wide web will always do exactly what it's supposed to do; it seems agentic AI is here to stay for the foreseeable future. Want to sell something? Call it agentic! -Rhiannon Williams

9. Distillation

Early this year, DeepSeek unveiled R1, an open-source reasoning model that matches top Western models at a fraction of the cost. Its launch freaked out Silicon Valley, as many suddenly realized for the first time that huge scale and resources were not necessarily the key to high-level AI models. Nvidia stock plunged 17% the day after R1 was released.

The key to R1's success was distillation, a technique that makes AI models more efficient. It works by getting a bigger model to tutor a smaller one: You run the teacher model on a lot of examples, record its answers, and then train the student model to reproduce those responses as closely as possible, so that it ends up with a compressed version of the teacher's knowledge. -Caiwei Chen
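
For the technically inclined, here is a minimal PyTorch sketch of the classic soft-target recipe for distillation. It's one standard way to do what's described above, not DeepSeek's exact pipeline (which reportedly fine-tuned smaller models on R1-generated outputs); the teacher, student, and data loader are placeholders.

```python
# Classic knowledge distillation: nudge a small "student" model's output
# distribution toward a large, frozen "teacher" model's.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # A temperature > 1 softens both distributions, exposing the teacher's
    # relative confidence across all outputs, not just its top pick.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence measures how far the student's distribution is from the
    # teacher's; the T^2 factor keeps gradients comparable across temperatures.
    return F.kl_div(log_probs, soft_targets, reduction="batchmean") * temperature**2

# Training-loop sketch (teacher, student, loader, optimizer assumed defined):
# for inputs in loader:
#     with torch.no_grad():                 # the teacher stays frozen
#         teacher_logits = teacher(inputs)
#     loss = distillation_loss(student(inputs), teacher_logits)
#     loss.backward(); optimizer.step(); optimizer.zero_grad()
```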

10. Sycophancy

As people across the world spend increasing amounts of time interacting with chatbots like ChatGPT, chatbot makers are struggling to work out the kind of tone and "personality" the models should adopt. Back in April, OpenAI admitted it had struck the wrong balance between helpful and sniveling, saying a new update had rendered GPT-4o too sycophantic. Having a model suck up to you isn't just irritating; it can mislead users by reinforcing their incorrect beliefs and spreading misinformation. So consider this your reminder to take everything (yes, everything) LLMs produce with a pinch of salt. -Rhiannon Williams

11. Slop

If there is one AI-related term that has fully escaped the nerd enclosures and entered public consciousness, it's "slop." The word itself is old (think pig feed), but "slop" is now commonly used to refer to low-effort, mass-produced content generated by AI, often optimized for online traffic. A lot of people even use it as a shorthand for any AI-generated content. It has felt inescapable in the past year: We have been marinated in it, from fake biographies to shrimp Jesus images to surreal human-animal hybrid videos.

But people are also having fun with it. The term's sardonic flexibility has made it easy for internet users to slap it on all kinds of words as a suffix to describe anything that lacks substance and is absurdly mediocre: think "work slop" or "friend slop." As the hype cycle resets, "slop" marks a cultural reckoning about what we trust, what we value as creative labor, and what it means to be surrounded by stuff that was made for engagement rather than expression. -Caiwei Chen

12. Physical intelligence

Did you come across the hypnotizing video from earlier this year of a humanoid robot putting away dishes in a bleak, gray-scale kitchen? That pretty much embodies physical intelligence: the idea that advancements in AI can help robots move more capably around the physical world.

It's true that robots have been able to learn new tasks faster than ever before, everywhere from operating rooms to warehouses. Self-driving-car companies have seen improvements in how they simulate the roads, too. That said, it's still wise to be skeptical that AI has revolutionized the field. Consider, for example, that many robots advertised as butlers in your home are doing the majority of their tasks thanks to remote operators in the Philippines.

The road ahead for physical intelligence is also sure to be weird. Large language models train on text, which is abundant on the internet, but robots learn more from videos of people doing things. That's why the robot company Figure suggested in September that it would pay people to film themselves in their apartments doing chores. Would you sign up? -James O'Donnell

13. Fair use

AI models are trained by devouring millions of words and images across the internet, including copyrighted work by artists and writers. AI companies argue this is "fair use," a legal doctrine that lets you use copyrighted material without permission if you transform it into something new that doesn't compete with the original. Courts are starting to weigh in. In June, Anthropic's training of its AI model Claude on a library of books was ruled fair use because the technology was "exceedingly transformative."

That same month, Meta scored a similar win, but only because the authors couldn't show that the company's literary buffet cut into their paychecks. As copyright battles brew, some creators are cashing in on the feast. In December, Disney signed a splashy deal with OpenAI to let users of Sora, the AI video platform, generate videos featuring more than 200 characters from Disney's franchises. Meanwhile, governments around the world are rewriting copyright rules for the content-guzzling machines. Is training AI on copyrighted work fair use? As with any billion-dollar legal question, it depends. -Michelle Kim

14. GEO

Just a few short years ago, an entire industry was built around helping websites rank highly in search results (okay, just in Google). Now search engine optimization (SEO) is giving way to GEO, generative engine optimization, as the AI boom forces brands and businesses to scramble to maximize their visibility in AI, whether that's in AI-enhanced search results like Google's AI Overviews or within responses from LLMs. It's no wonder they're freaked out. We already know that news companies have experienced a colossal drop in search-driven web traffic, and AI companies are working on ways to cut out the middleman and let users visit sites directly from within their platforms. It's time to adapt or die. -Rhiannon Williams
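
What does GEO look like in practice? Tactics are still informal, but one emerging convention is the proposed llms.txt file, a markdown file placed at a site's root that tells visiting LLMs what the site is and which pages matter most. A hypothetical example for an imaginary site (the domain and pages are made up):

```
# Acme Widgets
> Acme sells modular widgets and publishes free repair guides.

## Key pages
- [Product catalog](https://acme.example/catalog): full lineup with specs
- [Repair guides](https://acme.example/guides): step-by-step fixes
```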
