Four lessons from 2023 that tell us where AI regulation is going
This article is from The Technocrat, MIT Technology Review's weekly tech policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.
In the US and elsewhere, 2023 was a blockbuster year for artificial intelligence and AI regulation, and this next year is guaranteed to bring even more action. On January 5, I published a story with my colleagues Melissa Heikkilä and Zeyi Yang that lays out what we should expect in the coming 12 months in AI policy around the world.
Most broadly, we are likely to see the strategies that emerged last year continue, expand, and begin to be implemented. For example, following President Biden's executive order, various US government agencies may outline new best practices while largely leaving AI companies to police themselves. And across the pond, companies and regulators will begin to grapple with Europe's AI Act and its risk-based approach. It certainly won't be seamless, and there's bound to be a lot of discussion about how these new laws and policies actually work in practice.
While writing this piece, I took some time to reflect on how we got here. I think stories about technologies' rise are worthy of reflective examination; they can help us better understand what might happen next. And as a reporter, I've seen patterns emerge in these stories over time, whether it's blockchain, social media, self-driving cars, or any other fast-developing, world-changing innovation. The tech usually moves much faster than regulation, with lawmakers increasingly challenged to stay up to speed with the technology itself while devising new ways to craft sustainable, future-proof laws.
In thinking about the US specifically, I'm not sure what we're experiencing so far is unprecedented, though certainly the speed with which generative AI has launched into our lives has been surprising. Last year, AI policy was marked by Big Tech power moves, congressional upskilling and bipartisanship (at least in this space!), geopolitical competition, and rapid deployment of nascent technologies on the fly.
So what did we learn? And what is around the corner? There's so much to try to stay on top of in terms of policy, but I've broken down what you need to know into four takeaways.
1. The US isn't planning on putting the screws to Big Tech. But lawmakers do plan to engage the AI industry.
OpenAI's CEO, Sam Altman, first started his tour de Congress last May, six months after the bombshell launch of ChatGPT. He met with lawmakers at private dinners and testified about the existential threats his own technology could pose to humanity. In a lot of ways, this set the tone for how we've been talking about AI in the US, and it was followed by Biden's speech on AI, congressional AI insight forums to help lawmakers get up to speed, and the release of more large language models. (Notably, the guest list for these AI insight forums skewed heavily toward industry.)
As US lawmakers began to really take on AI, it became a rare (if small) area of bipartisanship on the Hill, with legislators from both parties calling for more guardrails around the tech. At the same time, activity at the state level and in the courts increased, primarily around user protections like age verification and content moderation.
As I wrote in the story, "Through this activity, a US flavor of AI policy began to emerge: one that's friendly to the AI industry, with an emphasis on best practices, a reliance on different agencies to craft their own rules, and a nuanced approach of regulating each sector of the economy differently." The culmination of all this was Biden's executive order at the end of October, which outlined a distributed approach to AI policy, in which different agencies craft their own rules. It (perhaps unsurprisingly) will rely quite heavily on buy-in from AI companies.
Next year, we can expect some new regulations to build on all this. As we wrote in our story today, Congress is looking to draft new laws and will consider existing bills on recommendation algorithms, data privacy, and transparency that will complement Biden's executive order. States, too, will be considering their own regulations.
2. It's not going to be easy to grapple with the harms and risks posed by AI.
While existential risk got the biggest headlines last year, human rights advocates and researchers frequently called out the harm that AI already on the market is causing right now, like perpetuating inaccuracy and bias. They warned that hyping existential risks would pull focus from dangerous realities, like medical AIs that disproportionately misdiagnose health issues in Black and brown patients.
As debates over how concerned we should be about the coming robot wars infiltrated dinner table chats and classrooms alike, agencies and local regulators started making declarations and issuing statements about AI, such as the joint statement in April from four federal agencies, including the FTC and CFPB, which warned that AI has the potential to "perpetuate unlawful bias, automate unlawful discrimination, and produce other harmful outcomes." Just how those outcomes will be monitored or prevented, however, is far from clear at this point.
As for the tech industry itself, players are likely to continue to squabble with lawmakers over the riskiness of AI systems. The eleventh-hour negotiations over the EU AI Act were hung up on a fight over foundation models, and this debate will likely continue in different arenas this year, as will debates over what uses of AI should be considered high risk and who is responsible for managing those risks.
3. AI is the next frontier for techno-nationalism and global competition.
This past year also made clear that the US approach to AI is shaped by the desire to achieve and maintain a technological advantage over China. Meanwhile, the two countries continue to escalate their trade war over semiconductors, which provide the hardware necessary for AI models.
Beyond keeping an edge in technological prowess, the US wants to be a leader on tech regulation and compete with a regulation-happy Europe. Biden's executive order strategically dropped just days before the UK's AI Safety Summit and before the final negotiations over the EU AI Act were set to take place.
4. Watch closely what happens in the US election and those around the world.
Of course, the US will have a big election in 2024, but so will many other countries. In my last Technocrat of 2023, we talked about how generative AI and other media technologies have created acute concern about an onslaught of deceitful and inaccurate information. I'm particularly interested in watching how social media platforms and politicians alike address the new threat of political disinformation as a result of generative AI. As I wrote in a story a few months ago, researchers are already seeing a negative impact.
One thing, at least, is certain: the rapid release of generative AI to users in 2023 will affect the 2024 elections, likely in dramatic and unprecedented ways. It's hard to predict what may happen given how rapidly the technology is changing and how quickly users are pushing it in different and unexpected directions. So even if governments and social media companies, among others, do try to strengthen safeguards or create new policies, the way generative AI is actually used in 2024 will be critical in shaping future regulations.
No matter what, it's definitely going to be an interesting ride!
What I am reading this week
- The New York Times is suing OpenAI on the grounds that the company used Times articles to train ChatGPT. It's one of the biggest stories of the past few weeks that you may have missed, and I was particularly interested in the similarity between some of the ChatGPT outputs and the NYT articles, as documented in the filing.
- Researchers at the Stanford Internet Observatory found thousands of examples of child sexual abuse material in one of the major data sets used to train generative AI. That data set has now been temporarily taken down.
- Smart cars are being weaponized by abusive partners as tools for surveillance and tracking, according to a new story by Kashmir Hill in the New York Times. In a world where almost everything has the ability to produce geolocation data, I'm afraid these sorts of stories will become more and more common.
My colleagues Melissa Heikkilä and Will Douglas Heaven published a forward-thinking piece about what's to come for AI in 2024, and I figured you all would want a taste! They predict a year of customized chatbots, new advances in generative-AI video, AI-generated misinformation during elections, and multitasking robots. Definitely worth the read!