Three things to know about how the US Congress might regulate AI
This article is from The Technocrat, MIT Technology Review's weekly tech policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.
Last week, Senate majority leader Chuck Schumer (a Democrat from New York) announced his grand strategy for AI policymaking in a speech in Washington, DC, ushering in what might be a new era for US tech policy. He outlined some key principles for AI regulation and argued that Congress ought to introduce new laws quickly.
Schumer's plan is a culmination of many other, smaller policy actions. On June 14, Senators Josh Hawley (a Republican from Missouri) and Richard Blumenthal (a Democrat from Connecticut) introduced a bill that would exclude generative AI from Section 230 (the law that shields online platforms from liability for the content their users create). Last Thursday, the House science committee hosted a handful of AI companies to ask questions about the technology and the various risks and benefits it poses. House Democrats Ted Lieu and Anna Eshoo, with Republican Ken Buck, proposed a National AI Commission to manage AI policy, and a bipartisan group of senators suggested creating a federal office to encourage, among other things, competition with China.
Though this flurry of activity is noteworthy, US lawmakers are not actually starting from scratch on AI policy. "You're seeing a bunch of offices develop individual takes on specific parts of AI policy, mostly that fall within some attachment to their preexisting issues," says Alex Engler, a fellow at the Brookings Institution. Individual agencies like the FTC, the Department of Commerce, and the US Copyright Office have been quick to respond to the AI craze of the last six months, issuing policy statements, guidelines, and warnings about generative AI in particular.
Of course, we never really know whether talk means action when it comes to Congress. However, US lawmakers' thinking about AI reflects some emerging principles. Here are three key themes in all this chatter that you should know to help you understand where US AI legislation could be going.
- The US is home to Silicon Valley and prides itself on protecting innovation. Many of the biggest AI companies are American companies, and Congress isn't going to let you, or the EU, forget that! Schumer called innovation the "north star" of US AI strategy, meaning regulators will probably be calling on tech CEOs to ask how they'd like to be regulated. It's going to be interesting watching the tech lobby at work here. Some of this language arose in response to the latest regulations from the European Union, which some tech companies and critics say will stifle innovation.
- Technology, and AI in particular, ought to be "aligned with democratic values." We're hearing this from top officials like Schumer and President Biden. The subtext here is the narrative that US AI companies are different from Chinese AI companies. (New guidelines in China mandate that outputs of generative AI must "reflect communist values.") The US is going to try to package its AI regulation in a way that maintains the existing advantage over the Chinese tech industry, while also ramping up its production and control of the chips that power AI systems and continuing its escalating trade war.
- One big question: what happens to Section 230. A giant unanswered question for AI regulation in the US is whether we will or won't see Section 230 reform. Section 230 is a 1990s internet law in the US that shields tech companies from being sued over the content on their platforms. But should tech companies have that same 'get out of jail free' pass for AI-generated content? This is a big question, and answering it would require that tech companies identify and label AI-made text and images, which is a massive undertaking. Given that the Supreme Court recently declined to rule on Section 230, the debate has likely been pushed back down to Congress. Whenever legislators decide if and how the law should be reformed, it could have a huge impact on the AI landscape.
So where is this going? Well, nowhere in the short term, as politicians skip off for their summer break. But starting this fall, Schumer plans to kick off invite-only discussion groups in Congress to look at particular parts of AI.
In the meantime, Engler says we might hear some discussions about the banning of certain applications of AI, like sentiment analysis or facial recognition, echoing parts of the EU regulation. Lawmakers could also try to revive existing proposals for comprehensive tech legislation, such as the Algorithmic Accountability Act.
For now, all eyes are on Schumer's big swing. "The idea is to come up with something so comprehensive and do it so fast. I expect there will be a pretty dramatic amount of attention," says Engler.
What else I'm reading
- Everyone is talking about "Bidenomics," meaning the current president's specific brand of economic policy. Tech is at the core of Bidenomics, with billions upon billions of dollars being poured into the industry in the US. For a glimpse of what that means on the ground, it's well worth reading this story from the Atlantic about a new semiconductor factory coming to Syracuse.
- AI detection tools try to identify whether text or imagery online was made by AI or by a human. But there's a problem: they don't work very well. Journalists at the New York Times messed around with various tools and ranked them according to their performance. What they found makes for sobering reading.
- Google's ad business is having a tough week. New research published by the Wall Street Journal found that around 80% of Google ad placements appear to break the company's own policies, which Google disputes.
We may be more likely to believe disinformation generated by AI, according to new research covered by my colleague Rhiannon Williams. Researchers from the University of Zurich found that people were 3% less likely to identify inaccurate tweets created by AI than those written by humans.
It's only one study, but if it's backed up by further research, it's a worrying finding. As Rhiannon writes, "The generative AI boom puts powerful, accessible AI tools in the hands of everyone, including bad actors. Models like GPT-3 can generate incorrect text that appears convincing, which could be used to generate false narratives quickly and cheaply for conspiracy theorists and disinformation campaigns."