What the US can learn from the role of AI in other elections
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
If it's not broken, don't fix it. That's the approach bad state actors seem to have taken when it comes to how they mess with elections around the world.
When the generative-AI boom first kicked off, one of the biggest concerns among pundits and experts was that hyperrealistic AI deepfakes could be used to influence elections. But new research from the Alan Turing Institute in the UK shows that those fears might have been overblown. AI-generated falsehoods and deepfakes seem to have had no effect on the results of elections in the UK and France, the European Parliament elections, or other elections around the world so far this year.
Instead of using generative AI to interfere in elections, state actors such as Russia are relying on well-established techniques, such as social bots that flood comment sections, to sow division and create confusion, says Sam Stockwell, the researcher who conducted the study. Read more about it from me here.
But one of the most consequential elections of the year is still ahead of us. In just over a month, Americans will head to the polls to choose Donald Trump or Kamala Harris as their next president. Are the Russians saving their GPUs for the US elections?
So far, that does not seem to be the case, says Stockwell, who has been monitoring viral AI disinformation around the US elections too. "Bad actors are still relying on these well-established methods that have been used for years, if not decades, around things such as social bot accounts that try to create the impression that pro-Russian policies are gaining traction among the US public," he says.
And when they do try to use generative-AI tools, they don't seem to pay off, he adds. For example, one information campaign with strong ties to Russia, called Copy Cop, has been trying to use chatbots to rewrite genuine news stories on Russia's war in Ukraine to reflect pro-Russian narratives.
The problem? They're forgetting to remove the prompts from the articles they publish.
In the short term, there are a few things the US can do to counter the more immediate harms, says Stockwell. For example, some states, such as Arizona and Colorado, are already conducting red-teaming workshops with election polling officials and law enforcement to simulate worst-case scenarios involving AI threats on Election Day. There also needs to be heightened collaboration among social media platforms, their online safety teams, fact-checking organizations, disinformation researchers, and law enforcement to ensure that viral influence efforts can be exposed, debunked, and taken down, says Stockwell.
But while state actors aren't using deepfakes, that hasn't stopped the candidates themselves. Most recently, Donald Trump used AI-generated images implying that Taylor Swift had endorsed him. (Soon after, the pop star offered her endorsement to Harris.)
Earlier this year I wrote a piece exploring the brave new world of hyperrealistic deepfakes and what the technology is doing to our information landscape. As I wrote then, there is a real risk of creating so much skepticism and distrust in our information landscape that bad actors, or opportunistic politicians, can take advantage of this trust vacuum and lie about the authenticity of real content. This is called the "liar's dividend."
There is an urgent need for guidelines on how politicians use AI. We currently lack accountability and clear red lines for how political candidates can use AI ethically in the election context, says Stockwell. The more we see political candidates share AI-generated adverts without labels or accuse other candidates' activities of being AI-generated, the more normalized these practices become, he adds. And everything we've seen so far suggests that these elections are only the beginning.
Now read the rest of The Algorithm
Deeper Learning
AI models let robots carry out tasks in unfamiliar environments
It's tricky to get robots to do things in environments they've never seen before. Typically, researchers need to train them on new data for every new place they encounter, which can become very time-consuming and expensive.
Now researchers have developed a series of AI models that teach robots to complete basic tasks in new surroundings without further training or fine-tuning. The five AI models, called robot utility models (RUMs), allow machines to complete five separate tasks (opening doors and drawers, and picking up tissues, bags, and cylindrical objects) in unfamiliar environments with a 90% success rate. This approach could make it easier and cheaper to deploy robots in our homes. Read more from Rhiannon Williams here.
Bits and Bytes
There are more than 120 AI bills in Congress right now
US policymakers have an "everything everywhere all at once" approach to regulating artificial intelligence, with bills that are as varied as the definitions of AI itself.
(MIT Technology Review)
Google is funding an AI-powered satellite constellation to spot wildfires faster
The full FireSat system should be able to detect tiny fires anywhere in the world and provide updated images every 20 minutes. (MIT Technology Review)
A project analyzing human language usage shut down because generative AI has "polluted the data"
Wordfreq, an open-source project that scraped the internet to analyze how humans use language, found that post-2021, there is too much AI-generated text online to make any reliable analyses. (404 Media)
Data center emissions are probably 662% higher than Big Tech claims
AI models take a lot of energy to train and run, and tech companies have emphasized their efforts to counter their emissions. There is, however, a lot of "creative accounting" happening when it comes to calculating carbon footprints, and a new analysis shows that data center emissions from these companies are likely 7.62 times higher than officially reported.
(The Guardian)