Microsoft Warns That China Wants To Use AI To Disrupt Elections; But Basically Ignores Its Failures To Disrupt The Taiwanese Election
I'm not sure we should welcome our new AI-powered robot overlords determining how elections turn out just yet.
The media keeps telling me that deep fakes and generative AI are going to throw all of this year's important elections into upheaval. And maybe it's true, but to date we've seen very little evidence of anything serious. There are plenty of questions about the impact generative AI tools will have on elections this year, but predictions of these tools' power remain greatly exaggerated.
The latest is the Guardian reporting that China is "looking to use AI to disrupt elections in the US, South Korea, and India" based on warnings from Microsoft:
China will attempt to disrupt elections in the US, South Korea and India this year with artificial intelligence-generated content after making a dry run with the presidential poll in Taiwan, Microsoft has warned.
The US tech firm said it expected Chinese state-backed cyber groups to target high-profile elections in 2024, with North Korea also involved, according to a report by the company's threat intelligence team published on Friday.
"As populations in India, South Korea and the United States head to the polls, we are likely to see Chinese cyber and influence actors, and to some extent North Korean cyber actors, work toward targeting these elections," the report reads.
Microsoft said that "at a minimum" China will create and distribute through social media "AI-generated content that benefits their positions in these high-profile elections".
And, I mean, anything's possible, and it's certainly good for companies and individuals alike to be on the lookout, but remember, one of the most important elections for China already happened earlier this year. The election in Taiwan. And it didn't turn out the way that China wanted. At all.
That doesn't mean China won't continue to try to interfere in foreign elections, because of course it will. But it should, at the very least, lead to questions about just how effective these kinds of campaigns to manipulate elections can be.
I mean, part of Microsoft's announcement was that China tried to use AI to influence the Taiwanese election, and it didn't seem to have much of an impact.
Microsoft said in the report that China had already attempted an AI-generated disinformation campaign in the Taiwan presidential election in January. The company said this was the first time it had seen a state-backed entity using AI-made content in a bid to influence a foreign election.
A Beijing-backed group called Storm 1376, also known as Spamouflage or Dragonbridge, was highly active during the Taiwanese election. Its attempts to influence the election included posting fake audio on YouTube of the election candidate Terry Gou - who had bowed out in November - endorsing another candidate. Microsoft said the clip was "likely AI generated". YouTube removed the content before it reached many users.
The Beijing-backed group pushed a series of AI-generated memes about the ultimately successful candidate, William Lai - a pro-sovereignty candidate opposed by Beijing - that levelled baseless claims against Lai accusing him of embezzling state funds. There was also an increased use of AI-generated TV news anchors, a tactic that has also been used by Iran, with the "anchor" making unsubstantiated claims about Lai's private life including fathering illegitimate children.
Looking at Microsoft's actual announcement, there's surprisingly little discussion of why the attempts in Taiwan failed. It certainly talks about increased efforts, but not the rate of success.
There's no reason not to be careful and to be thinking about these threats. But it seems like a much more interesting bit of research would have been to look at why this was so ineffective in the Taiwanese election, and if there were lessons to learn from that, rather than just hyping up the fear, uncertainty, and doubt about future elections.
Of course, if you're still super worried, well, we've got a great brainstorming tool to check out...