Malicious Actors Are Still Using ChatGPT to Influence Elections: OpenAI Says
- Just a few months ago, OpenAI revealed that it had taken down accounts run by Iranian groups because they were using ChatGPT to create content intended to influence elections.
- Now, the company has released a 54-page report, claiming that bad actors are still using its platform to influence elections.
- 2024 is a major election year, and threat groups from China, Russia, and Iran have been caught trying to interfere with it.
OpenAI has revealed that some threat actors are using ChatGPT to try to disrupt the upcoming elections.
On Wednesday, it published a 54-page report stating that, so far, it has disrupted 20 deceptive operations from around the world, blocking them from using the platform.
Each group had its own reasons for using the platform. Some used it to generate fake articles for websites, while others used it to create social media posts that were then published from fake accounts.

The timing of the report is significant, as the US election is just a month away. And it's not just the US: many other major elections are being held around the world this year, affecting a total of 4 billion people in 40 countries.
Now that AI is so easily accessible, the concern is that it will be used to spread misinformation, which has always been a problem in elections. Creating deepfakes in bulk is easier than ever. According to data from Clarity (a machine learning firm), the number of deepfakes created has increased by 900% year over year.
Examples of AI Being Used to Create Misinformation
- For starters, in May some Israeli groups were found using ChatGPT to generate social media comments on X about elections in India.
- In July, the company banned some groups from Rwanda for using ChatGPT to generate election-related content and posting it on X.
- A similar operation targeted the European Parliament elections in France.
- Then, in late August this year, OpenAI found that an Iranian group was using its platform to generate both long-form articles and short social media posts. These posts were designed to mislead voters, turn them against candidates, and weaken their trust in the authorities.
Thankfully, none of the posts in any of these campaigns got much engagement. But that doesn't mean the influence of AI on elections should be ignored.
AI is getting out of hand, and ChatGPT is not the only tool that can be exploited. For example, X launched Grok 2 and Grok 2 mini, which come with an AI-powered image generator.
Soon after the launch, many users started creating scandalous images of US politicians. For instance, there was a picture of Barack Obama doing drugs and another of Donald Trump firing guns.
The worst part is that Elon Musk doesn't believe much in imposing regulations. He has always been a staunch supporter of free speech and doesn't want much government interference with his platforms.
While on the surface this might sound good, we have to understand that without the necessary guardrails, AI can be disastrous. Today it's harmless users creating scandalous images of politicians as a joke; tomorrow it could be an organized crime group creating deepfakes to frame someone with fabricated evidence, or lying to the public to fuel violence, hate crimes, and general civil disorder.
The post Malicious Actors Are Still Using ChatGPT to Influence Elections: OpenAI Says appeared first on The Tech Report.