
OpenAI Disrupts 5 Covert Influence Operations That Tried to Misuse Its AI Models for “Deceptive Activity”

by Krishi Chowdhary from Techreport
  • In a blog post published on Thursday, OpenAI revealed that it has disrupted 5 covert influence operations in the last 3 months.
  • 4 of these 5 operations were state-backed and originated from China, Iran, and Russia. The fifth was backed by a private firm in Israel.
  • These groups were using OpenAI tools to generate content and comments, and to debug websites and bots, in order to spread their propaganda.
  • Meta has also disrupted similar operations from Russia, Israel, and China.


On Thursday (May 30), OpenAI revealed that it has disrupted five covert influence operations in the last three months. These operations were attempting to use its AI models to support deceptive activity across the internet.

Miscreants tried to use OpenAI tools to create fake social media profiles, generate content and comments, and debug websites and bots.

In its blog post, the company revealed that most of these operations were state-backed and originated from Russia, China, and Iran. Only one was backed by a private company in Israel.

This is a major concern, especially at a moment when countries like the US, the UK, and India are holding major elections. On the bright side, however, OpenAI assured that its services did not meaningfully boost the operations' reach or impact.

"In the last three months, we have disrupted five covert IO [influence operations] that sought to use our models in support of deceptive activity across the internet. As of May 2024, these campaigns do not appear to have meaningfully increased their audience engagement or reach as a result of our services." - OpenAI blog

Interestingly, this isn't the first time OpenAI has played spoilsport for malicious users. In February this year, both OpenAI and Microsoft banned state-backed hacker groups from their platforms.

What's also interesting is that Microsoft warned last month that China will likely use AI-generated content to disrupt elections in the US, India, and South Korea.

Details About the Covert Operations

Here's a quick rundown of all five operations OpenAI busted:

Russia

Two operations originated from Russia, one of which was the infamous 'Doppelganger' campaign, known for generating false content and comments.

The other was a previously unreported operation called 'Bad Grammar,' which created a bot that posted short political comments on Telegram.

In other news, Germany, the Czech Republic, and the EU have called out Russia for orchestrating cyberattacks.

China

The operation originating from China is named 'Spamouflage' and is notoriously active on both Instagram and Facebook.

It researches social media activities and then creates text-based content in multiple languages to spread its propaganda.

Iran

Not much is known about the Iranian operation, run by the International Union of Virtual Media (IUVM), except that it uses AI to create content in multiple languages.

More Iran news: Iranian hacker group infiltrates UAE streaming services over fake Gaza war

Israel

One campaign came from an Israeli political campaign firm called 'Stoic,' which used AI to create images about the atrocities happening in Gaza and then posted them on Instagram, X, and Facebook, targeting users in Canada, the US, and Israel.

Together, these operations also created and spread content on Indian elections, the Russia-Ukraine war, Western politics, and the Chinese government.

Speaking of the Israeli operation, the same group was flagged by Meta just a day before OpenAI's announcement. Meta said that 'Stoic' was using its platforms to manipulate political conversations online.

Similar operations from Russia and China were also disrupted by Meta.

Meta also removed 510 Facebook accounts, 11 pages, 1 Facebook group, and 32 Instagram accounts linked to the 'Stoic' group. The social media giant also sent the group a cease-and-desist letter, demanding that it "immediately stop activity that violates Meta's policies."

What Are AI Companies Doing to Prevent AI Misuse?

In a detailed report, OpenAI addressed growing public concern over the misuse of AI and shared a list of steps it has taken to minimize the risk:

  • Certain internal safety standards have been imposed to detect threat actors. For instance, OpenAI tracks how frequently its chatbot refuses to respond to a given user's queries. If refusals are too frequent, it may mean the user is repeatedly trying to generate content that violates the company's policies (see the sketch after this list).
  • OpenAI has AI-powered tools that simplify detection and analysis. What would usually take weeks or months to investigate now gets done within days.
  • The company has also shared detailed threat indicators with its industry peers and partners so that they can identify suspicious content quickly.
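As a rough illustration of how such a refusal-rate signal might work, here is a minimal Python sketch. The class name, thresholds, and user IDs are all hypothetical; OpenAI has not published its actual detection logic.

    from collections import defaultdict

    # Hypothetical thresholds -- OpenAI has not disclosed its real values.
    REFUSAL_RATE_THRESHOLD = 0.5  # flag users refused more than 50% of the time
    MIN_REQUESTS = 20             # skip users with too few requests to judge fairly

    class RefusalTracker:
        """Tracks per-user refusal rates to surface possible policy abusers."""

        def __init__(self):
            self.requests = defaultdict(int)  # user_id -> total requests
            self.refusals = defaultdict(int)  # user_id -> refused requests

        def record(self, user_id: str, was_refused: bool) -> None:
            """Log one model response, noting whether it was a refusal."""
            self.requests[user_id] += 1
            if was_refused:
                self.refusals[user_id] += 1

        def flagged_users(self) -> list[str]:
            """Return users whose refusal rate exceeds the threshold."""
            return [
                user
                for user, total in self.requests.items()
                if total >= MIN_REQUESTS
                and self.refusals[user] / total > REFUSAL_RATE_THRESHOLD
            ]

    # Example: a persistent policy violator vs. an ordinary user.
    tracker = RefusalTracker()
    for _ in range(25):
        tracker.record("user_abc", was_refused=True)
    for _ in range(30):
        tracker.record("user_xyz", was_refused=False)
    print(tracker.flagged_users())  # ['user_abc']

In practice, a signal like this would presumably be combined with other indicators rather than used on its own.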
OpenAI added that human error helps it detect suspicious activity, too. The operators running these illegal campaigns are just as prone to making mistakes as anybody else.

For example, one of the operations accidentally posted the AI model's prompt refusal message instead of the actual content it generated.

In addition to such individual efforts, 20 tech companies, including Meta, OpenAI, Microsoft, and Google, signed a pledge in February this year, promising to do their best to prevent AI from interfering with elections.

Read more: Tech companies come together to pledge AI safety at the Seoul AI Summit

