ChatGPT rejected 250,000 election deepfake requests
Plenty of people tried to use OpenAI's DALL-E image generator during election season, but the company says it stopped them from using the tool to create deepfakes. ChatGPT rejected over 250,000 requests to generate images of President Biden, President-elect Trump, Vice President Harris, Vice President-elect Vance and Governor Walz, OpenAI said in a new report. The company explained that the rejections are a direct result of a safety measure it had previously implemented, under which ChatGPT refuses to generate images of real people, including politicians.
OpenAI had been preparing for the US presidential election since the beginning of the year. It laid out a strategy meant to prevent its tools from being used to spread misinformation and made sure that people asking ChatGPT about voting in the US were directed to CanIVote.org. OpenAI said 1 million ChatGPT responses pointed people to the website in the month leading up to Election Day. The chatbot also generated 2 million responses on Election Day and the day after, telling people who asked it for results to check the Associated Press, Reuters and other news sources. OpenAI also made sure that ChatGPT's responses "did not express political preferences or recommend candidates even when asked explicitly."
Of course, DALL-E isn't the only AI image generator out there, and plenty of election-related deepfakes have circulated on social media. One such deepfake featured Vice President Kamala Harris in a campaign video altered so that she appeared to say things she never actually said, such as "I was selected because I am the ultimate diversity hire."
This article originally appeared on Engadget at https://www.engadget.com/ai/chatgpt-rejected-250000-election-deepfake-requests-170037063.html?src=rss