Deepfake “Amazon workers” are sowing confusion on Twitter. That’s not the problem.
The news: Ahead of a landmark vote that could lead to the formation of the first-ever labor union at a US-based Amazon warehouse, new Twitter accounts purporting to be Amazon employees started appearing. The profiles used deepfake photos as profile pictures and were tweeting some pretty laughable, over-the-top defenses of Amazon's working practices. They didn't seem real, but they still led to confusion among the public. Was Amazon really behind them? Was this some terrible new anti-union social media strategy? The answer is almost certainly no, but the use of deepfakes in this context points to a more concerning trend.
The backstory: There's a reason these new deepfake profiles seemed familiar. In 2018, Amazon began a very real program to convince the public that it was treating its warehouse workers just fine. It set up computer stations in those warehouses and created Twitter accounts for a small group of employees, known as "Amazon FC Ambassadors," who could tweet during paid hours about how much they loved their jobs. The plan backfired and led to the creation of numerous parody ambassador accounts on Twitter. Amazon scaled back the program shortly after, and many of the original real accounts were suspended or shut down, says Aric Toler, the head of training and research efforts for the investigative journalism site Bellingcat.
The latest: On Monday, the new batch of ambassador accounts cropped up, or at least so it seemed. But when people started investigating, many pointed out that some had profile pictures with the telltale signs of deepfake faces, like warped earrings and blurry backgrounds. That's when things got confusing.
People quickly latched onto the idea that Amazon itself was behind the new deepfake accounts as part of an anti-union social media campaign. The company later told New York Times reporter Karen Weise that it was not. Toler, who tracked the original accounts, believes Amazon is telling the truth. The original ambassador accounts were all registered under Amazon email addresses and posted using Sprinklr. The new deepfake accounts, which have since been suspended, were registered using Gmail and posted via the Twitter web app. Plus, the content of the tweets was clearly intended to be humorous. These accounts were likely just more parodies created to mock Amazon.
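To make that distinction concrete, here is a minimal sketch, in Python, of the kind of provenance cross-check Toler describes: comparing an account's registration email domain and posting client against what the official ambassador program used (Amazon email addresses and Sprinklr, a corporate social-media management tool). The record format and sample accounts below are invented for illustration and are not Amazon's or Twitter's actual data.

```python
from dataclasses import dataclass

@dataclass
class AccountRecord:
    handle: str
    email_domain: str    # domain the account was registered with
    posting_client: str  # the client ("source") the tweets were sent from

# What the original, official ambassador accounts used, per Toler.
EXPECTED_DOMAIN = "amazon.com"
EXPECTED_CLIENT = "Sprinklr"

def looks_like_official_ambassador(acct: AccountRecord) -> bool:
    """Both provenance signals must match the known program to pass."""
    return (acct.email_domain == EXPECTED_DOMAIN
            and acct.posting_client == EXPECTED_CLIENT)

# Invented sample records for illustration only.
accounts = [
    AccountRecord("example_official", "amazon.com", "Sprinklr"),
    AccountRecord("example_parody", "gmail.com", "Twitter Web App"),
]

for acct in accounts:
    if looks_like_official_ambassador(acct):
        print(f"@{acct.handle}: provenance consistent with the official program")
    else:
        print(f"@{acct.handle}: provenance does not match the official program")
```

The point is not the code but the heuristic: two independent provenance signals that a parody account would have no reason, or ability, to fake.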
Does this matter? In this particular case, probably not. "It's of course pretty harmless," says Toler. "It's Amazon. They're fine." But it reveals a more concerning problem: that deepfake accounts can be deployed in a coordinated way on social media or elsewhere for far more sinister purposes. The most malicious version of this would be state actors, warns Toler.
In fact, there have already been several high-profile instances in which deepfake photos have been used in damaging disinformation campaigns. In December 2019, Facebook identified and took down a network of over 900 pages, groups, and accounts, including some with deepfaked profile pictures, associated with the far-right outlet the Epoch Times, which is known to engage in misinformation tactics. In October 2020, a fake "intelligence" document distributed among President Trump's circles, which became the basis for numerous conspiracy theories surrounding Hunter Biden, was also authored by a fake security analyst with a deepfaked profile image.
Toler says deepfake faces have become a trend in his line of work as an open-source investigator into suspicious online activity, especially since the launch of ThisPersonDoesNotExist.com, a website that serves up a new AI-generated face with every refresh. "There's always a mental checklist that you go through whenever you find anything," he says. "The first question is 'Is this person real or not?' Which was a question we didn't really have five years ago."
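For readers who want to train their own eye on these generated faces, the short sketch below fetches one sample image. It assumes the site still returns a JPEG directly from its root URL, which has been true at times but may have changed; treat the endpoint and response format as assumptions rather than a documented API.

```python
import requests  # third-party; pip install requests

URL = "https://thispersondoesnotexist.com"  # assumption: the root URL serves a raw JPEG

resp = requests.get(URL, headers={"User-Agent": "osint-training-script"}, timeout=10)
resp.raise_for_status()

if resp.headers.get("Content-Type", "").startswith("image/"):
    with open("generated_face.jpg", "wb") as f:
        f.write(resp.content)
    print("Saved one AI-generated face to generated_face.jpg")
else:
    # The site's behavior has changed over time; it may serve an HTML page instead.
    print("Did not receive a raw image; the endpoint may have changed.")
```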
How big a threat is this? At the moment, Toler says, the use of deepfake faces hasn't had a big impact on his work. It's still relatively easy for him to identify when a profile image is a deepfake, just as it is when the photo is a stock image. The most difficult scenario is when the image is of a real person pulled from a private social media account that isn't indexed on image search engines.
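The "indexed on image search engines" point is the crux. A rough programmatic analogue of reverse image search is perceptual hashing, sketched below with the Pillow and imagehash libraries: a reused stock photo will closely match something in a reference corpus, while a GAN-generated face or a photo lifted from a private account will match nothing, which is exactly the hard case Toler describes. The file paths and threshold are placeholders, not part of anyone's actual workflow.

```python
from PIL import Image   # pip install Pillow
import imagehash        # pip install ImageHash

def phash_distance(path_a: str, path_b: str) -> int:
    """Hamming distance between perceptual hashes; small values mean near-duplicates."""
    return imagehash.phash(Image.open(path_a)) - imagehash.phash(Image.open(path_b))

profile_photo = "suspect_profile.jpg"                          # placeholder path
reference_images = ["stock_photo_1.jpg", "stock_photo_2.jpg"]  # placeholder corpus

for candidate in reference_images:
    distance = phash_distance(profile_photo, candidate)
    if distance <= 8:  # loose threshold; tune for your own corpus
        print(f"{candidate}: likely the same underlying photo (distance {distance})")
    else:
        print(f"{candidate}: no match (distance {distance})")
```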
A growing awareness of deepfakes has also primed people to scrutinize the media they see more carefully, says Toler, as evidenced by how quickly people caught on to the fakery of the Amazon accounts.
But Sam Gregory, the program director of human rights nonprofit Witness, says this shouldn't lull us into a false sense of security. Deepfakes are constantly getting better, he says: "I think people have a little bit too much confidence that it's always going to be possible to detect them."
A hyper-awareness of deepfakes could also lead people to stop believing in real media, which could have equally dire consequences, such as undermining the documentation of human rights abuses.
What should we do? Gregory encourages social media users to avoid fixating on whether an image is a deepfake or not. "Oftentimes, that's just a tiny part of the puzzle," he says. "The giveaway is not that you've somehow interrogated the image. It's that you look at the account and it was created a week ago, or it's a journalist who claims to be a journalist, but they've never written anything else that you could find on a Google search."
These investigative tactics are much more robust to advances in deepfake technology. The advice rings true for the Amazon case as well. It was through checking the accounts' emails and tweet details that Toler ultimately determined they were fake, not by scrutinizing the profile images.
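As a closing illustration, here is a minimal sketch of those account-level checks: account age, prior posting history, and external footprint, rather than forensic analysis of the image itself. The fields, thresholds, and example account are hypothetical stand-ins for whatever metadata an investigator can actually collect.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AccountProfile:
    handle: str
    created: date
    prior_posts: int  # posts predating the event under investigation
    external_footprint: list[str] = field(default_factory=list)  # bylines, other profiles, etc.

def provenance_red_flags(acct: AccountProfile, today: date) -> list[str]:
    """Collect the simple account-level signals Gregory describes."""
    flags = []
    if (today - acct.created).days < 30:
        flags.append("account created within the last month")
    if acct.prior_posts == 0:
        flags.append("no posting history before the event")
    if not acct.external_footprint:
        flags.append("no external footprint (articles, other profiles) found")
    return flags

# Invented example account, not a real Twitter handle.
suspect = AccountProfile("example_suspect", created=date(2021, 3, 25), prior_posts=0)
for flag in provenance_red_flags(suspect, today=date(2021, 3, 30)):
    print("red flag:", flag)
```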