Deepfakes Pose a Growing Danger, New Research Says
upstart writes:
A new report from VMware shows that cybersecurity professionals are seeing more deepfakes being used in cyber attacks.
Deepfakes use artificial intelligence to manipulate video and audio to make it seem like someone is saying or doing something that they're not. Deepfakes are increasingly being used in cyberattacks, a new report said, as the threat of the technology moves from hypothetical harms to real ones.
Reports of attacks using the face- and voice-altering technology jumped 13% last year, according to VMware's annual Global Incident Response Threat Report, which was released Monday. In addition, 66% of the cybersecurity professionals surveyed for this year's report said they had spotted a deepfake-based attack in the past year.
"Deepfakes in cyberattacks aren't coming," Rick McElroy, principal cybersecurity strategist at VMware, said in a statement. "They're already here."
Deepfakes use artificial intelligence to make it look as if a person is doing or saying things he or she actually isn't. The technology entered the mainstream in 2019, sparking fears it could convincingly re-create other people's faces and voices. Victims could see their likeness used for artificially created pornography and the technique could be used to sow political upheaval, experts warned.
While early deepfakes were largely easy to spot, the technology has since evolved and become much more convincing. In March, a video posted to social media appeared to show Ukrainian President Volodymyr Zelenskyy directing his soldiers to surrender to Russian forces. It was quickly denounced by Zelenskyy but showed the potential for harm posed by deepfakes.
Read more of this story at SoylentNews.