Here’s how deepfake vishing attacks work, and why they can be hard to detect

by Dan Goodin, Ars Technica

By now, you've likely heard of fraudulent calls that use AI to clone the voices of people the call recipient knows. Often, the result sounds like a grandchild, CEO, or work colleague you've known for years reporting an urgent matter that requires immediate action: wiring money, divulging login credentials, or visiting a malicious website.

Researchers and government officials have been warning of the threat for years, with the Cybersecurity and Infrastructure Security Agency saying in 2023 that threats from deepfakes and other forms of synthetic media have "increased exponentially." Last year, Google's Mandiant security division reported that such attacks are being executed with "uncanny precision, creating for more realistic phishing schemes."

Anatomy of a deepfake scam call

On Wednesday, security firm Group-IB outlined the basic steps involved in executing these sorts of attacks. The takeaway is that they're easy to reproduce at scale and can be challenging to detect or repel.
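To make the scale claim concrete, here is a rough structural sketch of such a pipeline, not code from the Group-IB report. Every function below is a hypothetical placeholder standing in for a real commercial service (a voice-cloning model, a script generator, a VoIP/caller-ID API); the point is only that each step is automatable, so the per-target cost of a campaign approaches zero.

```python
# Illustrative sketch only: all functions are hypothetical stand-ins,
# not a real toolchain and not the steps as described by Group-IB.

from dataclasses import dataclass

@dataclass
class Target:
    name: str
    phone: str
    voice_sample: bytes   # e.g., audio scraped from public videos
    relationship: str     # "grandchild", "CEO", "work colleague", ...

def clone_voice(sample: bytes) -> str:
    """Placeholder: a voice-cloning service typically returns a
    reusable voice model from a short audio sample."""
    return "voice-model-id"

def generate_script(relationship: str) -> str:
    """Placeholder: a template or language model produces an urgent
    pretext matched to the impersonated relationship."""
    return f"Urgent request from your {relationship}: please act now."

def place_call(phone: str, voice_model: str, script: str) -> None:
    """Placeholder: a VoIP API plays the synthesized audio to the
    victim, often behind a spoofed caller ID."""
    print(f"[call] {phone}: ({voice_model}) {script}")

def run_campaign(targets: list[Target]) -> None:
    # The loop is the point: once the pipeline exists, adding another
    # victim is nearly free, which is what "easy to reproduce at
    # scale" means in practice.
    for t in targets:
        model = clone_voice(t.voice_sample)
        place_call(t.phone, model, generate_script(t.relationship))

run_campaign([Target("A. Example", "+1-555-0100", b"...", "grandchild")])
```

The same structure also hints at why detection is hard: nothing in the call itself is technically anomalous; only the synthesized voice and the pretext distinguish it from a legitimate call.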
