U.S. Agency Deems AI Cloned Voice Robocalls Unlawful
In a significant development, the United States Federal Communications Commission (FCC) has officially declared AI-cloned voice robocalls illegal. This decision comes in response to a recent incident where a fake robocall featuring an AI-generated voice impersonated President Joe Biden.
The call was designed to influence voters in New Hampshire's Democratic primary election.
FCC Rules AI Robocalls Illegal
FCC Chair Jessica Rosenworcel announced the declaratory ruling, emphasizing that it provides state attorneys general with enhanced tools to pursue the entities responsible for these deceptive robocalls.
The use of AI-generated voices in unsolicited robocalls has become a growing concern.
Bad actors increasingly exploit the technology to extort vulnerable individuals, imitate celebrities, and spread misinformation during crucial events such as elections.
Previously, state attorneys general could only address the consequences of unwanted AI-voice-generated robocalls; the new ruling directly targets the use of the technology behind such deceptive practices.
In a related development, New Hampshire Attorney General John Formella traced the fake Biden robocall back to Texas-based Life Corp, operated by Walter Monk. The Attorney General has issued a cease-and-desist letter to the company, and a criminal investigation is ongoing.
Meanwhile, Democratic FCC Commissioner Geoffrey Starks expressed concern about the emerging threat posed by generative AI in voter suppression schemes. Voice cloning makes fake robocalls sound more credible, adding a new layer of complexity to combating deceptive practices in communication.
The recent announcement aligns with the FCC's previous actions against illegal robocalls. A significant example is a $5.1 million fine imposed in 2023 on conservative campaigners who made over 1,100 illegal robocalls before the 2020 U.S. election.
These calls sought to discourage voting by spreading false information about the consequences of voting by mail.
As such, the FCC's continued efforts highlight its commitment to protecting the integrity of communication channels and democratic processes in the face of evolving technological challenges.
Bad Actors Expand AI Scams to Consumers
In recent months, many individuals have been exposed to the darker side of generative AI, mainly through the emergence of kidnapping scams.
These scams involve perpetrators using AI to replicate convincing voices of family members in distress, a phenomenon commonly known as the imposter grandchild scam.
These scams primarily target older adults, using the synthetic voice of a distressed grandchild to solicit funds for a supposedly dangerous situation.
The prevalence of such scams prompted Congress to direct the Federal Trade Commission (FTC) to investigate AI-related schemes targeting seniors in May of last year.
As scammers' tactics evolve, consumers will likely remain vulnerable. The increasing sophistication of generative AI opens the door for bad actors to pursue larger and faster paydays.
In anticipation of this trend, the coming months may see a surge in AI voice-cloning scams targeting executives and financial services firms.
Scams aimed at deceiving bank employees into releasing customer funds have already been observed. In other cases, scammers may pose as a family member stranded with a flat tire and ask for financial assistance.
Unfortunately, the advancement of generative AI enables perpetrators to incorporate realistic background noises, such as the sounds of cars on the road, making these scams even more deceptive.
Americans should remain vigilant against the rising threat of highly convincing generative AI robocalls.