New AI Dupes Humans into Believing Synthesized Sound Effects Are Real

by
Michelle Hampson
from IEEE Spectrum


Imagine you are watching a scary movie: the heroine creeps through a dark basement, on high alert. Suspenseful music plays in the background, while some unseen, sinister creature lurks in the shadows... and then, BANG! It knocks over an object.

Such scenes would hardly be as captivating and scary without the intense, but perfectly timed sound effects, like the loud bang that sent our main character wheeling around in fear. Usually these sound effects are recorded by Foley artists in the studio, who produce the sounds using oodles of objects at their disposal. Recording the sound of glass breaking may involve actually breaking glass repeatedly, for example, until the sound closely matches the video clip.

In a more recent plot twist, researchers have created an automated program that analyzes the movement in video frames and creates its own artificial sound effects to match the scene. In a survey, the majority of people polled indicated that they believed the fake sound effects were real. The model, AutoFoley, is described in a study published June 25 in IEEE Transactions on Multimedia.

"Adding sound effects in post-production using the art of Foley has been an integral part of movie and television soundtracks since the 1930s," explains Jeff Prevost, a professor at the University of Texas at San Antonio who co-created AutoFoley. "Movies would seem hollow and distant without the controlled layer of a realistic Foley soundtrack. However, the process of Foley sound synthesis adds significant time and cost to the creation of a motion picture."

Intrigued by the thought of an automated Foley system, Prevost and his PhD student, Sanchita Ghose, set about creating a multi-layered machine learning program. They created two different models that could be used in the first step, which involves identifying the actions in a video and determining the appropriate sound.

The first machine learning model extracts image features (e.g., color and motion) from the frames of fast-moving action clips to determine an appropriate sound effect.
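To make the idea of frame-level features concrete, here is a toy illustration of the kind of color and motion cues the first model works with. The real model learns its features with a deep network; these hand-coded stand-ins (and their names) are assumptions for clarity only.

```python
# Toy stand-ins for per-frame "color" and "motion" features.
# Frames are flat lists of pixel intensities for simplicity.

def color_feature(frame):
    """Mean pixel intensity of one frame (a crude color statistic)."""
    return sum(frame) / len(frame)

def motion_feature(prev, cur):
    """Mean absolute per-pixel change between consecutive frames."""
    return sum(abs(a - b) for a, b in zip(prev, cur)) / len(cur)

frames = [[10] * 8, [10] * 8, [200] * 8]
print(color_feature(frames[0]))              # 10.0
print(motion_feature(frames[1], frames[2]))  # 190.0 -- a big jump, i.e. fast motion
```

A large motion value between consecutive frames is the kind of signal that would push a classifier toward an "impact" sound rather than an ambient one.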

The second model analyzes the temporal relationship of an object in separate frames. By using relational reasoning to compare different frames across time, the second model can anticipate what action is taking place in the video.
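The essence of relational reasoning is to score pairs of frames rather than single frames, so the model sees change over time. Below is a minimal sketch of that idea, assuming each frame has already been reduced to a single scalar feature; the pairwise function `g` and the sum aggregation are simplified assumptions, not the paper's architecture.

```python
from itertools import combinations

def g(feat_a, feat_b):
    """Toy pairwise relation: signed change from an earlier to a later frame."""
    return feat_b - feat_a

def relational_score(frame_features):
    """Aggregate the relation over every ordered pair of frames in the clip."""
    return sum(g(a, b) for a, b in combinations(frame_features, 2))

# Features rising across frames yield a positive score (motion building up).
print(relational_score([0.1, 0.4, 0.9]))  # ≈ 1.6
```

In the actual model, `g` would be a learned network and the aggregated relation would feed a classifier, but the structure, comparing frames across time before predicting the action, is the same.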

In a final step, sound is synthesized to match the activity or motion predicted by one of the models. Prevost and Ghose used AutoFoley to create sound for 1,000 short movie clips capturing a number of common actions, like falling rain, a galloping horse, and a ticking clock.
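As a rough sketch of the synthesis step, one could imagine the predicted action class selecting a waveform generator. AutoFoley's actual synthesis is learned from data, so the procedural generators below (and the action names) are purely illustrative assumptions.

```python
import math

def synthesize(action, sample_rate=8000, duration=0.25):
    """Return a placeholder waveform (list of samples) for a predicted action."""
    n = int(sample_rate * duration)
    if action == "impact":
        # Exponentially decaying 200 Hz burst, loosely like a bang.
        return [math.exp(-20 * t / n) * math.sin(2 * math.pi * 200 * t / sample_rate)
                for t in range(n)]
    if action == "ticking_clock":
        # A short click at the start of every quarter-second window.
        return [1.0 if t % (sample_rate // 4) < 20 else 0.0 for t in range(n)]
    # Fallback: a quiet low hum for ambient scenes (rain, fire).
    return [0.1 * math.sin(2 * math.pi * 60 * t / sample_rate) for t in range(n)]

wave = synthesize("impact")
print(len(wave))         # 2000 samples (0.25 s at 8 kHz)
print(max(wave) <= 1.0)  # True: amplitude stays bounded
```

Note how the "easy" cases here are the ones that don't depend on precise timing, which matches the finding below that AutoFoley does best on sounds like rain and fire.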

Analysis shows, unsurprisingly, that AutoFoley is best at producing sounds where the timing doesn't need to align perfectly with the video (e.g., falling rain, a crackling fire). But the program is more likely to be out of sync with the video when visual scenes contain random actions with variation in time (e.g., typing, thunderstorms).

Next, Prevost and Ghose surveyed 57 local college students on which movie clips they thought included original soundtracks. In assessing soundtracks generated by the first model, 73% of students surveyed chose the synthesized AutoFoley clip as the original piece, over the true original sound clip. In assessing the second model, 66% of respondents chose the AutoFoley clip over the original sound clip.

"One limitation in our approach is the requirement that the subject of classification is present in the entire video frame sequence," says Prevost, also noting that AutoFoley currently relies on a dataset with limited Foley categories. While a patent for AutoFoley is still in the early stages, Prevost says these limitations will be addressed in future research.
