Darth Vader's voice will be AI-generated from now on

by Benj Edwards, Ars Technica

As James Earl Jones retires, Darth Vader's voice will come courtesy of voice-cloning software called Respeecher. (credit: Lucasfilm / Benj Edwards)

During the creation of the Obi-Wan Kenobi TV series, James Earl Jones signed off on allowing Disney to replicate his vocal performance as Darth Vader in future projects using an AI voice-modeling tool called Respeecher, according to a Vanity Fair report published Friday.

Jones, who is 91, has voiced the iconic Star Wars villain for 45 years, starting with Star Wars: Episode IV: A New Hope in 1977 and concluding with a brief line of dialog in 2019's The Rise of Skywalker. "He had mentioned he was looking into winding down this particular character," said Matthew Wood, a supervising sound editor at Lucasfilm, in an interview with Vanity Fair. "So how do we move forward?"

The answer was Respeecher, a voice cloning product from a company in Ukraine that uses deep learning to model and replicate human voices in a way that is nearly indistinguishable from the real thing. Previously, Lucasfilm had used Respeecher to clone Mark Hamill's voice for The Mandalorian, and the company thought the same technology would be ideal for a major appearance of Darth Vader that would require dozens of lines of dialog. Working from archival recordings of Jones, Respeecher created a voice model that could be "performed" vocally by another actor using the company's speech-to-speech technology.

