OpenAI transcribed over a million hours of YouTube videos to train GPT-4
By Wes Davis, The Verge
Cath Virginia / The Verge | Photos from Getty Images
Earlier this week, The Wall Street Journal reported that AI companies were running into a wall when it comes to gathering high-quality training data. Today, The New York Times detailed some of the ways companies have dealt with this. Unsurprisingly, it involves doing things that fall into the hazy gray area of AI copyright law.
The story opens on OpenAI, which, desperate for training data, reportedly developed its Whisper audio transcription model to get over the hump, transcribing over a million hours of YouTube videos to train GPT-4, its most advanced large language model. That's according to The New York Times, which reports that the company knew this was legally questionable but believed it to be fair use. OpenAI president Greg...