OpenAI's Sam Altman on iPhones, Music, Training Data, and Apple's Controversial iPad Ad
OpenAI CEO Sam Altman gave an hour-long interview to the "All-In" podcast (hosted by Chamath Palihapitiya, Jason Calacanis, David Sacks and David Friedberg). Speaking on technology's advance, Altman said "Phones are unbelievably good.... I personally think the iPhone is like the greatest piece of technology humanity has ever made. It's really a wonderful product."

Q: What comes after it?

Altman: I don't know. I mean, that was what I was saying. It's so good that, to get beyond it, I think the bar is quite high.

Q: You've been working with Jony Ive on something, right?

Altman: We've been discussing ideas, but I don't - like, if I knew...

Altman said later he thought voice interaction "feels like a different way to use a computer." But the conversation also turned to Apple in another way, in a larger discussion where Altman said OpenAI has "currently made the decision not to do music, and partly because exactly these questions of where you draw the lines..."

Altman: Even the world in which - if we went and, let's say we paid 10,000 musicians to create a bunch of music, just to make a great training set, where the music model could learn everything about song structure and what makes a good, catchy beat and everything else, and only trained on that - let's say we could still make a great music model, which maybe we could. I was posing that as a thought experiment to musicians, and they were like, "Well, I can't object to that on any principled basis at that point - and yet there's still something I don't like about it." Now, that's not a reason not to do it, um, necessarily, but it is - did you see that ad that Apple put out... of like squishing all of human creativity down into one really thin iPad...? There's something about - I'm obviously hugely positive on AI - but there is something that I think is beautiful about human creativity and human artistic expression. And, you know, for an AI that just does better science, like, "Great. Bring that on."
But an AI that is going to do this deeply beautiful human creative expression? I think we should figure out - it's going to happen. It's going to be a tool that will lead us to greater creative heights. But I think we should figure out how to do it in a way that preserves the spirit of what we all care about here.

What about creators whose copyrighted materials are used as training data? Altman had a ready answer - but also some predictions for the future. "On fair use, I think we have a very reasonable position under the current law. But I think AI is so different that for things like art, we'll need to think about them in different ways..."

Altman: I think the conversation has historically been very caught up on training data, but it will increasingly become more about what happens at inference time, as training data becomes less valuable and what the system does accessing information in context, in real time... What happens at inference time will become more debated, and what the new economic model is there.

Altman gave the example of an AI that was never trained on any Taylor Swift songs - but could still respond to a prompt requesting a song in her style.

Altman: And then the question is, should that model, even if it were never trained on any Taylor Swift song whatsoever, be allowed to do that? And if so, how should Taylor get paid? So I think there's an opt-in, opt-out in that case, first of all - and then there's an economic model.

Altman also wondered if there are lessons in the history and economics of music sampling...
Read more of this story at Slashdot.