When Even Hollywood Doesn’t Want To Expand Copyright Laws To Deal With AI…
We live in strange times. It used to be that you could set your watch to one simple thing: any time any government agency or policymaker had any question about whether or not we needed to expand copyright laws, Hollywood would answer with a resounding "YES, ABSOLUTELY!" Over the years, copyright has expanded massively, and always right there pushing that along gleefully has been the Motion Picture Association (MPA, formerly the MPAA).
So... it took me a moment to recalibrate my brain when I saw, in the recent Copyright Office docket on Artificial Intelligence and Copyright (the one where we filed a comment saying that we should not be changing copyright law to deal with AI), that the MPA... agrees with us (at least a little bit)?
MPA's overarching view, based on the current state, is that while AI technologies raise a host of novel questions, those questions implicate well-established copyright law doctrines and principles. At present, there is no reason to conclude that these existing doctrines and principles will be inadequate to provide courts and the Copyright Office with the tools they need to answer AI-related questions as and when they arise. The Copyright Office has an important role to play in ensuring a careful and considered approach to AI and copyright. At the current time, however, there is no need for legislation or special rules to apply copyright law in the context of AI.
Of course, it's easier to understand this in the context of the recent TV and film writers' strike, as well as the still ongoing screen actors' strike, in which AI is a central part of the debate. I think the MPA is (perhaps reasonably?) concerned that opening up copyright laws at this point might lead to problematic limits on how AI can be used, when the movie studios would actually like to make use of the technology in fairly reasonable ways.
That said, I don't endorse the MPA's full statement, which I think is just wrong on many counts. It is not arguing that training on public works is not infringing (as we believe), but rather that existing copyright law already covers that situation.
Sweeping generalizations that training is always, or is never, fair use are not helpful. For example, in moving to dismiss a lawsuit brought in the Northern District of California by anonymous individuals, including an author, Google stated that "training Generative AI models on information publicly shared on the internet" categorically "is not copyright infringement." The premise of this argument is that if a copyrighted work is accessible on the internet, it is free for the taking. That premise is flatly wrong and unsupported by case law.
Likewise, in comments before the House Subcommittee on the Courts, Intellectual Property, and the Internet, Sy Damle stated: "Foundational copyright cases establish that the use of copyright-eligible content to create non-infringing works is protected fair use, even if the non-infringing works compete with the originals." That sweeping proposition is fundamentally inconsistent with the fact-intensive nature of fair use and is not supported by the case law. These comments cited HathiTrust, Authors Guild v. Google, and Sega. But the courts in those cases did not announce the broad rule for which the comments cite them. On the contrary, the courts found the particular uses in those cases fair only after applying the statutory factors to the specific facts before them.
It's an interesting move to argue that Damle, former General Counsel of the Copyright Office, is somehow saying things "not supported by the case law," though I guess that's what you get when the MPA is dealing with the rare former GC from the Copyright Office who seemed to actually understand the limits of copyright law.
But, still, I just want to note that this is pretty much the only time I can remember the MPA having a chance to say "let's expand copyright laws!" and coming back with a "wait, wait, wait, we're just fine as is."
Also, for what it's worth, the MPA and Copia differ very strongly on some of the other points we both answered in this inquiry. We questioned the whole premise of having to opt in to having your content used as part of the training, while the MPA is arguing that such training may be infringing, especially in commercial contexts, and therefore would require a license.
Of course, this is a weird time with weird bedfellows. So while the MPA is saying "we don't need to change copyright laws," internet giant IAC and Barry Diller are saying "of course we need to change copyright law to protect against evil AI."
This really isn't a surprise, either, though. Diller/IAC also own Dotdash Meredith, a big publisher of vertical content. And they want to get paid. Diller has been whining incessantly for months about how fair use needs to be changed to make sure he gets paid for AI training on their content.
"All we want to do is establish that there is no such thing as fair use for AI, which gives us standing."
Which is, well, quite a statement.
Still, it's interesting to me to see how the whole AI space is shifting roles a bit. Historically, internet properties have been against expanding copyright laws, while Hollywood has been in favor of it. Here, those roles are a little different, though in both cases it suggests that "big companies want what's in the best interest of their own bottom lines." Which, I guess, isn't that surprising.
Of course, that leaves little nobodies like us to take the principled stand regarding what's actually best for the internet, for people, and for innovation.