Articles

Judge rules Anthropic's AI training on copyrighted materials is fair use
Anthropic has received a mixed result in a class action lawsuit brought by a group of authors who claimed the company used their copyrighted creations without permission. On the positive side for the artificial intelligence company, senior district judge William Alsup of the US District Court for the Northern District of California determined that Anthropic's training of its AI tools on copyrighted works was protected as fair use.

Developing large language models for artificial intelligence has created a copyright law boondoggle as creators attempt to protect their works and tech companies skirt rules or find loopholes to gather more training materials. Alsup's ruling is one of the first that will likely set the foundation for legal precedents around what AI tools can and cannot do.

Using copyrighted materials can be deemed fair use if the output is determined to be "transformative," or not a substitute for the original work. "The technology at issue was among the most transformative many of us will see in our lifetimes," Alsup wrote.

Despite the fair use designation, the ruling still provides some recourse for the writers: they can take Anthropic to court for piracy. "Anthropic downloaded over seven million pirated copies of books, paid nothing, and kept these pirated copies in its library even after deciding it would not use them to train its AI (at all or ever again)," Alsup wrote. "Authors argue Anthropic should have paid for these pirated library copies. This order agrees."

This article originally appeared on Engadget at https://www.engadget.com/ai/judge-rules-anthropics-ai-training-on-copyrighted-materials-is-fair-use-182602056.html?src=rss
Anthropic offers its Claude AI model to the federal government for $1
Anthropic has announced it will offer its Claude AI model to all three branches of the US government for $1, following an almost identical deal offered by OpenAI last week. Both deals come after the General Services Administration added OpenAI, Gemini and Anthropic to a list of approved AI vendors for the federal government.

Similar to the OpenAI deal, Anthropic will offer access to its commercial-tier service, Claude for Enterprise, for a period of one year at a cost of just $1. The offer also encompasses Claude for Government, which supports FedRAMP High workloads, allowing federal workers to use Claude for sensitive unclassified work. Government department or agency leadership can reach out today to gain access.

Anthropic is no stranger to working with the federal government. Earlier this summer, the Department of Defense awarded Anthropic, Google, OpenAI and xAI deals worth up to $200 million each to develop military applications.

The company made no direct mention of the Trump administration's AI Action Plan, or its requirement that large language models used by the federal government be "free from top-down ideological bias." The tacit understanding is that these LLMs must not espouse support for anything the current administration opposes. President Trump even issued an executive order decreeing that AI must not favor "ideological dogmas such as DEI" in order to work with the federal government.

This latest deal comes as AI companies increasingly look to build close relationships with policymakers and the current administration. This week, NVIDIA agreed to a revenue-sharing arrangement with the US government in order to sell its H20 AI GPUs to China. The current administration has made no secret of its wish for federal agencies to maximize their use of AI.

This article originally appeared on Engadget at https://www.engadget.com/ai/anthropic-offers-its-claude-ai-model-to-the-federal-government-for-1-154217798.html?src=rss