OpenAI accused of trying to profit off AI model inspection in court

by Ashley Belanger
from Ars Technica - All content

Since ChatGPT became an instant hit roughly two years ago, tech companies around the world have rushed to release AI products while the public is still in awe of AI's seemingly radical potential to enhance daily life.

But at the same time, governments globally have warned it can be hard to predict how rapidly popularizing AI can harm society. Novel uses could suddenly debut and displace workers, fuel disinformation, stifle competition, or threaten national security, and those are just some of the obvious potential harms.

While governments scramble to establish systems to detect harmful applications (ideally before AI models are deployed), some of the earliest lawsuits over ChatGPT show just how hard it is for the public to crack open an AI model and find evidence of harms once a model is released into the wild. That task is seemingly only made harder by an increasingly thirsty AI industry intent on shielding models from competitors to maximize profits from emerging capabilities.


External Content
Source RSS or Atom Feed
Feed Location http://feeds.arstechnica.com/arstechnica/index
Feed Title Ars Technica - All content
Feed Link https://arstechnica.com/