Microsoft and OpenAI Face A $3B Lawsuit Over Privacy Violations
Sixteen pseudonymous individuals filed a lawsuit against Microsoft and OpenAI on Wednesday, accusing the companies of trampling on privacy with ChatGPT.
According to the lawsuit, the companies' ChatGPT-based AI products collected and divulged personal information without appropriate notice or consent.
"Despite established protocols for the purchase and use of personal information, Defendants took a different approach: theft," the official complaint stated.
The plaintiffs accused Microsoft and OpenAI of scraping 300 billion words from the internet - books, articles, websites, and posts - including personal information obtained without consent.
The complaint alleged that rather than registering as a data broker, as it was required to do, OpenAI collected this information in secret.
What's the Complaint About? A More Detailed Look
Filed in a San Francisco federal court, the lawsuit claimed that Microsoft and OpenAI "collect, store, track, share, and disclose" personal information belonging to millions of people through their AI products.
The personal information in question includes names, account information, product details, payment details, transaction records, emails, login credentials, contact details, browser data, analytics, cookies, chat logs, usage data, searches, social media information, and other online activities.
According to the complaint, Microsoft and OpenAI have integrated their AI products with the personal information of millions of people. The "stolen" information reflects their voting records, political views, religious beliefs, hobbies, sexual orientations and gender identities, work histories, social and support group memberships, family photos, friends, and more.
It claimed that the defendants have failed to filter personally identifiable information out of their training models, which means such information could potentially be disclosed to people worldwide.
The 157-page complaint relies heavily on media and academic citations about the potential dangers and ethics of AI models. However, it is relatively light on specific instances of harm caused by AI products.
How Does the Situation Look for Microsoft and OpenAI?
The lawsuit accused Microsoft and OpenAI of violating the Computer Fraud and Abuse Act by intercepting interaction data via plugins. It also contended that the companies violated the Electronic Communications Privacy Act by obtaining and using private information, and by using integrations with ChatGPT and similar products to intercept communications between users and third-party services.
Other laws that Microsoft and OpenAI have violated, as per the complaint, include:
- Illinois Biometric Information Privacy Act
- The Consumer Fraud and Deceptive Business Practices Act
- The California Invasion of Privacy Act, and
- The New York General Business Law
The lawsuit seeks class-action certification and damages of $3 billion. However, the figure is likely just a placeholder; actual damages, if any, would be determined by the court's findings should the plaintiffs prevail.
It also remains to be seen whether and how the content and metadata created by the plaintiffs have actually been exploited, and whether any such data would be reproduced by ChatGPT or other AI models.
Microsoft and OpenAI also face a separate lawsuit, filed last November, alleging that the Copilot service violates licensing requirements by reproducing code created by millions of software developers. The new lawsuit puts them in a tougher position, but whether the companies actually violated the cited laws remains to be determined.