OpenAI To Offer Remedies To Resolve Italy's ChatGPT Ban
The company behind ChatGPT will propose measures to resolve data privacy concerns that sparked a temporary Italian ban on the artificial intelligence chatbot, regulators said Thursday. The Associated Press reports: In a video call late Wednesday between the watchdog's commissioners and OpenAI executives including CEO Sam Altman, the company promised to set out measures to address the concerns. Those remedies have not been detailed.

The Italian watchdog said it didn't want to hamper AI's development but stressed to OpenAI the importance of complying with the 27-nation EU's stringent privacy rules. The regulators imposed the ban after some users' messages and payment information were exposed to others. They also questioned whether there is a legal basis for OpenAI to collect the massive amounts of data used to train ChatGPT's algorithms, and raised concerns that the system could sometimes generate false information about individuals.

Other regulators in Europe and elsewhere have started paying more attention since Italy's action. Ireland's Data Protection Commission said it's "following up with the Italian regulator to understand the basis for their action and we will coordinate with all EU Data Protection Authorities in relation to this matter." France's data privacy regulator, CNIL, said it's investigating after receiving two complaints about ChatGPT. Canada's privacy commissioner has also opened an investigation into OpenAI after receiving a complaint about the suspected "collection, use and disclosure of personal information without consent."

In a blog post this week, the U.K. Information Commissioner's Office warned that "organizations developing or using generative AI should be considering their data protection obligations from the outset" and design systems with data protection as a default. "This isn't optional -- if you're processing personal data, it's the law," the office said.
In an apparent response to the concerns, OpenAI published a blog post Wednesday outlining its approach to AI safety. The company said it works to remove personal information from training data where feasible, fine-tunes its models to reject requests for the personal information of private individuals, and acts on requests to delete personal information from its systems.
Read more of this story at Slashdot.