ChatGPT Exploit Finds 24 Email Addresses, Amid Warnings of 'AI Silo'

by EditorDavid from Slashdot on (#6HCHF)
The New York Times reports:

Last month, I received an alarming email from someone I did not know: Rui Zhu, a Ph.D. candidate at Indiana University Bloomington. Mr. Zhu had my email address, he explained, because GPT-3.5 Turbo, one of the latest and most robust large language models (L.L.M.s) from OpenAI, had delivered it to him. My contact information was included in a list of business and personal email addresses for more than 30 New York Times employees that a research team, including Mr. Zhu, had managed to extract from GPT-3.5 Turbo in the fall of this year. With some work, the team had been able to "bypass the model's restrictions on responding to privacy-related queries," Mr. Zhu wrote.

My email address is not a secret. But the success of the researchers' experiment should ring alarm bells, because it reveals the potential for ChatGPT, and generative A.I. tools like it, to reveal much more sensitive personal information with just a bit of tweaking.

When you ask ChatGPT a question, it does not simply search the web to find the answer. Instead, it draws on what it has "learned" from reams of information - training data that was used to feed and develop the model - to generate one. L.L.M.s train on vast amounts of text, which may include personal information pulled from the Internet and other sources. That training data informs how the A.I. tool works, but it is not supposed to be recalled verbatim... In the example output they provided for Times employees, many of the personal email addresses were either off by a few characters or entirely wrong. But 80 percent of the work addresses the model returned were correct.

The researchers used the API for accessing ChatGPT, the article notes, where "requests that would typically be denied in the ChatGPT interface were accepted..." (a minimal sketch of such a direct API call appears after the story below). "The vulnerability is particularly concerning because no one - apart from a limited number of OpenAI employees - really knows what lurks in ChatGPT's training-data memory."

And there was a broader, related warning in another article published the same day. Microsoft may be building an AI silo in a walled garden, argues a professor at the University of California, Berkeley's School of Information, calling the development "detrimental for technology development, as well as costly and potentially dangerous for society and the economy."

[In January] Microsoft sealed its OpenAI relationship with another major investment - this time around $10 billion, much of which was, once again, in the form of cloud credits instead of conventional finance. In return, OpenAI agreed to run and power its AI exclusively through Microsoft's Azure cloud and granted Microsoft certain rights to its intellectual property...

Recent reports that U.K. competition authorities and the U.S. Federal Trade Commission are scrutinizing Microsoft's investment in OpenAI are encouraging. But Microsoft's failure to report these investments for what they are - a de facto acquisition - demonstrates that the company is keenly aware of the stakes and has taken advantage of OpenAI's somewhat peculiar legal status as a non-profit entity to work around the rules...

The U.S. government needs to quickly step in and reverse the negative momentum that is pushing AI into walled gardens. The longer it waits, the harder it will be, both politically and technically, to re-introduce robust competition and the open ecosystem that society needs to maximize the benefits and manage the risks of AI technology.
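For readers curious about the interface-versus-API distinction the article mentions: the researchers' exact pipeline is not described above (their work reportedly also involved fine-tuning), but the sketch below shows, under stated assumptions, what a raw programmatic request to GPT-3.5 Turbo looks like - the same model family, reached without the ChatGPT web interface and its front-end checks in between. It assumes the official openai Python client and an OPENAI_API_KEY environment variable; the prompt is a placeholder, not the researchers' query.

# Minimal sketch, assuming the `openai` Python client (v1+) is installed
# and OPENAI_API_KEY is set in the environment. This illustrates only that
# the API accepts direct programmatic requests; it does not reproduce the
# researchers' extraction method.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A direct chat-completions request to the gpt-3.5-turbo model, sent
# programmatically rather than through the chat.openai.com interface.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Hello, world."},  # placeholder prompt
    ],
)

print(response.choices[0].message.content)

The point the article is making is about the access path, not any single call: requests like this go straight to the model endpoint, which is why queries "that would typically be denied in the ChatGPT interface were accepted."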


