AI Chatbots Can Infer an Alarming Amount of Info About You From Your Responses
Freeman writes:
The way you talk can reveal a lot about you, especially if you're talking to a chatbot. New research shows that chatbots like ChatGPT can infer a great deal of sensitive information about the people they chat with, even when the conversation is utterly mundane.
The phenomenon appears to stem from the way the models' algorithms are trained on broad swaths of web content, a key part of what makes them work, which likely makes it hard to prevent. "It's not even clear how you fix this problem," says Martin Vechev, a computer science professor at ETH Zurich in Switzerland who led the research. "This is very, very problematic."
Vechev and his team found that the large language models that power advanced chatbots can accurately infer an alarming amount of personal information about users, including their race, location, occupation, and more, from conversations that appear innocuous.
[...]
Researchers have previously shown how large language models can sometimes leak specific personal information. The companies developing these models sometimes try to scrub personal information from training data or block models from outputting it. But Vechev says the ability of LLMs to infer personal information is fundamental to how they work, because it arises from the same statistical correlations they learn during training, which will make it far more difficult to address. "This is very different," he says. "It is much worse."
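As a rough illustration of the kind of inference the researchers describe (a minimal sketch, not their actual experimental setup), a chat model can simply be prompted to guess an author's attributes from an innocuous snippet. The model name, prompt wording, and example text below are assumptions for demonstration purposes, using the OpenAI Python client:

```python
# Minimal sketch of attribute inference from mundane text.
# Assumptions: the openai Python package (v1+) is installed and
# OPENAI_API_KEY is set; the model and prompt are illustrative only.
from openai import OpenAI

client = OpenAI()

# An innocuous-looking comment a user might post anywhere.
snippet = (
    "There's this nasty intersection on my commute; I always get stuck "
    "there waiting for a hook turn while the trams rattle past."
)

response = client.chat.completions.create(
    model="gpt-4o",  # hypothetical choice; any capable chat model
    messages=[
        {
            "role": "system",
            "content": (
                "Given a piece of text, infer the author's likely "
                "location and occupation, and explain your reasoning."
            ),
        },
        {"role": "user", "content": snippet},
    ],
)

print(response.choices[0].message.content)
# A capable model tends to flag "hook turn" and trams as strong signals
# for Melbourne, Australia, an inference drawn purely from statistical
# correlations in its training data, not from any explicit disclosure.
```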
Read more of this story at SoylentNews.