
AI-powered romantic chatbots are a privacy nightmare

by WIRED
from Ars Technica
(credit: iStock via Getty Images)

You shouldn't trust any answers a chatbot sends you. And you probably shouldn't trust it with your personal information either. That's especially true for "AI girlfriends" or "AI boyfriends," according to new research.

An analysis of 11 so-called romance and companion chatbots, published on Wednesday by the Mozilla Foundation, has found a litany of security and privacy concerns with the bots. Collectively, the apps, which have been downloaded more than 100 million times on Android devices, gather huge amounts of people's data; use trackers that send information to Google, Facebook, and companies in Russia and China; allow users to use weak passwords; and lack transparency about their ownership and the AI models that power them.

Since OpenAI unleashed ChatGPT on the world in November 2022, developers have raced to deploy large language models and create chatbots that people can interact with and pay to subscribe to. The Mozilla research provides a glimpse into how this gold rush may have neglected people's privacy, and into the tensions between emerging technologies and the ways they gather and use data. It also indicates how people's chat messages could be abused by hackers.

