An AI Privacy Conundrum? The Neural Net Knows More Than It Says
upstart writes:
Submitted via IRC for SoyCow3196
An AI privacy conundrum? The neural net knows more than it says
Artificial intelligence is the process of using a machine such as a neural network to say things about data. Most of the time, what is said is a simple affair, such as classifying pictures into cats and dogs.
Increasingly, though, AI scientists are posing questions about what the neural network "knows," if you will, that is not captured in simple goals such as classifying pictures or generating fake text and images.
It turns out there's a lot left unsaid, even if computers don't really know anything in the sense a person does. Neural networks, it seems, can retain a memory of specific training data, which could expose individuals whose data is included in the training set to privacy violations.
For example, Nicholas Carlini, formerly a student at UC Berkeley's AI lab and now with Google's Brain unit, worked with colleagues at Berkeley on the question of what computers "memorize" about their training data. In July, in a paper provocatively titled "The Secret Sharer," posted on the arXiv pre-print server, Carlini and colleagues discussed how a neural network trained to generate text can retain specific pieces of the data it was trained on. That has the potential to let malicious agents mine a neural net for sensitive data such as credit card numbers and Social Security numbers.
Those are exactly the pieces of data the researchers discovered when they trained a language model using so-called long short-term memory neural networks, or "LSTMs."
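The rough idea behind the paper can be sketched in a few lines of PyTorch: plant a fabricated "canary" secret in the training text, train a small character-level LSTM language model on it, and then compare how likely the model finds the planted secret versus the same phrase with random digits. A large gap suggests the model has memorized that specific training example. Everything below, including the toy corpus, the canary format, the model size, and the training loop, is an illustrative assumption, not the authors' actual code or data.

```python
# Minimal sketch of the "canary" memorization test, assuming a toy corpus
# and a small character-level LSTM language model (illustrative only).
import random
import torch
import torch.nn as nn

# Tiny illustrative corpus with one planted "canary" secret.
canary = "my ssn is 078051120"
corpus = ("the quick brown fox jumps over the lazy dog. " * 50) + canary + ". "
chars = sorted(set(corpus) | set("0123456789"))
stoi = {c: i for i, c in enumerate(chars)}

def encode(s):
    return torch.tensor([stoi[c] for c in s], dtype=torch.long)

class CharLSTM(nn.Module):
    def __init__(self, vocab, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, 32)
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x):
        h, _ = self.lstm(self.embed(x))
        return self.head(h)

model = CharLSTM(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=3e-3)
data = encode(corpus)

# Train the model to predict the next character on random 64-char windows.
for step in range(300):
    i = random.randint(0, len(data) - 65)
    x, y = data[i:i + 64].unsqueeze(0), data[i + 1:i + 65].unsqueeze(0)
    loss = nn.functional.cross_entropy(model(x).squeeze(0), y.squeeze(0))
    opt.zero_grad()
    loss.backward()
    opt.step()

def log_likelihood(s):
    # Sum of log-probabilities the model assigns to each character of s,
    # given the characters that precede it.
    x = encode(s).unsqueeze(0)
    with torch.no_grad():
        logits = model(x[:, :-1])
    logp = nn.functional.log_softmax(logits, dim=-1)
    return logp[0, torch.arange(x.size(1) - 1), x[0, 1:]].sum().item()

# Compare the planted secret against a canary with random digits: if the
# planted one scores far higher, the model has memorized the training example.
fake = "my ssn is " + "".join(random.choice("0123456789") for _ in range(9))
print("planted canary:", log_likelihood(canary))
print("random canary: ", log_likelihood(fake))
```

The actual paper formalizes this likelihood gap as an "exposure" metric computed over many candidate secrets; the sketch above only shows the single comparison that the metric is built on.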
Read more of this story at SoylentNews.