Nvidia’s AI software tricked into leaking data
by Financial Times, via Ars Technica
(Image credit: VGG | Getty Images)
A feature in Nvidia's artificial intelligence software can be manipulated into ignoring safety restraints and revealing private information, according to new research.
Nvidia has created a system called the "NeMo Framework," which allows developers to work with a range of large language models, the underlying technology that powers generative AI products such as chatbots.
The chipmaker's framework is designed to be adopted by businesses, for instance by pairing a company's proprietary data with language models to answer questions, a feature that could, for example, replicate the work of customer service representatives or advise people seeking simple health care guidance.
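The "proprietary data alongside language models" pattern the article describes is commonly implemented as retrieval-augmented generation: relevant company documents are fetched first, then bundled into the prompt sent to the model. The sketch below illustrates that general pattern only; the document store, keyword retrieval, and prompt format are illustrative assumptions, not Nvidia's NeMo API.

```python
# Hypothetical sketch of grounding a language model's answers in a
# company's own data (retrieval-augmented generation). All names and
# data here are invented for illustration; this is not the NeMo API.

DOCS = {
    "refunds": "Refunds are issued within 14 days of a return request.",
    "shipping": "Standard shipping takes 3-5 business days.",
}

def retrieve(question: str) -> str:
    """Naive keyword retrieval: return the first document that shares
    a word with the question (real systems use vector search)."""
    words = set(question.lower().split())
    for topic, text in DOCS.items():
        if topic in words or words & set(text.lower().split()):
            return text
    return ""

def build_prompt(question: str) -> str:
    """Combine the retrieved company data with the user's question,
    instructing the model to answer only from that context."""
    context = retrieve(question)
    return f"Answer using only this context:\n{context}\nQuestion: {question}"

print(build_prompt("How long do refunds take?"))
```

The research described in the article concerns exactly this seam: because the retrieved data and the user's question end up in the same prompt, a crafted question can coax the model into ignoring its instructions and disclosing the underlying data.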