Microsoft’s new safety system can catch hallucinations in its customers’ AI apps
by Emilia David, The Verge
Illustration: The Verge
Sarah Bird, Microsoft's chief product officer of responsible AI, tells The Verge in an interview that her team has designed several new safety features that will be easy to use for Azure customers who aren't hiring groups of red teamers to test the AI services they built. Microsoft says these LLM-powered tools can detect potential vulnerabilities, monitor for hallucinations "that are plausible yet unsupported," and block malicious prompts in real time for Azure AI customers working with any model hosted on the platform.
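For a concrete sense of how an app might use the hallucination monitor, here is a minimal sketch in Python that calls Azure AI Content Safety's groundedness detection preview endpoint to check a model's answer against its source documents. The resource name, key placeholder, request schema, and response fields are assumptions drawn from Azure's public preview documentation, not details confirmed in this article.

```python
# Minimal sketch: checking a model answer for "plausible yet unsupported"
# claims with Azure AI Content Safety's groundedness detection preview API.
# The endpoint path, api-version, request schema, and response fields are
# assumptions based on Azure's public preview docs; verify before relying on them.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # hypothetical resource
API_KEY = "<your-content-safety-key>"  # hypothetical key placeholder


def check_groundedness(query: str, answer: str, sources: list[str]) -> dict:
    """Ask the service whether `answer` is supported by `sources`."""
    resp = requests.post(
        f"{ENDPOINT}/contentsafety/text:detectGroundedness",
        params={"api-version": "2024-02-15-preview"},  # preview version, may change
        headers={"Ocp-Apim-Subscription-Key": API_KEY},
        json={
            "domain": "Generic",
            "task": "QnA",
            "qna": {"query": query},
            "text": answer,               # the model output to audit
            "groundingSources": sources,  # the documents the answer should rest on
        },
        timeout=30,
    )
    resp.raise_for_status()
    # Expected shape (assumed): {"ungroundedDetected": bool, "ungroundedPercentage": ...}
    return resp.json()


result = check_groundedness(
    "What damage does the policy cover?",
    "The policy covers flood damage up to $1 million.",
    ["The policy covers fire damage up to $500,000."],
)
if result.get("ungroundedDetected"):
    print("Potential hallucination: answer not supported by the grounding sources.")
```

A flagged answer could then be blocked, corrected, or regenerated before it reaches the user.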
"We know that customers don't all have deep expertise in prompt injection attacks or hateful content, so the evaluation system generates the prompts needed to simulate these types of attacks," Bird says. Customers can then get a...