Nvidia Says It Can Prevent Chatbots From Hallucinating
upstart writes:
Nvidia, the tech giant that invented the first GPU -- now a crucial piece of technology for generative AI models -- unveiled new software on Tuesday that has the potential to solve a big problem with AI chatbots.
The software, NeMo Guardrails, is supposed to ensure that smart applications, such as AI chatbots, powered by large language models (LLMs) are "accurate, appropriate, on topic and secure," according to Nvidia.
AI developers can use the open-source software to set up three types of boundaries for AI models: topical, safety, and security guardrails.
[...] The safety guardrails are an attempt to tackle the issue of misinformation and hallucinations.
When employed, they are meant to ensure that AI applications respond with accurate and appropriate information. For example, the software can enforce bans on inappropriate language and require that responses cite credible sources.
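Guardrails like the ones described above are defined in NeMo Guardrails using Colang, the project's configuration language for modeling conversational flows. The sketch below is a minimal, hedged example in the style of the project's documentation; the exact flow names and phrasings here are illustrative, not taken from the article.

```
# Illustrative Colang sketch: a topical guardrail that keeps a
# customer-service bot from discussing politics.

define user ask politics
  "what do you think about the government?"
  "which party should I vote for?"

define bot refuse politics
  "I'm a customer-service assistant, so I can't discuss politics."

define flow politics
  user ask politics
  bot refuse politics
```

When a user message matches the `ask politics` intent, the flow steers the model to the canned refusal instead of letting the LLM generate a free-form answer, which is how topical boundaries are enforced without retraining the model.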
[...] Nvidia claims that virtually all software developers will be able to use NeMo Guardrails, since it is simple to use, works with a broad range of LLM-enabled applications, and integrates with tools that enterprise app developers already use, such as LangChain.
The company will be incorporating NeMo Guardrails into its Nvidia NeMo framework, much of which is already available as open-source code on GitHub.
Read more of this story at SoylentNews.