Meta Wants Llama 3 To Handle Contentious Questions as Google Grapples With Gemini Backlash
An anonymous reader shares a report (paywalled): As Google grapples with the backlash over historically inaccurate responses from its Gemini chatbot, Meta Platforms is dealing with a related issue. As part of its work on the forthcoming version of its large language model, Llama 3, Meta is trying to overcome a problem perceived in Llama 2: its answers to anything remotely contentious aren't helpful. Safeguards added to Llama 2, which Meta released last July and which powers the artificial intelligence assistant in its apps, prevent the LLM from answering a broad range of questions deemed controversial. These guardrails have made Llama 2 appear too "safe" in the eyes of Meta's senior leadership, as well as among some researchers who worked on the model itself, according to people who work at Meta. [...] Meta's conservative approach with Llama 2 was designed to ward off public relations disasters, those people said. But researchers are now trying to loosen up Llama 3 so it engages more with users who ask about difficult topics, offering context rather than simply refusing tricky questions, two of the people said. The new version of the model should, in theory, better distinguish when a word has multiple meanings. For example, Llama 3 might understand that a question about how to kill a vehicle's engine is asking how to shut it off, not how to destroy it. Meta also plans to appoint someone internally in the coming weeks to oversee tone and safety training as part of its efforts to make the model's responses more nuanced, one of the people said. The company plans to release Llama 3 in July, though the timeline could still change, they added.
Read more of this story at Slashdot.