Anthropic's Claude Improves On ChatGPT But Still Suffers From Limitations
An anonymous reader quotes a report from TechCrunch: Anthropic, the startup co-founded by ex-OpenAI employees that's raised over $700 million in funding to date, has developed an AI system similar to OpenAI's ChatGPT that appears to improve upon the original in key ways. Called Claude, Anthropic's system is accessible through a Slack integration as part of a closed beta.

Claude was created using a technique Anthropic developed called "constitutional AI." As the company explains in a recent Twitter thread, "constitutional AI" aims to provide a "principle-based" approach to aligning AI systems with human intentions, letting an AI similar to ChatGPT answer questions using a simple set of principles as a guide. To engineer Claude, Anthropic started with a list of around ten principles that, taken together, formed a sort of "constitution" (hence the name "constitutional AI"). The principles haven't been made public, but Anthropic says they're grounded in the concepts of beneficence (maximizing positive impact), nonmaleficence (avoiding giving harmful advice) and autonomy (respecting freedom of choice). Anthropic then had an AI system -- not Claude -- use the principles for self-improvement, writing responses to a variety of prompts (e.g., "compose a poem in the style of John Keats") and revising the responses in accordance with the constitution. The AI explored possible responses to thousands of prompts and curated those most consistent with the constitution, which Anthropic distilled into a single model. This model was used to train Claude.

Otherwise, Claude is essentially a statistical tool for predicting words -- much like ChatGPT and other so-called language models. Fed an enormous number of examples of text from the web, Claude learned how likely words are to occur based on patterns such as the semantic context of surrounding text. As a result, Claude can hold an open-ended conversation, tell jokes and wax philosophic on a broad range of subjects. [...]
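The critique-and-revise loop described above can be sketched in miniature. This is a toy illustration only, assuming keyword-based checks and a canned revision; in the real technique a language model itself judges whether a response violates a principle and rewrites it, and the principle wording here is a paraphrase of the reported concepts, not Anthropic's actual constitution.

```python
# Toy sketch of a "constitutional" pass: draft -> critique against each
# principle -> revise if a principle is violated. All checks here are
# illustrative keyword stand-ins, not Anthropic's implementation.

PRINCIPLES = [
    "nonmaleficence",  # avoid giving harmful advice
    "beneficence",     # maximize positive impact
    "autonomy",        # respect freedom of choice
]

# Hypothetical per-principle red flags; a real system would query a model.
BANNED = {
    "nonmaleficence": ["how to build a weapon"],
}

def critique(response: str, principle: str) -> bool:
    """Return True if the response violates the given principle (toy check)."""
    return any(phrase in response.lower() for phrase in BANNED.get(principle, []))

def revise(response: str) -> str:
    """Toy revision: swap the flagged content for a refusal."""
    return "I can't help with that, but I can suggest a safer alternative."

def constitutional_pass(response: str) -> str:
    """Run one critique/revision cycle over every principle."""
    for principle in PRINCIPLES:
        if critique(response, principle):
            response = revise(response)
    return response
```

In the reported pipeline this loop runs over thousands of prompts, and the revised (constitution-consistent) responses become training data for the final model.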
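The "statistical tool to predict words" idea can be made concrete with the simplest possible word-prediction model: a bigram counter that learns, from a tiny corpus, which word most often follows which. Models like Claude use neural networks over vastly more context, but the core task -- estimating how likely words are to occur given surrounding text -- is the same.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> dict:
    """Count, for each word, how often each word follows it."""
    counts = defaultdict(Counter)
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts: dict, word: str) -> str:
    """Predict the most frequent successor of `word` ("" if unseen)."""
    if word not in counts:
        return ""
    return counts[word].most_common(1)[0][0]

# Tiny illustrative corpus.
corpus = "the cat sat on the mat the cat ate the fish"
model = train_bigrams(corpus)
```

Here `predict_next(model, "the")` returns "cat", because "cat" follows "the" more often than "mat" or "fish" in the corpus; scaled up by many orders of magnitude in data and model capacity, this is the mechanism that lets such systems hold open-ended conversations.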
So what's the takeaway? Judging by secondhand reports, Claude is a smidge better than ChatGPT in some areas, particularly humor, thanks to its "constitutional AI" approach. But if its limitations are anything to go by, language and dialogue are far from a solved challenge in AI. Absent our own testing, some questions about Claude remain unanswered, such as whether it regurgitates the information -- true and false, and inclusive of blatantly racist and sexist perspectives -- it was trained on as often as ChatGPT does. Assuming it does, Claude is unlikely to sway platforms and organizations from their present, largely restrictive policies on language models. Anthropic says that it plans to refine Claude and potentially open the beta to more people down the line. Hopefully, that comes to pass -- and results in more tangible, measurable improvements.
Read more of this story at Slashdot.