by Sarah Fielding (Engadget)
OpenAI has launched a new web page, called the Safety Evaluations Hub, to publicly share information such as the hallucination rates of its models. The hub will also report whether a model produces harmful content, how well it follows instructions and how it holds up against attempted jailbreaks.

The tech company claims this new page will provide additional transparency into OpenAI, a company that, for context, has faced multiple lawsuits alleging it illegally used copyrighted material to train its AI models. Oh, yeah, and it's worth mentioning that The New York Times claims the tech company accidentally deleted evidence in the newspaper's plagiarism case against it.

The Safety Evaluations Hub is meant to expand on OpenAI's system cards. Those only outline a model's safety measures at launch, whereas the hub should provide ongoing updates.

"As the science of AI evaluation evolves, we aim to share our progress on developing more scalable ways to measure model capability and safety," OpenAI states in its announcement. "By sharing a subset of our safety evaluation results here, we hope this will not only make it easier to understand the safety performance of OpenAI systems over time, but also support community efforts to increase transparency across the field." OpenAI adds that it's working to have more proactive communication in this area throughout the company.