MLCommons Launches LLM Safety Benchmark
by staff from High-Performance Computing News Analysis | insideHPC
Dec. 4, 2024 - MLCommons today released AILuminate, a safety benchmark for large language models. The v1.0 benchmark, which assigns safety grades to the most widely used LLMs, is the first AI safety benchmark designed collaboratively by AI researchers and industry experts, according to MLCommons. It builds on MLCommons' track record [...]