The biggest AI companies agree to crack down on child abuse images
by Emilia David, The Verge
Illustration by Cath Virginia / The Verge | Photos from Getty Images
Tech companies like Google, Meta, OpenAI, Microsoft, and Amazon committed today to reviewing their AI training data for child sexual abuse material (CSAM) and removing it from use in any future models.
The companies signed on to a new set of principles meant to limit the proliferation of CSAM. They promise to ensure their training datasets do not contain CSAM, to avoid datasets with a high risk of including CSAM, and to remove CSAM imagery or links to CSAM from data sources. The companies also commit to "stress-testing" AI models to ensure they don't generate any CSAM imagery and to release models only after they have been evaluated for child safety.
Other signatories include Anthropic, Civitai, Metaphysic, Mistral AI, and Stability AI.