Microsoft President: 'You Can't Believe Every Video You See or Audio You Hear'

by EditorDavid from Slashdot on (#6JPZD)
"We're currently witnessing a rapid expansion in the abuse of these new AI tools by bad actors," writes Microsoft VP Brad Smith, "including through deepfakes based on AI-generated video, audio, and images. "This trend poses new threats for elections, financial fraud, harassment through nonconsensual pornography, and the next generation of cyber bullying." Microsoft found its own tools being used in a recently-publicized episode, and the VP writes that "We need to act with urgency to combat all these problems." Microsoft's blog post says they're "committed as a company to a robust and comprehensive approach," citing six different areas of focus: A strong safety architecture. This includes "ongoing red team analysis, preemptive classifiers, the blocking of abusive prompts, automated testing, and rapid bans of users who abuse the system... based on strong and broad-based data analysis." Durable media provenance and watermarking. ("Last year at our Build 2023 conference, we announced media provenance capabilities that use cryptographic methods to mark and sign AI-generated content with metadata about its source and history.") Safeguarding our services from abusive content and conduct. ("We are committed to identifying and removing deceptive and abusive content" hosted on services including LinkedIn and Microsoft's Gaming network.) Robust collaboration across industry and with governments and civil society. This includes "others in the tech sector" and "proactive efforts" with both civil society groups and "appropriate collaboration with governments." Modernized legislation to protect people from the abuse of technology. "We look forward to contributing ideas and supporting new initiatives by governments around the world." Public awareness and education. "We need to help people learn how to spot the differences between legitimate and fake content, including with watermarking. This will require new public education tools and programs, including in close collaboration with civil society and leaders across society."Thanks to long-time Slashdot reader theodp for sharing the article
