Google Launches SynthID Detector – A Revolutionary AI Detection Tool. Is This the Beginning of Responsible AI Development?

by Krishi Chowdhary, from Techreport (#6XFNY)

Key Takeaways

  • Google has introduced SynthID Detector, a powerful tool that can detect AI-generated content.
  • It works by identifying SynthID-generated watermarks in content served up by Google AI tools, such as Imagen, Gemini, and Lyria.
  • The detector is currently in the testing phase and only available for use by joining a waitlist.
  • SynthID Detector is also open-source, allowing anyone to build on the tech architecture.

Google has launched SynthID Detector, a tool that can recognize content generated with Google's suite of AI tools.

SynthID, in case you didn't know, is a state-of-the-art watermarking technology launched by Google in August 2023. It embeds a watermark in AI-generated content that is invisible to the naked eye.

Initially, SynthID was launched only for AI-generated images, but it has now been extended to text, video, and audio content generated using tools like Imagen, Gemini, Lyria, and Veo.

The detector uses this SynthID watermarking to identify AI content. When you upload an image, audio, or video to the detector tool, it'll look for this watermark. If it finds one, it'll highlight the part of the content that is most likely to be watermarked.
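Google hasn't published SynthID's full scheme, but the general idea behind statistical text watermarking can be sketched in a few lines. The toy below is our own illustration, with a made-up key and threshold, not Google's algorithm: the generator is nudged toward a keyed 'green' subset of the vocabulary, and a detector that knows the key checks whether the green fraction of a text is suspiciously high.

```python
import hashlib
import random

KEY = "demo-secret"  # hypothetical shared key between generator and detector

def is_green(token: str) -> bool:
    """Pseudo-randomly partition the vocabulary with a keyed hash."""
    digest = hashlib.sha256((KEY + token).encode()).digest()
    return digest[0] % 2 == 0  # roughly half of all tokens land in 'green'

def watermarked_choice(candidates, rng):
    """Generator side: prefer 'green' tokens when sampling."""
    green = [t for t in candidates if is_green(t)]
    return rng.choice(green) if green else rng.choice(candidates)

def detect(tokens, threshold=0.7):
    """Detector side: an unusually high green fraction suggests a watermark."""
    frac = sum(is_green(t) for t in tokens) / len(tokens)
    return frac >= threshold, frac
```

Real schemes are far subtler than this: they perturb sampling probabilities only slightly, so the output quality is preserved while the statistical signal remains detectable.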

It's worth noting, though, that the SynthID Detector is currently in the testing phase. Google has released a waitlist form for researchers, journalists, and media professionals.


Google has also partnered with NVIDIA to watermark videos generated by its Cosmos AI model. More importantly, Google announced a partnership with GetReal Security, a pioneer in deepfake detection that has raised around $17.5 million in equity funding.

We're likely to see an increasing number of such partnerships from Google's end, meaning SynthID Detector's scope will keep broadening. So, you'll be able to detect not just Google-generated AI content but also content generated with other AI platforms.

The Need for SynthID Detector

Notwithstanding all of the benefits that artificial intelligence has brought us, it has also become a powerful tool in the hands of criminals. We have seen hundreds of incidents where innocent people were scammed or threatened using AI-generated content.

For example, on May 13, Sandra Rogers, a Lackawanna County woman, was found guilty of possessing AI-generated child sex abuse images. In another incident, a 17-year-old extorted personal information from 19 victims by creating sexually explicit deepfakes and threatening to leak them.

A man in China was scammed out of $622,000 by a scammer using an AI-generated voice over the phone impersonating the man's friend. Similar scams have become popular in the US and even in countries like India that aren't really at the forefront of AI technology.

In addition to crimes against civilians, AI is also being used to stir up political unrest. For instance, a political consultant was fined $6M for deploying AI-generated robocalls during the 2024 US presidential primaries. He used AI to mimic Joe Biden's voice and urged voters in New Hampshire not to vote in the state's Democratic primary.

Back in 2022, a fake video of Ukrainian President Zelensky was broadcast on Ukraine 24, a Ukrainian news website that was allegedly hacked. The deepfake showed Zelensky apparently surrendering to Russia and calling on his troops to lay down their arms.

This is only the tip of the iceberg. The internet is filled with such cases, with newer ones coming out almost every single day. AI is increasingly being weaponized against institutions, government, and the societal order to cause political and social unrest.

Image credit: Statista

Therefore, a tool like SynthID Detector can be a beacon of hope to combat such perpetrators. News houses, publications, and regulators can run a suspected image or content through the detector to verify a story before running it for millions to view.

More importantly, tools like SynthID will also go a long way in instilling some semblance of fear among criminals, who will know that they can be busted anytime.

And What About the Legal Grey Area of AI Usage?

Besides the above outright illegal use of AI, there's also a moral dilemma attached to increasing AI use. Educators are specifically worried about the use of LLMs and text-generating AI models in schools, colleges, and universities.

Instead of putting in the hard yards, students can now punch in a couple of prompts to generate detailed, human-like essays and assignments. A study at the University of Pennsylvania split students into two groups: one with access to ChatGPT and one without any such LLM tools.

During practice, the students with ChatGPT access solved 48% more math problems correctly. However, on a subsequent test taken without AI assistance, they solved 17% fewer problems than those who had never used it.

This suggests that LLMs aren't really contributing to learning and academic development. They are, instead, tools to simply 'complete tasks,' which is slowly robbing us of our ability to think.

Another study, 'AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking,' found that people aged 17-25 have both the highest AI usage and the lowest critical-thinking scores. Coincidence? We don't think so.

Clearly, the use of AI tools isn't contributing to the development of young minds. Instead, it has become a crutch for people who wish to cut corners.

We call this a moral dilemma because the use of AI tools for education or any other purpose, for that matter, is not illegal. Instead, it's more of a conscious decision to let go of our own critical thinking, which, as most would argue, is what makes us human.

Contemporary AI Detectors Are Worthless

Because AI is replacing critical thinking and being used to outsource work by students, it's understandable why educational institutions have resorted to AI detectors to check for the presence of AI-generated content in student submissions and assignments.

However, these AI detectors are notoriously unreliable, often little better than guessing.

Christopher Penn, an AI expert, made a LinkedIn post titled 'AI Detectors are a joke.' He fed the US Declaration of Independence to a 'market-leading' AI detector, and guess what? Apparently, the detector flagged the Declaration as 97% AI-generated. Time travel?


The inaccurate results from these detectors stem from their reliance on statistical proxies such as perplexity and burstiness. Consequently, if you write an article that sounds somewhat robotic, uses limited vocabulary, and features similar sentence lengths, these 'AI detectors' may classify your work as the output of an AI language model.
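To make one of those terms concrete, here is a toy, pure-Python sketch of 'burstiness' (the helper name and the examples are our own illustration, not any real detector's code): it simply measures how much sentence lengths vary, one of the crude signals such tools lean on.

```python
import re
from statistics import pstdev

def burstiness(text: str) -> float:
    """Population std dev of sentence lengths, in words.
    Uniform sentence lengths (low burstiness) are one crude
    signal naive detectors associate with machine-written text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return pstdev(lengths) if len(lengths) > 1 else 0.0

varied = ("Short one. Then a much longer sentence that wanders "
          "through several clauses before stopping. Done.")
uniform = "The cat sat down. The dog ran fast. The bird flew high."

print(burstiness(varied) > burstiness(uniform))  # varied prose scores higher
```

The problem is obvious from the sketch: a careful human writer with an even, plain style gets the same low score as a language model, which is exactly how the Declaration of Independence ends up flagged as AI.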

Bottom line: these tools are not reliable, which is likely why OpenAI discontinued its own AI detection tool in mid-2023, citing low accuracy. The troubling part is that much of the system, including universities, still relies on them for major decisions such as student suspensions and expulsions.

This is exactly why we need a better and more reliable tool to call out AI-generated content. Enter SynthID Detector.

SynthID Detector Is Open-Source

Possibly the biggest piece of positive news with regard to Google's SynthID Detector announcement is that the tool has been kept open source. This will allow other companies and creators to build on the existing architecture and incorporate AI watermark detection in their own artificial intelligence models.

Remember, SynthID Detector currently only works for Google's AI tools, which is just a small part of the whole artificial intelligence market. So, if someone generates a text using ChatGPT, there's still no reliable way to tell if it was AI-generated.

Maybe that's why Google has kept the detector open-source, hoping that other developers would take a cue from it.

All in all, it's commendable that Google hasn't gatekept this essential development. Other companies concerned about the growing misuse of their AI models should follow suit and contribute to the greater good of making AI safe for society.

