DEF CON Leverages Red Teaming to Challenge Hackers
The annual DEF CON conference is set to launch a mass hacking event targeting legal AI language models (LLMs) this year. The said event will analyze the security and robustness of integral artificial intelligence systems.
The DEFCON will run from August 10-13th, 2023, in Las Vegas, Nevada (USA).The organizers of this collaborative convention describe it as the largest red teaming exercise ever for any group of AI models." Here, hackers will participate, dig deeper, and find bugs and biases in popular LLMs from giants like Anthropic, Google, OpenAI, etc.
At the event, the organizers invited a large pool of people. It will include numerous students from apparently ignored communities and institutions. They will be handed over the responsibility to find loopholes in LLMs that drive the modern-era generative AI and chatbots.
The Machine Learning Specific BiasesWhile conventional bugs are more prominent in traditional coding, they may create more machine learning-specific problems. They include hallucinations, jailbreaks, and biases. In fact, these days, security and ethical professionals are vigorously struggling to tackle these technology flaws.
According to the founder of AI Village, Sven Cattell, companies have always been solving these biases with adept red teams. However, the work hasn't been carried out publicly in most cases.
The diverse issues with these models will not be resolved until more people know how to red team and assess them.Sven CattellnCattell has designed the Hack the LLM" contest to expose the potential weaknesses in the security of these systems. This will further help developers and cybersecurity experts address the issues proactively. According to the DEF CON organizers, this event will successfully spread awareness about the cruciality of security LLMs. Besides, it may encourage the idea of developing advanced security measures with the help of specialized red teaming.
Cattell expects to see bug bounties and live hacking events transformed to align with the ML model-based systems.
He believes that this initiative will serve two purposes simultaneously. First, it will address the harms. Next, it will build a knowledgeable network of well-acquainted researchers with the fixes.
The Proposed PlanThe AI Village has proposed providing the red teaming participants with laptops. Besides, they will gain timed access to LLMs from different vendors. As of now, they will work with models from Stability, Anthropic, Hugging Face, Google, Nvidia, and OpenAI. Furthermore, the red teams can access a scale AI-developed evaluation platform. A capture-the-flag-style point system will drive the testing of an extensive range of threats.
AI Village has declared that the participants who will secure the highest points will win topline GPU from Nvidia.The event has got backing from America's National Science Foundation, the Congregational AI Caucus, and the White House Office of Science.
It is being said that AI Village has announced the event in the aftermath of the meeting between vice president Kamla Harris and OpenAI, Anthropic, Google, and Microsoft's spokespersons. The meeting was held to discuss and identify the potential risk of AI language models in terms of national and individual security.
The post DEF CON Leverages Red Teaming to Challenge Hackers appeared first on The Tech Report.