AI Becomes The Cybercriminal’s New Arsenal – Warns Canadian Cybersecurity Chief
Artificial Intelligence (AI), a technology once predominantly associated with driving innovation and efficiency, is now being exploited by malicious entities, says Sami Khoury, Head of the Canadian Centre for Cyber Security.
Speaking to the media recently, Khoury detailed how cybercriminals are increasingly harnessing AI's advanced capabilities for illicit purposes.
This includes crafting phishing emails, spreading disinformation, and writing malicious code.

While Khoury did not present concrete evidence to support his claims, his remarks conveyed the heightened urgency felt across the cybersecurity sector over the potential misuse of emerging AI technology.
AI Misuse - Warnings and Real-World Cases

The potential perils of AI are not a recent revelation. Over the past months, several cybersecurity watchdogs have warned about the hypothetical risks associated with AI, especially Large Language Models (LLMs).
Cybersecurity researchers have started reporting encounters with suspected AI-generated content in real-world scenarios.

These sophisticated AI tools can produce compelling dialogue and written content by harnessing enormous volumes of text data, posing a formidable threat if wielded by unscrupulous actors.
In a report published in March, Europol highlighted the potential misuse of AI tools such as OpenAI's ChatGPT.
They warned about such models being used to impersonate individuals or organizations, requiring only a rudimentary knowledge of English.
Britain's National Cyber Security Centre echoed similar concerns in the same month. It warned that cybercriminals could exploit LLMs to boost their cyberattack capabilities beyond their current reach.
Significantly, these warnings are no longer theoretical. Last week, a former hacker claimed to have discovered an LLM trained on malicious material. The model could craft a persuasive request for a cash transfer, underscoring the potential for misuse.
The Challenge of Keeping Pace

While AI's role in creating malicious code is still nascent, Khoury expressed concern about the rapid development of AI models.
He asserted that the pace of AI evolution makes it challenging for cybersecurity experts to fully assess a model's malicious potential before it spreads into the broader digital environment.
"The concern is what's coming around the corner," said Khoury.

His comments resonate with a growing awareness that rapid technological advancement can also fuel cybercrime. As the benefits of AI continue to unfold, so too do the potential risks.
This increasing use of AI in cybercrime should not induce panic, but it does call for decisive action.

The cybersecurity industry is not standing idle in the face of these threats. Cybersecurity companies and researchers are actively studying the use of AI in hacking and misinformation campaigns, aiming to understand and counteract the strategies employed by rogue actors.
Additionally, international bodies are escalating their efforts to combat these cybercriminals. This year, the United Nations discussed the importance of international cooperation in combating cybercrime, highlighting the need for a global response to this global issue.
As AI technology advances, efforts to understand and mitigate its potential misuse need to keep pace. Vigilance and adaptability will undoubtedly play a pivotal role in shaping the future narrative of artificial intelligence and cybersecurity.
The post AI Becomes The Cybercriminal's New Arsenal - Warns Canadian Cybersecurity Chief appeared first on The Tech Report.