Articles

Who uses LLM prompt injection attacks IRL? Mostly unscrupulous job seekers, jokesters and trolls
Because apps talking like pirates and creating ASCII art never gets old

Despite worries about criminals using prompt injection to trick large language models (LLMs) into leaking sensitive data or performing other destructive actions, most of these types of AI shenanigans come from job seekers trying to get their resumes past automated HR screeners, and from people protesting generative AI for various reasons, according to Russian security biz Kaspersky....
From Copilot to Copirate: How data thieves could hijack Microsoft's chatbot
Prompt injection, ASCII smuggling, and other swashbuckling attacks on the horizon

Microsoft has fixed flaws in Copilot that allowed attackers to steal users' emails and other personal data by chaining together a series of LLM-specific attacks, beginning with prompt injection....
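The "ASCII smuggling" technique mentioned above hides instructions in invisible Unicode characters that most UIs don't render but an LLM can still read. A minimal, generic sketch of the idea (this is an illustration of the general technique using the Unicode Tags block, not the actual Copilot exploit):

```python
# Illustrative sketch of "ASCII smuggling": hiding ASCII text inside
# invisible Unicode "tag" characters (U+E0000 block), which typically
# render as nothing in a UI but survive in the raw text an LLM consumes.
# Names and strings here are hypothetical, for demonstration only.

TAG_BASE = 0xE0000  # offset of the Unicode Tags block

def smuggle(text: str) -> str:
    """Encode printable ASCII as invisible Unicode tag characters."""
    return "".join(chr(TAG_BASE + ord(c)) for c in text)

def reveal(hidden: str) -> str:
    """Decode tag characters back to ASCII, ignoring visible text."""
    return "".join(
        chr(ord(c) - TAG_BASE)
        for c in hidden
        if 0xE0000 < ord(c) <= 0xE007F
    )

visible = "Totally harmless message."
payload = visible + smuggle("hypothetical hidden instruction")

# The payload looks identical to the visible text in most renderers,
# yet carries extra characters a model's tokenizer can still see.
print(len(visible), len(payload))
print(reveal(payload))
```

Defenses typically strip or flag code points outside expected ranges before text reaches the model, which is why filtering untrusted input is a common mitigation for this class of attack.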