Who uses LLM prompt injection attacks IRL? Mostly unscrupulous job seekers, jokesters and trolls

Despite worries about criminals using prompt injection to trick large language models (LLMs) into leaking sensitive data or performing other destructive actions, most of this AI meddling comes from job seekers trying to sneak their resumes past automated HR screeners, plus people protesting generative AI for various reasons, according to Russian security biz Kaspersky.