A New Trick Uses AI to Jailbreak AI Models—Including GPT-4
by Will Knight on 2023-12-05 11:00 (#6GX9G)

Adversarial algorithms can systematically probe large language models such as OpenAI's GPT-4 for weaknesses that can make them misbehave.