Design Patterns for Securing LLM Agents Against Prompt Injections (via Hacker News, 2025-06-13 13:27)