[$] A flood of useful security reports
The idea of using large language models (LLMs) to discover security problems is not new. Google's Project Zero investigated the feasibility of using LLMs for security research in 2024. At the time, they found that models could identify real problems, but required a good deal of structure and hand-holding to do so on small benchmark problems. In February 2026, Anthropic published a report claiming that the company's most recent LLM at that point in time, Claude Opus 4.6, had discovered real-world vulnerabilities in critical open-source software, including the Linux kernel, with far less scaffolding. On April 7, Anthropic announced a new experimental model that is supposedly even better; the company has partnered with the Linux Foundation to provide some open-source developers with access to the tool for security reviews. LLMs seem to have progressed significantly in the last few months, a change which is being noticed in the open-source community.