Apple study exposes deep cracks in LLMs’ “reasoning” capabilities
By Kyle Orland, Ars Technica, 2024-10-14 21:21
Irrelevant red herrings lead to “catastrophic” failure of logical inference.