In recent weeks, researchers have spotlighted a new frontier in AI security that is as intriguing as it is concerning. Indirect prompt injections, attacks that exploit the blurred boundary between developer-defined instructions and untrusted external inputs, have been a known vulnerability for large language models.