llm poisoning

  1. Small Sample Poisoning: 250 Documents Can Backdoor LLMs in Production

    Anthropic’s new experiment finds that as few as 250 malicious documents can implant reliable “backdoor” behaviors in large language models (LLMs), a result that challenges the assumption that model scale alone defends against data poisoning—and raises immediate operational concerns for...