- A small number of samples can poison LLMs of any size (Anthropic)
- AI models can acquire backdoors from surprisingly few malicious documents (Ars Technica)
- Researchers find just 250 malicious documents can leave LLMs vulnerable to backdoors (Engadget)
- Adversarial prompt and fine-tuning attacks threaten medical large language models (Nature)
- Adversarial and Fine-Tuning Attacks Threaten Medical AI (Bioengineer.org)