AI Slopsquatting: How LLM Hallucinations Poison Your Code
Briefly

AI slopsquatting is a supply-chain attack that targets developers by exploiting the hallucinations of large language models (LLMs). Hackers register malicious packages under fictitious names that AI tools like ChatGPT confidently recommend even though the packages don't exist. A 2025 study found that 20% of AI-generated code included such hallucinated packages, and the names often repeated across runs, making them predictable targets for attackers to claim. As AI becomes integral to coding, developers must remain vigilant: one wrong install can devastate a project and compromise its security.
AI slopsquatting happens when hackers exploit AI's wild imagination. LLMs like ChatGPT or Grok sometimes invent fake package names that sound real but don't exist, and attackers register malicious packages under exactly those names.
A 2025 study found that 20% of AI-generated code includes hallucinated packages, with 58% of those names repeating across multiple runs.
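One practical defense is to verify that a recommended package actually exists before installing it. The sketch below is not from the article; it is a minimal Python example that checks a name against the public PyPI JSON API, assuming the third-party requests library is available. The package name passed on the command line is whatever an AI tool suggested.

    # Minimal sketch: check whether a package name is real on PyPI
    # before running "pip install". Assumes the "requests" library.
    import sys
    import requests

    def package_exists(name: str) -> bool:
        """Return True if `name` is a published package on PyPI."""
        # PyPI's JSON API returns 404 for packages that don't exist.
        resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
        return resp.status_code == 200

    if __name__ == "__main__":
        if len(sys.argv) != 2:
            sys.exit("usage: check_pkg.py <package-name>")
        name = sys.argv[1]
        if package_exists(name):
            print(f"{name} exists on PyPI -- still review its author, age, and downloads.")
        else:
            print(f"{name} is not on PyPI; it may be a hallucinated name.")

Note that existence alone is not proof of safety: slopsquatting works precisely because attackers register the hallucinated names, so a package that passes this check still deserves scrutiny of its maintainer, release history, and download counts.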
Read at HackerNoon