The first is Neural Execs, a known prompt-injection attack that uses 'gibberish' inputs to trick the AI into executing arbitrary, attacker-defined tasks. These inputs act as universal triggers that do not need to be regenerated for each new payload.
"Shortly after access was terminated, we began receiving extortion demands. The criminals threatened to distribute materials from both the February 2025 incident and the recent incident to media outlets and on social media if we did not comply. We will not pay these criminals," Percoco stated.
In ShowDoc versions before 2.8.7, an unrestricted and unauthenticated file upload vulnerability allows an attacker to upload a web shell and execute arbitrary code on the server.
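The flaw above belongs to a well-known class: upload endpoints that never restrict what file types are accepted. A minimal sketch of the standard mitigation, an extension allowlist plus a path-traversal check, is below (illustrative Python only; ShowDoc itself is a PHP application, and the policy set and function name here are hypothetical):

```python
import os

# Assumed example policy: only these extensions may be uploaded.
ALLOWED_EXTENSIONS = {".png", ".jpg", ".jpeg", ".gif", ".pdf"}

def is_upload_allowed(filename: str) -> bool:
    """Return True only for safe, allowlisted filenames.

    Uses an allowlist rather than a denylist: denylisting ".php" is the
    classic mistake, since executable variants such as ".php5" or
    ".phtml" slip through.
    """
    base = os.path.basename(filename)
    # Reject anything containing path components ("../shell.png") and
    # hidden/dotfiles.
    if base != filename or base.startswith("."):
        return False
    _, ext = os.path.splitext(base)
    return ext.lower() in ALLOWED_EXTENSIONS
```

Checking the extension is necessary but not sufficient; uploads should also be stored outside the web root (or on a host that never executes them) so that even a mis-classified file cannot become a web shell.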
This extends to the software development community, where AI coding assistants are now nearly ubiquitous as teams face pressure to produce more output in less time. The efficiency gains are real, but these teams too often fail to build adequate safety controls and practices into their AI deployments. The resulting risks leave their organizations exposed, and when a security gap does occur, developers struggle to trace where, and how, it was introduced.
We also patched two potential denial-of-service vulnerabilities triggered by large, malformed inputs. One (CVE-2025-14550) exploits inefficient string concatenation in header parsing under ASGI; concatenating strings in a loop is a well-known performance pitfall, and we have fixed similar issues in public where the impact is low. The other (CVE-2026-1285) exploits deeply nested entities. December's vulnerability in the XML serializer (CVE-2025-64460) touched on both of these themes.
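The concatenation pitfall behind the first fix can be sketched as follows (illustrative code with hypothetical function names, not the patched project's actual parser):

```python
def join_headers_slow(lines):
    # Repeated += on a str may reallocate and copy the entire
    # accumulated buffer on every iteration, so total work grows
    # roughly quadratically with input size. An attacker sending a
    # huge header block makes the server burn CPU.
    buf = ""
    for line in lines:
        buf += line
    return buf

def join_headers_fast(lines):
    # The fix: accumulate pieces and join once, which is linear
    # in the total length of the input.
    return "".join(lines)
```

CPython sometimes optimizes in-place `str` concatenation, but that optimization is implementation-specific and easily defeated (e.g. when another reference to the string exists), which is why an explicit `"".join(...)` is the idiomatic fix.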
"These incidents involve the intentional use of deceptive or illegal practices to fraudulently obtain money, assets, or information from individuals or institutions, and include actions carried out over cyber channels."
The most dangerous assumption in quality engineering right now is that you can validate an autonomous testing agent the same way you validated a deterministic application. When your systems can reason, adapt, and make decisions on their own, that linear validation model collapses.