According to Malwarebytes' recent cybercrime report, the proliferation of generative AI tools, particularly following the introduction of ChatGPT, has significantly enabled cybercriminals. These malicious actors are weaponizing AI to develop malware, produce deceptive phishing emails, and conduct realistic social engineering scams. A striking example involves a finance employee unknowingly transferring $25 million during an AI-generated video call with fake company execs. Additionally, techniques used by criminals to evade AI safeguards threaten to further exacerbate cybercrime, making it easier for them to execute complex attacks as agentic AI becomes more prevalent.
A recent cybercrime report indicates that emerging AI technologies will lead to more sophisticated and harder-to-detect cyberattacks as criminals adopt these tools swiftly.
In 2024, a finance worker was coerced into transferring $25 million via a video call featuring entirely AI-generated deepfakes of their company's executives.
Criminals are cleverly bypassing AI precautionary measures through techniques such as prompt chaining and injection, allowing them to create harmful outputs.
The report emphasizes that agentic AI will not just lower the barrier for cybercriminals, but also automate and scale up labor-intensive cybercrime tactics like ransomware.
Collection
[
|
...
]