
"The latest analysis from the Google Threat Intelligence Group shows that malicious actors are no longer just exploring artificial intelligence but are actively integrating it into their operations. In a post on the Google Cloud website, the team describes how attackers are experimenting with model distillation, AI-assisted phishing, and automated malware development. The overview builds on the Google Threat Intelligence Group's earlier report titled Adversarial Misuse of Generative AI, which was the first systematic description of how state actors and cybercriminals use generative AI. Where that report mainly provided an inventory of misuse scenarios and patterns, this update focuses on further refinement, experiments with model distillation, and the growing integration of AI into attack chains."
"An important part of the report revolves around distillation attacks. In these attacks, parties attempt to reproduce or approximate an existing AI model by repeatedly querying it and systematically analyzing the responses. This allows the functionality of a commercial model to be replicated without direct access to the underlying technology. According to the researchers, this is not only a security risk, but also a threat to intellectual property. In addition, the team notes that AI is increasingly being used to generate and modify code. This does not mean that fully autonomous malware campaigns are taking place, but it does mean that developers can produce variants more quickly and attempt to circumvent detection mechanisms."
Malicious actors have moved from experimenting with AI to integrating generative models across attack chains, including target analysis, open-source data collection, multilingual spearphishing, and automated code generation. Model distillation enables cloning commercial models by repeatedly querying them and analyzing outputs, creating both security risks and intellectual property threats. AI-assisted tools speed the creation and modification of code, producing variants more quickly and aiding attempts to circumvent detection. The growing use of generative AI by state actors and cybercriminals increases operational efficiency and raises urgent concerns about model protection and defensive capabilities.
Read at Techzine Global