AI-Powered Adversaries Require AI-Driven Defenses
"OPINION - The use of artificial intelligence by adversaries has been the subject of exhaustive speculation. No one doubts that the technology will be abused by criminals and state actors, but it can be difficult to separate the hype from reality. Leveraging our unique visibility, Google Threat Intelligence Group (GTIG) has been able to track the use of AI by threat actors, but the pace of change has made it challenging to forecast even the near future. However, we are now seeing signs of new evolutions in adversary use, and hints at what may lie ahead. Most importantly, there are opportunities for defensive AI to help us manage these future threats."
"Evolution Thus Far: Over the course of the last eight years, GTIG has observed AI-enabled activity evolve from a novel party trick to a staple tool in threat actors' toolbelts. In the early days, we detected malicious actors embracing the nascent technology to enhance their social engineering capabilities and uplift information operations. The ability to fake text, audio, and video was quickly abused by threat actors. For instance, several adversaries have used GAN-generated images of people who don't exist to create fake personas online for social engineering or information operations campaigns (this avoids the use of real photos, which could often foil these operations when the photo was researched). A poor deepfake of Volodymyr Zelensky was circulated in an effort to convince Ukrainians that he had capitulated in the early hours of the full-scale Russian invasion in 2022."
"By investigating adversary use of Gemini, we have some additional insight into how AI is being used. We have observed threat actors using Gemini to help them with a variety of tasks, like conducting research and writing code. Iranian actors have used it for help with error messages and for creating Python code for website scraping. They have also used it to research vulnerabilities."
Adversaries have steadily integrated AI into malicious operations over the past eight years, turning it from a novelty into a standard tool. AI enables sophisticated social engineering, information operations, and the creation of convincing fake personas using GAN-generated images and deepfakes. Threat actors have used AI to fabricate text, audio, and video to influence and deceive target audiences. Actors also employ models like Gemini to research vulnerabilities, write code, assist with error messages, and automate tasks such as website scraping. Defensive AI capabilities present opportunities to detect, mitigate, and respond to these evolving adversary techniques.
Read at The Cipher Brief