
"The precision of classical attacks drops very fast, explaining its low recall. In contrast, the precision of LLM-based attacks decays more gracefully as the attacker makes more guesses. The classical attack almost fails completely even at moderately low precision. In contrast, even the simplest LLM attack (Search) achieves non-trivial recall at low precision, and extending it with Reason and Calibrate steps doubles Recall @99% Precision."
"LLMs, while still prone to false positives and other weaknesses, are quickly outstripping more traditional, resource-intensive methods for identifying users online."
"If LLMs' success in deanonymizing people improves, governments could use the techniques to unmask online critics, corporations can assemble customer profiles for hyper-targeted advertising, and attackers could build profiles of targets at scale to launch highly personalized social engineering scams."
Researchers tested deanonymization techniques against pools of 10,000 candidate profiles, including 5,000 distractor identities and additional query distractors. LLM-based attacks substantially outperformed classical baselines modeled on the Netflix Prize attack: where classical attacks lose precision rapidly and fail almost completely even at moderately low precision, LLM attacks degrade more gracefully and achieve non-trivial recall even at low precision. Extending the simplest LLM attack (Search) with Reason and Calibrate steps doubled recall at 99% precision. The researchers proposed mitigations including API rate limits, scraping detection, bulk export restrictions, and guardrails from LLM providers. Potential threats include governments unmasking critics, corporations building hyper-targeted advertising profiles, and attackers launching personalized social engineering scams at scale.
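The headline metric, Recall @99% Precision, measures how many true identities an attacker can recover while keeping false matches below 1%. A minimal sketch of how that metric is computed, assuming each attack guess is a (confidence score, correct?) pair — the data and function name here are illustrative, not from the study:

```python
def recall_at_precision(guesses, min_precision=0.99):
    """Sweep a confidence threshold over scored guesses and return the
    best recall achievable while precision stays >= min_precision.

    guesses: list of (score, is_correct) pairs for each attack guess.
    """
    total_positives = sum(correct for _, correct in guesses)
    if total_positives == 0:
        return 0.0
    # Consider guesses from most to least confident, lowering the
    # threshold one guess at a time.
    ranked = sorted(guesses, key=lambda g: g[0], reverse=True)
    best_recall = 0.0
    true_pos = 0
    for k, (_, correct) in enumerate(ranked, start=1):
        true_pos += correct
        precision = true_pos / k
        if precision >= min_precision:
            best_recall = max(best_recall, true_pos / total_positives)
    return best_recall

# Toy example: four confident correct guesses, then a false positive.
guesses = [(0.99, True), (0.97, True), (0.95, True), (0.90, True),
           (0.80, False), (0.70, True)]
print(recall_at_precision(guesses))  # → 0.8
```

A "graceful" precision decay, as reported for the LLM attacks, means the attacker can lower the confidence threshold to claim more matches without precision collapsing; the classical attack's precision drops so fast that almost no guesses survive the 99% bar.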
#llm-deanonymization-attacks #privacy-and-anonymity #cybersecurity-threats #mitigation-strategies #online-identification
Read at Ars Technica