
"The aim is to curb the rapid spread of manipulated images and videos used for abuse, harassment, and criminal exploitation. The initiative will be led by the Home Office and developed in collaboration with major technology companies, academic researchers, and technical experts. Ministers say the framework will explore how emerging technologies can better recognise and assess deepfakes, while also setting clear expectations for how companies detect and respond to them."
"Government figures suggest the scale of the problem has expanded dramatically. Officials estimate that up to eight million deepfake images were shared in 2025 alone, compared with around 500,000 in 2023. Much of the growth is believed to be driven by easily available AI tools that allow users to generate explicit or degrading images with minimal technical knowledge. Victims' advocates say the impact can be devastating, particularly where sexualised images are created and distributed without consent."
The Home Office will lead development of a 'world-first' deepfake detection framework in collaboration with major technology companies, academic researchers, and technical experts. The framework will explore how emerging technologies can recognise and assess deepfakes and will set expectations for how companies detect and respond to manipulated media. The move responds to the rising misuse of AI tools to generate sexualised images without consent, including images of children. Government figures estimate up to eight million deepfake images were shared in 2025, up from around 500,000 in 2023. Victims' advocates warn that non-consensual sexualised media causes long-lasting psychological, professional, and reputational harm.
Read at TechRepublic