YouTube expands AI deepfake detection for politicians, government officials, and journalists | TechCrunch
Briefly

"This expansion is really about the integrity of the public conversation. We know that the risks of AI impersonation are particularly high for those in the civic space. But while we are providing this new shield, we're also being careful about how we use it."
"Similar to YouTube's existing Content ID system, which detects copyright-protected material in users' uploaded videos, the likeness detection feature looks for simulated faces made with AI tools. These tools are sometimes used to try to spread misinformation and manipulate people's perception of reality, as they leverage the deepfaked personas of notable figures."
"Not all of the detected matches would be removed when requested. Instead, YouTube would evaluate each request under its existing privacy policy guidelines to determine whether the content is parody or political critique, which are protected forms of free expression."
YouTube is expanding its likeness detection technology to identify AI-generated deepfakes, launching a pilot program for government officials, political candidates, and journalists. The technology, previously available to 4 million YouTube creators, detects simulated faces created with AI tools, which are sometimes used to spread misinformation by impersonating public figures. Like YouTube's Content ID system for copyright detection, the likeness feature identifies unauthorized AI-generated content. YouTube balances free expression against AI risks by evaluating removal requests under its existing privacy guidelines, protecting parody and political critique as legitimate forms of expression. The company emphasizes that this expansion addresses integrity concerns in public discourse, particularly for civic figures vulnerable to AI impersonation.
Read at TechCrunch