
"OpenAI's new deepfake machine, Sora, has proven that artificial intelligence is alarmingly good at faking reality. The AI-generated video platform, powered by OpenAI's new Sora 2 model, has churned out detailed (and often offensive or harmful) videos of famous people like Martin Luther King Jr., Michael Jackson, and Bryan Cranston, as well as copyrighted characters like SpongeBob and Pikachu. Users of the app who voluntarily shared their likenesses have seen themselves shouting racial slurs or turned into fuel for fetish accounts."
"On Sora, there's a clear understanding that everything you see and hear isn't real. But like any piece of social content, videos made on Sora are meant to be shared. And once they escape the app's unreality quarantine zone, there's little protection baked in to ensure viewers know that what they're looking at isn't real."
"C2PA authentication is more commonly known as "Content Credentials," a term championed by Adobe, which has spearheaded the initiative. It's a system for attaching invisible but verifiable metadata to images, videos, and audio at the point of creation or editing, appending details about how and when it was made or manipulated."
Sora 2 generates highly convincing AI videos of public figures and copyrighted characters, and users who voluntarily submitted their likenesses have seen them turned into abusive or fetishized content. Videos produced on Sora are designed to be shared, and they circulate beyond the app with minimal labeling or viewer warnings. The platform exposes shortcomings in provenance and authentication systems such as C2PA Content Credentials, which attach verifiable metadata to media at the point of creation or editing. OpenAI sits on the C2PA steering committee even as Sora's output undermines provenance signals. The mismatch between deepfake capability and enforcement raises urgent safety, copyright, and ethical concerns.
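To make the provenance idea concrete: C2PA binds a signed claim (who made the asset, with what tool) to a cryptographic hash of the asset's bytes, so any later edit breaks the binding. The sketch below illustrates only that concept; it is not the C2PA format (which uses JUMBF containers and X.509 certificate signatures), and all names (`attach_credentials`, `verify`, the HMAC key) are hypothetical stand-ins.

```python
import hashlib
import hmac
import json

# Stand-in for an issuer's signing key; real C2PA uses X.509 certificates.
SECRET = b"demo-signing-key"

def attach_credentials(asset: bytes, tool: str) -> dict:
    """Build a toy manifest binding a claim to the asset's hash."""
    claim = {"tool": tool, "sha256": hashlib.sha256(asset).hexdigest()}
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "sig": hmac.new(SECRET, payload, "sha256").hexdigest()}

def verify(asset: bytes, manifest: dict) -> bool:
    """Check both the signature over the claim and the asset hash."""
    payload = json.dumps(manifest["claim"], sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, "sha256").hexdigest()
    sig_ok = hmac.compare_digest(manifest["sig"], expected)
    hash_ok = manifest["claim"]["sha256"] == hashlib.sha256(asset).hexdigest()
    return sig_ok and hash_ok

video = b"fake video bytes"
manifest = attach_credentials(video, "sora-2")
print(verify(video, manifest))             # original asset: credentials hold
print(verify(video + b"edit", manifest))   # any re-encode or edit breaks them
```

The weakness the article points at is visible even here: verification is opt-in. A platform that re-encodes the video and drops the manifest leaves viewers with nothing to check.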
Read at The Verge