Videos made with OpenAI's Sora app are flooding TikTok, Instagram Reels and other platforms, making people increasingly familiar with, and fed up with, the nearly unavoidable synthetic footage pumped out by what amounts to an artificial intelligence slop machine. Digital safety experts say something else happening may be less obvious but more consequential for the future of the internet: OpenAI has essentially rebranded deepfakes as a lighthearted plaything, and recommendation engines are loving it.
Senate Commerce Committee Chair Ted Cruz (R-TX) was one of a handful of Republicans to speak out against the "mafioso"-like comments Federal Communications Commission Chair Brendan Carr issued ahead of Jimmy Kimmel's suspension last month. Now, he's hoping the liberal outrage over government censorship will fuel bipartisan support for a new bill letting Americans sue over speech violations. Cruz plans to introduce the bill in the coming weeks, he told The Wall Street Journal in an interview.
There are a few iconic things that will always signal we're watching a James Bond movie: a martini, shaken, not stirred; our hero behind the wheel of a gorgeous Aston Martin; and, of course, an opening sequence that features 007 pointing a gun directly at the audience and shooting, with a classic theme song blaring as a blood-red animation drips down the screen.
It's becoming increasingly clear that OpenAI put staggeringly little thought into the rollout of Sora 2, its latest text-to-video app, a "move fast and break things" approach that has resulted in plenty of drama. Last week, the Sam Altman-led company released the TikTok-style app that churns out endless feeds of low-rent and mind-numbing AI slop. It's an "unholy abomination" that intentionally encourages users to generate deepfakes of others.
Meta has been getting content moderation wrong for years, like most platforms that host user-generated content. Sometimes it's a result of deliberate design choices (privacy rollbacks, opaque policies, features that prioritize growth over safety) made even when the company knows those choices could negatively impact users. Other times, it's simply the inevitable outcome of trying to govern billions of posts with a mix of algorithms and overstretched human reviewers.
Without directly naming SafetyCore, Google explained that the optional setting can blur photos that may contain nudity and display a warning before you view or share them. Sensitive Content Warnings appears to use SafetyCore to analyze images locally on your device. Google has emphasized that SafetyCore runs entirely on your phone: images don't leave your device, and Google doesn't know whether nudity was flagged.
Google appears to have blocked AI search results for the query "does trump show signs of dementia" as well as other questions about his mental acuity, even though it will show AI results for similar searches about other presidents. For searches about President Trump, AI Overviews displays a message that says, "An AI Overview is not available for this search."
"I found out that I didn't get booked for a Harry Potter convention recently, as I'm now doing OnlyFans. They explained it was because it's a 'family show and OnlyFans is affiliated with porn,'" she wrote. "This was baffling to me as some actors who do conventions (most actors, actually) have done TV and films in which they've done sex scenes and nudity. I'm just playing with my hair!"
The administration's announcement of the campaign explains it will target social media platforms, short video services, the livestreaming platforms used by Chinese e-commerce sites to host infomercials, and even delve into comments left across the internet. Some of the material Beijing wants to root out, such as content that incites violence, would likely fall foul of content moderators in many nations.
For decades, Mark Lemley's life as an intellectual property lawyer was orderly enough. He's a professor at Stanford University and has consulted for Amazon, Google, and Meta. "I always enjoyed that the area I practice in has largely been apolitical," Lemley tells me. What's more, his democratic values neatly aligned with those of the companies that hired him. But in January, Lemley made a radical move. "I have struggled with how to respond to Mark Zuckerberg and Facebook's descent into toxic masculinity and Neo-Nazi madness," he posted on LinkedIn. "I have fired Meta as a client."