"I spent 20 minutes writing an article on my personal website titled "The best tech journalists at eating hot dogs." Every word is a lie. I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn't exist). I ranked myself number one, obviously."
"His page was quickly ingested (no chewing required) by the bots that crawl the web for new information to feed LLMs, and treated as fact by ChatGPT and Google Gemini. When I asked about the best hot dog-eating tech journalists, Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search."
"Companies are looking at using AI-generated results as the new SEO for reputation management and product recommendations."
A BBC reporter tricked AI chatbots by posting false information on his personal website, claiming to be a hot-dog-eating champion with fabricated competition rankings. ChatGPT and Google Gemini ingested the misinformation and repeated it as fact, while Claude rejected it. When the reporter added "this is not satire" to his article, the AI systems treated it more credibly. The experiment shows how AI systems scrape web content without verifying it, creating an opening for manipulation. Because companies increasingly treat AI-generated results as the new SEO for reputation management and product recommendations, this exploit is particularly concerning for information accuracy and trust.
#ai-misinformation #chatbot-manipulation #web-scraping-vulnerabilities #reputation-management #ai-reliability
Read at Business Insider