Engineers Deploy "Poison Fountain" That Scrambles Brains of AI Systems
"which instead advocates for poisoning the resource that the AI industry needs most, in a bid to cut off its power at the source. Called Poison Fountain, the project aims to trick tech companies' web crawlers into vacuuming up "poisoned" training data that sabotages AI models. If pulled off at a large enough scale, it could in theory be a serious thorn in the AI industry's side - turning their billion dollar machines into malfunctioning messes."
""We agree with Geoffrey Hinton: machine intelligence is a threat to the human species," reads a statement on the project's website, referring to the British computer scientist who is considered a godfather of the field, and who has become one of the industry's most prominent critics. "In response to this threat we want to inflict damage on machine intelligence systems.""
Poison Fountain aims to contaminate the web-scraped data that modern AI models rely on by tricking tech companies' web crawlers into collecting deliberately poisoned training material. If carried out at a large enough scale, such poisoning could corrupt models' training inputs and degrade the resulting systems. The project launched recently, and its members reportedly work for major US AI companies; they frame their goal as a response to what they see as the existential risk posed by machine intelligence. The modern AI boom was built on massive scraping of freely available information from the internet, a practice that has raised ethical and legal concerns, including copyright lawsuits. Previous defensive efforts have tried embedding disruptive data into images and other content to foil model ingestion.
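Poison Fountain's actual code is not public, but the general idea described above can be sketched: a site detects known AI training crawlers by their user-agent strings and serves them corrupted text while serving the real page to everyone else. Everything below is an illustrative assumption, not the project's implementation; the crawler signatures are real, publicly documented user agents, but the poisoning step (word shuffling) is just a minimal stand-in for whatever corruption a real effort might use.

```python
import random

# Substrings found in the user agents of well-known AI training crawlers.
AI_CRAWLER_SIGNATURES = ("GPTBot", "CCBot", "ClaudeBot", "Google-Extended")


def is_ai_crawler(user_agent: str) -> bool:
    """Return True if the request appears to come from an AI training crawler."""
    return any(sig in user_agent for sig in AI_CRAWLER_SIGNATURES)


def poison_text(text: str, seed: int = 0) -> str:
    """Corrupt text by shuffling its words (a stand-in for real poisoning),
    destroying the statistical regularities a model would learn from it."""
    words = text.split()
    random.Random(seed).shuffle(words)
    return " ".join(words)


def serve(page: str, user_agent: str) -> str:
    """Serve a poisoned variant to crawlers and the unmodified page to humans."""
    return poison_text(page) if is_ai_crawler(user_agent) else page
```

The key property of any such scheme is that human visitors see the site unchanged, so the poison only enters the pipeline of whoever scrapes it for training data.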
Read at Futurism