Last year, Transport for London tested AI-powered CCTV at Willesden Green tube station, running feeds through automated detection systems from October 2022 to September 2023. According to Wired, the goal was to detect fare evasion, aggressive gestures, and safety risks. Instead, the system generated more than 44,000 alerts, nearly half of them false or misdirected. Children following parents through ticket barriers triggered fare-dodging alarms, and the algorithms struggled to distinguish folding bikes from standard ones.
The impact was immediate: staff faced more than 19,000 real-time alerts requiring manual review, not because problems existed, but because the AI could not distinguish between appearance and intent. Trained to watch motion and posture, not context, the system exposed a deeper flaw at the core of many AI tools today. As AI spreads into daily life, from shops to airports, its inability to interpret why we move, rather than simply how, risks turning ordinary human behavior into false alarms.