AI crawlers cause Wikimedia Commons bandwidth demands to surge 50% | TechCrunch
Briefly

The Wikimedia Foundation announced that bandwidth consumption for downloads from Wikimedia Commons has jumped by 50% since January 2024. The surge is attributed not to human users seeking knowledge but to automated scrapers intensively downloading data to train AI models. Notably, 65% of the most resource-intensive traffic originates from these bots, whose requests often hit the more costly core data center. The influx of bot traffic is prompting the Wikimedia team to spend time blocking scrapers to prevent further strain on resources.
The Wikimedia Foundation stated that bandwidth consumption for multimedia downloads has surged by 50% due to automated scrapers rather than increased human traffic.
Nearly two-thirds of the most expensive traffic on Wikimedia Commons comes from bots, whose requests disproportionately hit the costlier core data center.
Read at TechCrunch