Meta declines to abide by voluntary EU AI safety guidelines
Briefly

The European Commission issued voluntary guidelines for general-purpose AI providers ahead of the EU AI Act's enforcement, aiming to ensure safety and transparency in AI models. The Act classifies AI systems into four risk categories to prevent the amplification of harmful content. Meta declined to endorse the guidelines, arguing they introduce legal uncertainty. The General-Purpose AI Code of Practice asks providers of models trained with significant computing resources to meet transparency commitments. Meta's refusal leaves its highest-capacity model, Llama 4 Behemoth, outside the code's voluntary commitments, raising concerns amid existing criticism of the company's practices in the EU.
"With today's guidelines, the Commission supports the smooth and effective application of the AI Act," Henna Virkkunen, EVP for tech sovereignty, security and democracy, said in a statement on Friday.
"By providing legal certainty on the scope of the AI Act obligations for general-purpose AI providers, we are helping AI actors, from start-ups to major developers, to innovate with confidence, while ensuring their models are safe, transparent, and aligned with European values."
The EU AI Act regulates the use of AI models based on four risk categories: unacceptable risk, high risk, limited risk, and minimal or no risk.
Meta, long criticized for its data-hungry tactics in the EU, doesn't want to play along. The company says it will ignore the GPAI Code of Practice, a stance that lets its Llama 4 Behemoth (trained with a reported 5e25 FLOPs) roam unhindered by the code's commitments.
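For scale, the Act presumes a general-purpose model poses systemic risk once its cumulative training compute exceeds 10^25 FLOPs, a bar Behemoth's reported figure clears five times over. A minimal Python sketch of that comparison (the threshold constant comes from the Act; the function name is ours, for illustration only):

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # EU AI Act presumption threshold

def presumed_systemic_risk(training_flops: float) -> bool:
    # A model trained with more cumulative compute than the threshold
    # is presumed to pose systemic risk under the Act.
    return training_flops > SYSTEMIC_RISK_THRESHOLD_FLOPS

print(presumed_systemic_risk(5e25))  # Behemoth's reported compute -> True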
Read at The Register