
"We're a few weeks into 2026, and the Trump administration has yet to propose AI legislation at the federal level. At the same time, first-of-their-kind AI safety laws in California and New York -- both states well-positioned to influence tech companies -- have gone into effect. But a December executive order and its ensuing task force have renewed attacks on state AI laws. What do the new laws mean in practice, and can they survive scrutiny at the federal level?"
"California SB-53, the new AI safety law that went into effect on January 1, requires model developers to publicize how they'll mitigate the biggest risks posed by AI, and to report on safety incidents involving their models (or face fines of up to $1 million if they don't). Though not as thorough as previously attempted legislation in the state, the new law is practically the only one in a highly unregulated AI landscape."
Two first-of-their-kind state AI safety laws in California and New York have taken effect while federal AI legislation remains absent. California SB-53 requires model developers to publicize how they will mitigate major risks and to report safety incidents, with fines of up to $1 million for failures to report. The New York RAISE Act sets similar reporting requirements across model sizes, requires notification within 72 hours, and raises the fine ceiling to $3 million after a first violation. Meanwhile, a December executive order and its ensuing federal task force have renewed attacks on state AI laws. Earlier proposed state rules would have gone further, requiring costly safety testing and shutdown mechanisms.
Read at ZDNET