
"The law largely does what it says on the tin, placing a number of transparency requirements on the frontier developers. Large AI firms, defined by the bill as those with annual gross revenue in excess of $500 million, including affiliates, must publish and update frontier AI frameworks, include added disclosures in their transparency reports, report critical safety incidents to the state Office of Emergency Services, and not retaliate against whistleblowers, among other requirements."
"Safety incidents, per the law, include unauthorized access or compromise of a model "that results in death or bodily injury," harm resulting from a "catastrophic risk" (defined by the bill as use of an AI model that causes death or harm to more than 50 people or over $1 billion in property damage), "loss of control" of a frontier model, or a model being able to deceive its developers to subvert its own restrictions."
California enacted SB 53, requiring large AI firms to follow new transparency and reporting rules. Firms with annual gross revenue over $500 million must publish and update frontier AI frameworks, add disclosures to transparency reports, report critical safety incidents to the state Office of Emergency Services, and refrain from retaliating against whistleblowers. Safety incidents include unauthorized access causing death or bodily injury, catastrophic risks causing death or harm to over 50 people or more than $1 billion in property damage, loss of control, or models deceiving developers to subvert restrictions. The governor previously vetoed a tougher bill and convened a working group; critics say the law prioritizes paperwork over protection.
Read at The Register