
"Anthropic is scrambling to contain a leak of its Claude Code AI model's source code by issuing a copyright takedown request for more than 8,000 copies of it - a gallingly ironic stance for the company to be taking, considering how it trained its models in the first place."
"The leak isn't considered to be an outright disaster; no customer data was exposed, Anthropic says, nor were the internal mathematical 'weights' that determine how the AI 'learns' and which distinguish it from other models."
"It's certainly within Anthropic's right to issue the takedown request, but the hypocrisy of Anthropic running to the law to protect its intellectual property is plain to see, especially for a company that's relentlessly positioned itself as the ethical adult in the room."
"To do that, it first relied on digital books. But it didn't pay for them or choose only to use ones in the public domain. Instead, it downloaded millions of pirated volumes from the online 'shadow library' LibGen."
Anthropic is facing a source code leak of its Claude Code AI model and has issued a copyright takedown request covering more than 8,000 copies. The leak revealed techniques used to make the AI operate effectively but did not expose customer data or the model's internal weights. The takedown request was initially broad but was later narrowed to 96 copies. Anthropic is within its rights to protect its intellectual property, but the situation underscores the irony of the company's ethical positioning, given its past reliance on pirated digital books for training data.
Read at Futurism