Anthropic has responded to allegations that its Claude chatbot fabricated a source cited in a legal filing in a music copyright case. The disputed citation, submitted in a declaration by data scientist Olivia Chen, prompted claims that the source was entirely fictitious, implying the AI 'hallucinated' it. In its response, Anthropic's attorney acknowledged inaccuracies in the citation details but maintained that the underlying source was genuine. The company expressed regret over the confusion the errors caused, an episode that highlights broader concerns about AI reliability in legal work, underscored by other recent courtroom incidents involving erroneous AI-generated citations.
Anthropic says its Claude chatbot made an 'honest citation mistake' in a legal filing, after the error sparked allegations that it fabricated sources in a copyright dispute.
The erroneous citation in Anthropic's filing led to claims of 'hallucinated' sources in its defense against copyright infringement claims brought by music publishers.
Anthropic's attorney clarified that despite errors in citation details, the source itself was genuine, and the inaccuracies were due to AI errors rather than fabrication.
This incident illustrates the growing challenge of AI-generated citations in the legal field, following prior cases that have drawn increased scrutiny of AI's reliability in legal documentation.