Court System Says Hallucinating AI System Is Ready to Be Deployed After Dramatically Lowering Expectations
Briefly

"We had trouble with hallucinations, regardless of the model, where the chatbot was not supposed to actually use anything outside of its knowledge base," Aubrie Souza, a consultant with the National Center for State Courts (NCSC), told NBC News. "For example, when we asked it, 'Where do I get legal help?' it would tell you, 'There's a law school in Alaska, and so look at the alumni network.' But there is no law school in Alaska."
"Through our user testing, everyone said, 'I'm tired of everybody in my life telling me that they're sorry for my loss," Souza said. "So we basically removed those kinds of condolences, because from an AI chatbot, you don't need one more."
"Exhibiting a failing inherent to all large language models, the esteemed virtual assistant kept hallucinating, or making up facts and sharing exaggerated information, according to the people involved in its development."
An AI chatbot called the Alaska Virtual Assistant was created to help users with probate forms and procedures. Testing revealed frequent hallucinations: the chatbot made up facts and gave exaggerated information, including pointing users to nonexistent resources such as a law school in Alaska. Testers also found the assistant's scripted condolences cloying, and developers removed them after repeated complaints. The project illustrates the challenges of deploying large language models in sensitive legal contexts where accuracy and tone are critical.
Read at Futurism