The government's AI push needs clear accountability | Computer Weekly
Briefly

Accountability gaps threaten AI adoption because responsibility for harms is unclear across procurement and deployment. Suppliers often hide training data and algorithms as proprietary secrets, while procurement teams lack the skills to assess AI-specific risks, leaving questions about bias and explainability unasked. Clear lines of responsibility are needed from procurement through to deployment, so that suppliers own shifts in model behaviour and buyers own data handling errors, such as introducing sensitive data into retrieval-augmented generation (RAG) systems. Government plans such as a National Digital Exchange aim to centralise technology buying, but oversight failures and a lack of procurement reform persist and need urgent attention.
However, there is an elephant in the room. Without clear accountability frameworks, this 50-point roadmap risks becoming a cautionary tale rather than a success story. When an AI system hallucinates, exhibits bias or suffers a security breach, who takes responsibility? Right now, the answer is too often 'it depends', and that uncertainty is the biggest threat to innovation.
Indeed, having worked across government, education and commercial sectors for more than two decades, I've seen how accountability gaps can derail even the most well-intentioned digital programmes. The government's AI push will be no different unless we get serious about establishing clear lines of responsibility from procurement through to deployment.
IT providers' opacity plays a significant role here. Many suppliers treat training data and algorithms as proprietary secrets, offering only high-level descriptions instead of meaningful transparency. Meanwhile, procurement staff often aren't trained to evaluate AI-specific risks, so critical questions about bias or explainability simply don't get asked.