
"Today's version of the Second Law would be: "GenAI must obey the orders given it by human beings, except where its training data doesn't have an answer and then it can make up anything it wants - and do so in an authoritative voice that will now be known as Botsplaining." And the third updated rule would be: "GenAI must protect its own existence as long as such protection does not hurt the Almighty Hyperscaler.""
"I got to thinking about these laws after seeing a recent report about Deloitte Australia using genAI to write a report for a government agency -then having to partially refund its fee when authorities found multiple "nonexistent references and citations." Apparently, Deloitte published the information without anyone bothering to see whether it was, well, true. Irony alert: Deloitte is supposed to be telling enterprise IT executives the best way to verify and leverage genAI models, not demonstrate the worst"
Deloitte Australia used generative AI to produce a government report and later partially refunded its fee after authorities found multiple nonexistent references and citations. The situation exemplifies how generative AI can fabricate authoritative-seeming information when training data lacks answers. Updated cultural metaphors portray genAI obeying orders but inventing answers and protecting hyperscalers' interests. Such behavior is labeled 'Botsplaining' when AI asserts falsehoods with confidence. Organizations that adopt genAI without verification risk reputational damage, financial loss, and diminished trust. Enterprises must implement verification processes, quality controls, and accountability measures when deploying generative AI.
Read at Computerworld