
"An AI tool user, quite rationally, begins by giving the AI simple tasks-perhaps looking up facts or summarizing long documents. If the AI does a good job of these things, the user naturally thinks "what else can I do with this?" They may give the AI a slightly harder task. If the AI continues to get things right, trust grows. If the AI fails or provides a low-quality answer, the user will think twice about trying to automate the task next time."
"Today's AI chatbots, which are powered by large generative AI models, are far better than the ones we had in 2023 and 2024. But AI tools are just beginning to build trust with most users, and most C-suite executives who hope the tools will streamline business functions. My own trust of chatbots grew in 2025. But it has also diminished."
Competence trust is the belief that an AI is accurate and does not hallucinate facts, and it determines whether users will expand AI use beyond simple tasks. Users typically start with straightforward requests such as fact retrieval or document summaries; consistent accuracy encourages them to attempt harder tasks, while failures reduce their willingness to automate. Modern large generative models have improved since 2023–24 but must still rebuild competence trust to earn broad business and consumer confidence. Individual experiences can erode that trust quickly, as when a model admits an error and then repeats the same incorrect claim. Rebuilding reliable accuracy is essential for meaningful AI adoption.
Read at Fast Company