Research reveals that only one-third of organizations engage in proper testing for AI application development, a practice critical for ensuring quality and ROI. Most development effort is going into chatbots and support tools, yet crucial QA processes are being neglected despite AI's rapid growth. The study emphasizes human involvement in testing to address issues like bias and inaccuracy, since many necessary QA activities rely on human oversight. The lack of adversarial testing, such as red teaming, adds to the potential risks in AI applications, underscoring the urgent need for improved testing standards.
"The results of our annual AI survey underscore the need to raise the bar on how we test and roll out new generative AI models and applications," said Chris Sheehan, EVP of high tech & AI at Applause.
Researchers found the top QA-related activities that involve human testing include prompt and response grading (61%), UX testing (57%), and accessibility testing (54%).
Despite an acceleration in AI application development, a concerning number of organizations are overlooking quality assurance (QA) efforts during the software development lifecycle.
A key talking point of the Applause study centered on human involvement in the development lifecycle. With developers ramping up their use of generative AI tools in daily workflows, the need for a 'human touch' has become critical.