UX design
From Medium · 1 day ago

Oh, but there's one more thing
Designers must address real project challenges while navigating the evolving role of AI in the creative process.
Santa Cruz de Tenerife is one of the most idyllic cities in the Canary Islands. At its heart stands its jewel, the Auditorio. It's a place where talent from both worlds, New and Old, comes together: a heaven for theatre, opera, dance, and music.
For decades, SaaS products reduced ambiguity. Users supplied constrained inputs, and the system handled the output. It was never Minority Report cinematic, but it was predictable. By providing predictable environments for manipulating data, these products let users learn by doing: moving things, adjusting variables, and watching the outcome emerge through interaction.
Imagine a user opening a mental health app while feeling overwhelmed with anxiety. The very first thing they encounter is a screen with a bright, clashing colour scheme, followed by a notification shaming them for breaking a 5-day "mindfulness streak," and a paywall blocking the meditation they desperately need at that very moment. This experience isn't just poor design; it can be actively harmful. It betrays the user's vulnerability and erodes the very trust the app aims to build.
Today we are at the cusp of revolutions in artificial intelligence, autonomous vehicles, renewable energy, and biotechnology. Each brings extraordinary promise, but each introduces more complexity, more interdependence, and more latent pathways to failure. This makes prudence critical. Good design recognizes what cannot be foreseen. It acknowledges the limits of prediction and control. It builds not merely for performance, but for recovery.
AI is disrupting more than the software industry, and it is doing so at breakneck speed. Not long ago, designers were deep in Figma variables and pixel-perfect mockups. Now, tools like v0, Lovable, and Cursor are enabling instant, vibe-based prototyping that makes old methods feel almost quaint. What's coming into sharper focus isn't fidelity; it's foresight. Part of the work of Product Design today is conceptual: sensing trends, building future-proof systems, and thinking years ahead.
Autonomy is an output of a technical system. Trustworthiness is an output of a design process. Here are concrete design patterns, operational frameworks, and organizational practices for building agentic systems that are not only powerful but also transparent, controllable, and trustworthy. In the first part of this series, we established the fundamental shift from generative to agentic artificial intelligence. We explored why this leap from suggesting to acting demands a new psychological and methodological toolkit for UX researchers, product managers, and leaders.
Your junior designer spins up a prototype in Lovable before lunch. Your PM shows you a "working" MVP built entirely with Cursor within a day. And your CEO forwards you a LinkedIn post about how AI will replace 80% of UI work by 2026. And it seems like anyone can now make an app to solve a specific problem. Has the graphical interface really died, as Jakob Nielsen provocatively suggests?
The question dropped into the Slack channel before the user research summary. Before the problem was clearly defined. Before anyone asked if users actually needed this feature. Your product manager had already generated three interface options in ChatGPT. Now they're asking which one to build. Not whether to build. Not why to build. Which. And when you slow the conversation down to ask those questions, you discover that strategic thinking now reads as bottleneck behavior.
My role was straightforward: write queries (prompts and tasks) that would train AI agents to engage meaningfully with users. But as a UXer, I was immediately struck by one question: who are these users? Without a clear understanding of who the agent is interacting with, it's nearly impossible to create realistic queries that reflect how people engage with an agent. That's when I discovered a glitch in the task flow.