My role was straightforward: write queries (prompts and tasks) that would train AI agents to engage meaningfully with users. But as a UXer, one question immediately stood out - who are these users? Without a clear understanding of who the agent is interacting with, it's nearly impossible to create realistic queries that reflect how people actually engage with it. That's when I discovered a glitch in the task flow: there were no defined user archetypes guiding the query creation process. Team members were essentially reverse-engineering the work - you think of a task, write a query to help the agent execute it, and cross your fingers that it aligns with the needs of a hypothetical "ideal" user, one who might not even exist.
I want to revisit the age-old question of "button placement" to see how UX may have shifted, and how the technology we have now may have changed the way we consume content - and how that, in turn, affects where buttons and UI elements are placed. If we read from left to right, where should the primary button go: left or right?
To navigate is to read the world in order to move through it, whether it means scanning a crowd to find a familiar face, deciphering the logic of a bookstore's layout, or following the stars at sea. This ability has always been mediated by tools (many of them disruptive and transformative). Still, the rise of artificial intelligence presents us with a radical promise: a world where we no longer need maps, because the information or the product 'comes to us.'
Most design problems aren't 'design' problems. They're 'thinking' problems. They're 'clarity' problems. They're 'too-many-tabs-open' problems. More prototyping, more pixel-shifting, more polish in Figma alone isn't going to help you with those. For me, without clear thinking, Figma just results in more confusion, more mess, and more mockups than I can mentally manage. The problem: Figma wasn't the bottleneck - my thinking was.
AI design tools are everywhere right now. But here's the question every designer is asking: Do they actually solve real UI problems - or just generate pretty mockups? To find out, I ran a simple experiment with one rule: no cherry-picking, no reruns - just raw, first-attempt results. I fed 10 common UI design prompts - from accessibility and error handling to minimalist layouts - into 5 different AI tools. The goal? To see which AI came closest to solving real design challenges, unfiltered.
I actually started out thinking I wanted to be a graphic designer. I was really into anime as a kid, and when I got my hands on a (very outdated and pirated) copy of Photoshop 6 at around age 11, I was hooked. In high school, I also taught myself how to code, which opened the door to doing small freelance jobs here and there while I was still in school.
The thing is, the company I was working for had a dedicated photo team that provided beautiful, high-quality images with plenty of contextual and action shots, perfect for web pages. So when what landed on my desk was a classic full-page hero image with a gradient, I wasn't exactly surprised. But it did frustrate me that we couldn't come up with something bolder.