Human × Intelligent
In a world where technology transforms faster than we can make sense of it, Human × Intelligent invites you to pause, think and design the future with intention.
We explore the intersection of humanity and intelligence: how leaders, creators and systems can co-create meaningful impact.
Conversations, frameworks and ideas that unite purpose, ethics and innovation.
The future of product is human × intelligent.
The End of the Chatbox? Designing AI interfaces that act, not just answer
The End of the Chatbox? Designing AI interfaces that act, not just answer
In this episode of Human × Intelligent, Madalena Costa explores one of the biggest design shifts happening in AI products right now: the possible end of the chatbox as the default interface for artificial intelligence.
Chat interfaces made AI accessible to millions of people. They are familiar, flexible and great for brainstorming, research, writing and exploration. But as AI systems become more agentic, able to plan, use tools, act across workflows and move work forward, the traditional chatbox starts to reveal its limitations.
When AI moves from answering questions to taking actions, the design problem changes.
This episode explores why chat interfaces can become inefficient inside real workflows and what product designers, UX professionals and product teams should start learning now to design more embedded, contextual and trustworthy AI experiences.
We discuss:
- Why chat became the dominant AI interface
- Why chat breaks in action-based workflows
- The shift from conversation interfaces to action-driven experiences
- UX patterns for agentic systems: previews, rationale, progress, undo and adjustable autonomy
- How designers can move from building chat interfaces to designing human-AI collaboration
The future of AI interfaces is likely not 'no UI' or invisible magic. It’s embedded intelligence that supports work directly inside the product experience.
🎙️ Human × Intelligent - a podcast about trust, transparency and human agency in AI systems, for product designers, PMs and founders building with AI.
🔔 Subscribe so you don't miss the next episode
🌐 humanxintelligent.com
Hosted by Madalena Costa · Senior product designer and AI systems strategist
Hello and welcome back to Human × Intelligent. I am Madalena Costa, and today I want to talk about something that I think product designers, UX professionals, founders, product managers, and really anyone building AI products should be paying close attention to right now: the possible end of the chatbox as the default interface for artificial intelligence.

Now, to be very clear from the beginning, I do not think that chat is completely over. I still think that chat is useful. I still think it works extremely well for some use cases. And I definitely do not think that every AI experience is suddenly going to become invisible overnight. But I do think something important is happening, and we need to talk about it. As AI becomes more agentic, meaning it can plan, act, use tools, make decisions across steps, and operate with more autonomy, the traditional chatbox starts to feel less like the final destination and more like a transitional interface. And that is a very different claim.

So this episode is really about that shift. What changes when AI moves from answering to acting? Why does the chat interface start to break in certain contexts? And what should product designers and UX teams start learning now? Also, how do we design experiences that feel more natural, trustworthy, embedded in our day-to-day, and useful without losing control, clarity, or even human agency? Because I think this is one of the biggest design shifts happening right now. And that is not because the interface is disappearing, like I'm saying, but because the role of the interface is actually changing.

Why did chat win in the first place? Before we talk about where things are going, I think it's important to understand why chat became the dominant pattern in the first place. And honestly, it makes total sense. Let's see why. Chat is very familiar. It's low friction, it is very flexible, it is relatively fast to build. And it gives users a blank field and a simple interaction model that most people already understand. You type something, the system responds, you keep going. In that sense, chat was probably the fastest and most accessible way to put powerful AI into the hands of millions of people, which it was. ChatGPT was one of the quickest products ever to reach millions of users, so that kind of says something about it.

But it also lowered the barrier. It made artificial intelligence feel conversational, human, approachable, and immediate. It was already there right away. And for a lot of use cases, that was enough. You search, you brainstorm, you write things, you draft support replies, you reflect, you do research, you do quick synthesis and divergent thinking, for example. And for those kinds of tasks, chat works really well, because the user is actively trying to explore, refine, and co-think through language.

So this is not really an anti-chat argument. It is more of an anti-chat-for-everything argument, if that makes sense. Because once we move beyond open-ended conversations and into workflows, executions, embedded actions, and multi-step orchestration, chat starts showing its limitations very quickly. And the moment artificial intelligence starts doing more than responding, the design problem changes.
If a system can take actions, monitor progress, use tools, move across systems, or even proactively help inside of software, then asking the user to constantly open a chat panel, explain context, write a prompt, clarify the task, and interpret a stream of responses starts to feel inefficient. In some cases, it even becomes bad friction. And I think this is the key shift: we are moving from conversation as the interface to action as the experience. That means the user is no longer just asking questions. They are trying to complete something. They want to edit a document, organize their research, triage a request, summarize meetings, right? Restructure a dashboard, generate a first pass, let's say it like this. Move work forward, make decisions, reduce effort. If that is the job, then a detached conversation is often not the best form factor, because it also generates a lot of friction to find things.

Chat basically creates a few recurring problems. First, it asks users to be articulate. The system may be very powerful, but the user still has to know how to ask. They have to write clearly, structure intent, provide context, think through the request, sometimes even think like the model while having a conversation with it. And that is fine for advanced users, and even for certain tasks a casual user can manage if they don't want to explore more, but it is not a great default for every person and every workflow, especially the complex ones.

Second, chat hides too much context in the thread. A long conversational stream can bury what matters. Users miss important signals, they forget what happened, they struggle to compare outputs, and the interface starts pulling attention away from the actual work.

Another one, which is also a very interesting one: chat often lives outside the workflow. It becomes a detour. Instead of intelligence being embedded in the place where work is happening, it sits in a side panel, a bubble, or a separate product. Right now, Claude is doing very interesting work when it comes to these kinds of workflows. I advise you to try and explore a little bit and add some integrations, like for example Google Calendar, to see how it works for you as well. I've been testing a lot with this, and I'm more than happy to explore it in one of the episodes if this is something that interests you. If it is, please do let me know, send me a message or a reply. I'm very interested in knowing your perspective on it too.

So now the user is not only doing the task, they are also translating the task into a prompt. And that is a very important distinction: a lot of artificial intelligence product experiences today do not reduce friction, they relocate friction into language. This is where the design opportunity becomes much more interesting. So, designers, let's pay attention. Traditional software was largely command-based. You tell the computer exactly what to do: click this, open that, move there, apply this filter, create this file. But agentic artificial intelligence changes the interaction model. Now the user can increasingly express intent rather than manually specify every step. So instead of saying click here, then there, then summarize those notes, then group them by team, then turn them into actions, the user can say: help me turn this messy research into a prioritized action plan.
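If it helps to see that command-versus-intent distinction in code, here is a minimal TypeScript sketch. Every name in it (`Command`, `IntentRequest`, the example payloads) is hypothetical, just to illustrate the shape of the two interaction models, not any specific product's API.

```typescript
// Command-based interaction: the user specifies every step explicitly.
type Command =
  | { kind: "open"; target: string }
  | { kind: "summarize"; target: string }
  | { kind: "groupBy"; target: string; field: string }
  | { kind: "createTasks"; source: string };

// The user (or the UI on their behalf) issues each step manually.
const manualWorkflow: Command[] = [
  { kind: "open", target: "research-notes" },
  { kind: "summarize", target: "research-notes" },
  { kind: "groupBy", target: "research-notes", field: "team" },
  { kind: "createTasks", source: "research-notes" },
];

// Intent-based interaction: the user expresses the goal, plus whatever
// context the product already has, and the system plans the steps.
interface IntentRequest {
  goal: string;
  context: { workspaceId: string; selection: string[] };
  constraints?: string[]; // e.g. "don't change anything without approval"
}

const intent: IntentRequest = {
  goal: "Turn this messy research into a prioritized action plan",
  context: { workspaceId: "ws-42", selection: ["research-notes"] },
  constraints: ["ask before creating tasks"],
};
```

Notice how the intent version carries context the product already knows, so the user never has to restate it, which is exactly the friction chat relocates into language.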
And that is a huge, huge opportunity, and a huge shift as well. When that happens, the interface should also evolve, because if the system understands intent and can act across steps, then the design challenge is no longer about giving people more places to type. It becomes about helping people express goals, understand what the system is doing, review important decisions, adjust autonomy, recover from mistakes, and stay in control without doing all the work themselves. And that is the real design space.

I do think a lot of people are using phrases like "the end of UI" or "no UI" in a way that sounds dramatic, but it is not actually very helpful, because the interface does not really disappear, and we need to talk about this. What changes is that the interface becomes less centered on explicit manual input and more centered on guidance, visibility, and orchestration. So rather than asking how do we remove the interface, a better question would be: how do we reduce unnecessary interface friction while keeping trust, clarity, and control for the user? And that is a much better framing, right? But do let me know what you think about it.

In practice, it means a few things change. Instead of static pages, we might get more dynamic, context-shaped components. Instead of always asking the user for a prompt, we may infer some needs from the workflow. Instead of a giant side chat, we may use inline actions, previews, smart suggestions, and activity indicators. And instead of artificial intelligence being a separate assistant, it becomes part of the product itself. So the future is probably not invisible magic everywhere. It is more likely embedded intelligence, contextual actions, generative interface moments, proactive support when appropriate, and clear safety patterns around action. That is much more realistic and much more useful for designers to work with.

Now, this part matters a lot, because I do not want to create a false binary. Chat is very, very good for some things, especially when the task is open-ended, exploratory, reflective, ambiguous, or conversational by nature. For example: I am brainstorming an idea, or exploring a topic, or asking a follow-up question, or drafting and rewriting something, like the emails a lot of people are writing right now, or researching possibilities, sense-making, thinking through strategy, learning. You see, there are a lot of tasks chat can do. In those moments, the back and forth is very valuable. The conversation itself helps the user get somewhere. So chat still has a place.

But when the task becomes repetitive, embedded in an existing workflow, highly contextual, action-heavy, easy to verify, something the system can partly execute, then chat often stops being the best answer. And that is when embedded agentic patterns become much more powerful. Once the system can do things, not just say things, the core UX patterns start changing. And I think this is where product teams need to level up, because designing for agentic artificial intelligence is not just about adding a smarter system. It is about designing a relationship between the human and the system.

That relationship needs a few things. It needs intent previews. Before an agent acts, the user often needs a clear summary of what it is about to do, not hiding the reasoning and not giving vague promises, but actually having a practical preview, something like: I found 12 duplicate tasks and I'm ready to merge them.
Or: I grouped these support tickets into four teams and drafted labels. Or: I could schedule these interviews and prepare summary documents. This matters a lot because the action becomes visible and feels less risky, which Claude does very well. There are other tools that do it, but right now I'm testing a lot with Claude and OpenClaw, so those are the ones I'm using at the moment. But please do share your workflows. What are you doing? I'm very curious.

Number two is rationale. Users do not always need a full chain of reasoning, but they often need enough explanation to understand why something happened the way it did. Why was this ticket prioritized? Why was this insight grouped here? Why did the system recommend this action? Because trust is not only about being correct, it is about being understandable. Users need to trust you, and they need to understand what is happening.

Number three is status and progress. If an agent works in the background, the product has to communicate that work well. What is happening right now? What is waiting? What finished? What needs approval, and what failed? A good agentic experience needs an equivalent of progress language: not just loading spinners, but real status communication, you know?

Fourth, audit and undo. This is non-negotiable. It's very important, because if the system can act, it can also get things wrong, as we've seen plenty of times. Like the senior scientist from OpenAI, if I'm not mistaken, whose agent decided to remove and delete a lot of her emails without her knowledge, without asking her if it was okay, even after she said it was not okay. And when she found out and ran to the computer saying, hey, you cannot do that, why did you do that, the reply was: oops, sorry, my mistake. But the problem was already created. So it's very important to take this into consideration. Users need to be able to undo, edit, reject, inspect, rerun, and escalate back to human control. Without this, the product will feel fragile and unsafe.

Number five, adjustable autonomy. Not every user wants the same level of automation. Some users want suggestions only, some want approval before action, and some want the system to take care of repetitive things automatically. That is why autonomy should not be treated as fixed. It should often be designed as adjustable: suggest, act with approval, act automatically in safe cases. That kind of flexibility makes the system much more usable across maturity levels and risk levels.
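If you want to see those five patterns side by side, here is a rough TypeScript sketch of what an agentic action's lifecycle could look like as a data model. Everything in it (the type names, the autonomy levels, the example merge action) is a hypothetical illustration of previews, rationale, progress, undo and adjustable autonomy, not a real library.

```typescript
// Adjustable autonomy: the same action can run under different modes.
type AutonomyLevel = "suggest_only" | "act_with_approval" | "act_automatically";

type ActionStatus =
  | "proposed" | "awaiting_approval" | "running" | "done" | "failed" | "undone";

interface AgentAction {
  id: string;
  // Intent preview: a plain-language summary shown before anything happens.
  preview: string;
  // Rationale: enough explanation to understand why, not a full chain of reasoning.
  rationale: string;
  // Status and progress: what is happening right now, waiting, done, or failed.
  status: ActionStatus;
  autonomy: AutonomyLevel;
  // Audit and undo: reversibility is non-negotiable for actions.
  reversible: boolean;
  undo?: () => void; // how the user escalates back to human control
}

const mergeDuplicates: AgentAction = {
  id: "merge-001",
  preview: "I found 12 duplicate tasks and I'm ready to merge them.",
  rationale: "These tasks share the same title, assignee and due date.",
  status: "awaiting_approval",
  autonomy: "act_with_approval",
  reversible: true,
  undo: () => console.log("Restoring the 12 original tasks."),
};

// A sensible safety rule: only reversible actions ever run without approval.
const canRunWithoutApproval =
  mergeDuplicates.autonomy === "act_automatically" && mergeDuplicates.reversible;
```

The design choice worth noticing is that autonomy lives on each action, not on the whole product, which is what lets the same system suggest in risky cases and act automatically in safe ones.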
Let's make this concrete. Imagine you are designing an artificial intelligence feature inside a product like a dashboard, a research repository, a CRM, or even a task system. The weak version is this: you add a chatbot sidebar and say, ask the AI anything. That sounds exciting in a demo, right? But in practice, the user now has to know what to ask, explain the context, and keep switching attention, and the cognitive load goes higher and higher. They also need to interpret the answers and often even manually do the rest.

Now compare that to a more embedded version. The user is already inside the workflow. They select a cluster of notes or view a project, and the product offers relevant actions inline: summarize findings, identify gaps, group this cluster by team, draft next steps, create tasks, compare changes, flag contradictions, right? They have these options, and the artificial intelligence is still there. It is still doing intelligent work, but now it is helping within the task rather than asking the user to detour into a conversation. That is a very different experience, and for many products it is probably the stronger route.
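As a sketch of that embedded version, the inline actions could be modeled as contextual commands attached to whatever the user has selected. Again, all the names below are made up for illustration; the point is the pattern of offering actions inside the workflow instead of a detached chat panel.

```typescript
// Inline actions offered in context, where the work is already happening.
interface InlineAction {
  label: string;
  appliesTo: "note-cluster" | "project" | "dashboard";
  requiresApproval: boolean; // higher-impact actions get a preview step
}

// The actions from the research repository example, surfaced on selection
// instead of waiting for the user to phrase a prompt in a side chat.
const clusterActions: InlineAction[] = [
  { label: "Summarize findings", appliesTo: "note-cluster", requiresApproval: false },
  { label: "Identify gaps", appliesTo: "note-cluster", requiresApproval: false },
  { label: "Group this cluster by team", appliesTo: "note-cluster", requiresApproval: false },
  { label: "Draft next steps", appliesTo: "note-cluster", requiresApproval: true },
  { label: "Create tasks", appliesTo: "note-cluster", requiresApproval: true },
];

// The product surfaces these when a cluster is selected; the AI still does
// the intelligent work, but inside the task rather than in a detour.
function actionsForSelection(selection: InlineAction["appliesTo"]): InlineAction[] {
  return clusterActions.filter(a => a.appliesTo === selection);
}
```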
If you are a product designer, UX designer, researcher, strategist, or even a PM, I really think this is a moment to expand your skill set. Because as products with artificial intelligence evolve, I do not think the highest-value designers will simply be the ones who know how to design better chat interfaces. I think they will be the ones who know how to design human-AI collaboration, workflow-level intelligence, trust and control patterns, reversible actions, semantic machine-readable interfaces, and dynamic systems instead of static screens. In other words, the designer starts becoming more of a workflow architect and experience orchestrator. You are not only arranging layouts anymore. You will be shaping when the system acts, how it communicates, what the user approves, where intelligence appears, how confidence is signaled, how recovery works, and how much autonomy is appropriate. That is a bigger designer responsibility, but it's also a more exciting one.

So if you are listening to this and thinking, okay, this sounds important, but where do I actually begin? Here is a simple way to start. Ask these five questions about an artificial intelligence workflow. Is chat truly the best interface here, or is it just the fastest one to imagine? What is the real user job to be done, not the prompt, the actual task? What context already exists in the product that the artificial intelligence can use without asking users to repeat it? What can the system safely do on its own, and what should require preview, review, or approval? How will the user understand, verify, and recover from the system's actions? If teams asked those five questions more often, I think a lot of artificial intelligence products would immediately become better.

This is also very important: when should you add agentic UX? Because not every flow needs an agent, and not every product should become over-automated just because it can. Add agentic patterns when the workflow has repeated steps, the context is already available, the action reduces genuine friction, the output can be reviewed, and users benefit from speed and cognitive relief. Do not force agentic patterns when the user needs deep reflection, the stakes are very high and unclear, the data is too weak or unreliable, the outcome is hard to verify, or the automation adds anxiety instead of confidence. This is where human-centered design still matters so much. The question is not where can we put AI; the question is rather where does intelligence actually improve the experience without reducing agency or trust?

And for anyone who wants to learn this properly, I think one of the best exercises right now is this. Take a chat-first AI concept and redesign it as an embedded agentic experience. For example: a research synthesis assistant, a support inbox triage system, a calendar prep assistant, a CRM follow-up flow, a task management prioritization assistant. Then build the case study around these parts: what the original chat-first version looked like, why chat created friction, what the actual user workflow is, where the system should act, what needs preview or approval, how status is shown, how undo and audit work, and what success looks like.

I will share this below as well, but this is an incredibly strong portfolio story, because it shows that you are not just designing AI features, you are designing systems for interaction, trust, and action. But yes, I will leave this down below for you to do the case study if you want to do it. And if you do, please do share it on LinkedIn and tag me so I can also look into it and we can continue this discussion.

I also want to end the main argument on a positive note, because I do not think that this shift should make designers anxious. I think it should make us curious. This is not the disappearance of design, it is not the reduction of UX, definitely not. It is not a future where everything becomes automated and nobody needs thoughtful product people anymore. Honestly, I think it's quite the opposite. As AI becomes more capable, design becomes even more important, because the real challenge is no longer just whether the model can do something. The challenge is: should it do it? When should it do it? How should it show it? How should users stay in control? And how do we make that whole experience feel natural rather than confusing? That is the design work, deep design work. And I think the teams that understand this early will be much better positioned for what's coming next.

So no, I do not think the chatbox is completely over, but I do think we are moving into a world where it stops being the default answer to every AI product question. And that matters. That matters a lot. Because the future of AI interfaces is probably not about making everything conversational. It is about making intelligence more embedded, more contextual, more trustworthy, and more useful inside the flow of work. Not no UI, and not magic, but less unnecessary friction. And for product designers and UX professionals, that means the opportunity in front of us is huge. We need to learn how to design for intent, delegation, visibility, reversibility, autonomy, and human collaboration. Human-AI collaboration. Because the next generation of great AI products will not just respond well. They will act well, and they will make that action feel understandable, safe, and genuinely, genuinely helpful.

Thank you for listening to Human × Intelligent. If this episode gave you a new way to think about AI interfaces, share it with someone in product design or engineering who should be part of this conversation. And if you are exploring these questions too, I would love to hear from you. What do you think is replacing the chatbox in the products we are using every day? See you in the next episode.