Human × Intelligent
In a world where technology transforms faster than we can make sense of it, Human × Intelligent invites you to pause, think and design the future with intention.
We explore the intersection of humanity and intelligence: how leaders, creators and systems can co-create meaningful impact.
Conversations, frameworks and ideas that unite purpose, ethics and innovation.
The future of product is human × intelligent.
Figma → Claude → Figma: The AI workflow product designers should know
This episode was originally going to be about something else, but a conversation over the weekend reminded me of a workflow I use quite often when developing and designing applications.
So instead, I decided to share one Human × Intelligent workflow I keep coming back to:
Figma → Claude → Figma
Rather than treating AI as a chatbot outside the workflow, this setup connects Claude directly to the design environment using Model Context Protocol (MCP). That means the model can analyze interfaces, reason about product systems and help accelerate design thinking.
In this episode, I talk about:
- What changes when AI connects directly to design tools
- Why context makes AI much more useful for product design
- Real workflows I use: UX audits, design systems extraction, dashboard analysis and component generation
- The difference between Official Figma MCP and Figma Console MCP
- What worked well, what didn’t work so well and what I’m still experimenting with
- Why I think the real shift is AI becoming part of the workspace
This is not about replacing designers; it's about building better collaboration between human judgment and intelligent systems.
If you’re exploring AI workflows for product design, design systems or complex SaaS products, this episode should give you a practical mental model for where things are heading.
Example workflows mentioned in the episode
UX audit of a flow: Analyze selected screens for hierarchy, cognitive load, accessibility, spacing consistency, CTA clarity and user flow friction.
Design system extraction: Analyze selected UI and identify typography scale, color tokens, spacing tokens, component patterns and layout grid.
Reusable component generation: Convert layouts into base components, variants and nested structures optimized for scale.
Dashboard refactoring: Audit dashboards for information hierarchy, data density, scanning patterns, visual grouping and progressive disclosure.
Retention system mapping: Map a product UI to triggers, actions, rewards, feedback loops and habit formation patterns.
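All of the workflow prompts above share one shape: context, task, constraints, expected output. As a minimal sketch of that structure (the helper name and example strings are my own illustration, not part of any Figma or Claude API):

```python
def build_prompt(context: str, task: str, constraints: str, output: str) -> str:
    """Assemble a structured design-review prompt from its four parts:
    context, task, constraints, and expected output."""
    return (
        f"Context: {context}\n"
        f"Task: {task}\n"
        f"Constraints: {constraints}\n"
        f"Expected output: {output}"
    )

# Example: the UX-audit workflow expressed in this shape.
prompt = build_prompt(
    context="B2B SaaS platform with an analytics dashboard for operations teams",
    task="Perform a UX audit of the selected screens",
    constraints="Keep the existing structure; do not redesign from scratch",
    output="Prioritized improvements with a short rationale for each",
)
print(prompt)
```

Keeping the constraints explicit is what stops the model from proposing a full redesign when you only asked for a review.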
Setup steps
- Sign up for a Figma Pro seat and Claude Pro or Max
- Install Node
- Install Claude Code
- Create a Figma token
- Enable Figma Dev MCP mode
- Configure the Figma MCP server
- Install Figma Console MCP locally
- Install the design systems MCP assistant
- Install the Desktop Bridge plugin
- Install the Figma MCP server in Claude Desktop
- Restart Claude Desktop
- Run 'check Figma status'
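For the "Configure the Figma MCP server" and "Install the Figma MCP server in Claude Desktop" steps, Claude Desktop reads MCP servers from its `claude_desktop_config.json` file. A sketch of what such an entry can look like — the package name, flag and environment variable shown here are assumptions, so follow the instructions of the specific MCP server you install:

```json
{
  "mcpServers": {
    "figma": {
      "command": "npx",
      "args": ["-y", "figma-developer-mcp", "--stdio"],
      "env": {
        "FIGMA_API_KEY": "your-figma-token"
      }
    }
  }
}
```

After editing the file, restart Claude Desktop so it picks up the new server, then run the status check above.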
--
Links:
Episode page:
Madalena on LinkedIn: /madalenafigueirasdacosta
Subscribe: https://substack.com/@humanxintelligent
—
🎙️ Human × Intelligent explores how humans and intelligent systems evolve together, across product, behavior and culture.
---
#AIAdoption #EnterpriseAI #HumanInTheLoop #ResponsibleAI #AIGovernance #AIWorkflows #AITrust #AILeadership
🎙️ Human × Intelligent - a podcast about trust, transparency and human agency in AI systems, for product designers, PMs and founders building with AI.
🔔 Subscribe so you don't miss the next episode
🌐 humanxintelligent.com
Hosted by Madalena Costa · Senior product designer and AI systems strategist
Hello everyone and welcome back to Human × Intelligent. So, this episode was actually supposed to be about something else, but over the weekend I had a conversation that reminded me of a workflow I use a lot when I'm developing and designing applications. And I realized I had never properly shared it here, and I thought it might be useful. I've been testing a lot of workflows lately, for a few months now, even. Some of them are very useful, some are interesting but still a bit messy, and others really save time. But I wanted to share this one specifically because it works really well for me: the flow of Figma → Claude → Figma. Which sounds pretty simple, but it represents a much bigger shift in how I think about artificial intelligence and how we can co-create together. Because this is not just about asking AI for design ideas; it's about what happens when artificial intelligence stops living outside the workflow and starts becoming part of the workspace itself. And that, to me, is very much a Human × Intelligent conversation.

I'm never really interested in artificial intelligence just as a novelty. I'm interested in what happens when human judgment and intelligent systems actually work well together. Not when humans disappear, not when machines take over, but when the collaboration gets better because they work together. And this workflow is one of the clearest examples of that, at least in my opinion. If you don't agree, please share it below, send me a message, DM me, wherever. Let's have this conversation, because I think it's an important and interesting one to have.

But first, the old way of using artificial intelligence. For a long time now, most people have used AI in a very simple way.
Let's say it like this: you open ChatGPT or even Claude, you ask a question, you get an answer, you copy it, you paste it into Figma, Notion, code, docs, wherever, and then you adapt it manually. And yes, that can be very useful, but it's still fragmented. The artificial intelligence is outside the actual working environment. And for product designers, that matters a lot, because as product designers we know context changes everything. A design decision depends on the screen, the flow, hierarchy, spacing, edge cases, constraints, implementation reality, the business tension behind the decision. There's a lot behind what we're working on. So if AI is disconnected from the tool, it guesses. And when it guesses, the output is often very generic. But if AI is connected to the tool, it can actually observe: it can look at the interface, inspect the structure, reason with more context. And that changes the quality of the collaboration. This is where the shift happens. It's not just "AI can answer this design question"; it's "AI can now work closer to the thing you're actually designing."

So what do I mean by Figma → Claude → Figma? It's a workflow where Claude is connected to Figma through MCP, so it can access the design context more directly. It can inspect what's selected, understand parts of the file, reason over the interface, and in some setups even help with deeper system operations. And that changes what prompting becomes, because now you're not prompting in a vacuum; you're prompting against a real interface. So instead of saying something vague like "can you improve this dashboard?", you can say "analyze this dashboard and suggest improvements for information architecture, data density, scanning patterns, visual grouping and progressive disclosure." That is a very different kind of interaction.
Or instead of asking "how should I structure a design system?", you can say "analyze the selected UI and extract the design system: identify the typography scale, color tokens, spacing tokens, component patterns and layout grid, and propose a reusable token structure." That's already much more operational, and that's why I think this matters. The value is not only in generating things; it's in making AI more useful inside real workflows.

So why should product designers care about this? From my perspective, it matters for quite a few reasons. The first is context, because AI is always more useful when it can see the actual thing, not just your summary of the thing. The second is less friction, because if I don't have to keep taking screenshots, rewriting context, explaining what the screen does and correcting assumptions, the workflow becomes much faster. The third is systems thinking, and honestly, this is probably the part that excites me most, because I really like to think in systems, so it's perfect for me. Product design is not just drawing screens. It's about structure, consistency, flows, component logic, trade-offs, product behavior, scaling patterns, design-to-code alignment. There's a lot of it, and all of it is systematic, which is pretty cool. And connected AI gets much more interesting in those layers than in the purely visual or decorative ones. The fourth reason is that it pushes the design role further into orchestration, which I actually think is a good thing, because the designer becomes even more responsible for setting direction, framing the problem well, defining constraints, evaluating quality, and deciding what should and should not be automated. And that feels very aligned with Human × Intelligent, which is what we are exploring here.
The human sets the meaning and the standard; the intelligent system expands the capacity.

Now, the workflows, because I find them genuinely useful. Let me make this practical, because I think that's the part people care about the most. There are a few workflows here that I generally find useful.

Number one: automatic UX audits. This is probably the easiest one to understand. You select a few screens in Figma and ask Claude to perform a UX audit. For example: "analyze the selected screens and perform a UX audit. Evaluate hierarchy, cognitive load, accessibility, spacing consistency, CTA clarity and user flow friction." What I like about this is not that it replaces the designer; it doesn't. What it does is give you a strong first pass. And before I continue, I just want to make sure you know that the prompts I'm sharing in this section will be below, in case you want to try them yourself. It's a second lens, a way to pressure-test the flow before you present it, hand it off, or even take it to testing. Do try it and let me know whether it worked for you, and if you have anything even more useful that we can share.

Number two: turning a layout into design structure. This one is really strong; I find it one of the strongest, because you can ask Claude to analyze a selected UI and extract from it, for example, the typography scale, color tokens, spacing tokens, component patterns and layout grid, basically what I shared above, and then propose a reusable token structure. This moves from "what does this screen look like?" to "what system is this screen revealing?" That's especially useful for legacy products, inconsistent libraries, scaling teams, messy interfaces that need structure. The list goes on.

Number three: generating reusable components.
Yes, you can do that. You can take a layout and ask Claude to convert it into reusable components: base components, variants, nested components, and a proposed hierarchy optimized for scalability. Again, the goal is not blind acceptance; it should never be blind acceptance. The goal here is faster structural thinking, because most teams are not struggling to draw UI, they are struggling to organize it well.

Number four: refactoring complex dashboards. This is very relevant to the kind of work I actually do. If you've worked on dashboards, internal tools, analytics products or B2B systems, you know the problem is never just "make it prettier". It's: what should stand out first? How much data is too much? What belongs together and what doesn't? Where is the overload? What should be visible immediately, and what should be progressively disclosed? This is where connected AI can be genuinely useful, because you can ask it to analyze the dashboard, like I was saying above but with a bit more context, for information hierarchy, scanning patterns, visual grouping and progressive disclosure. And that's actually practical.

Number five: generating UI directly in Figma. This is the flashy one. Yes, you can ask it to create a SaaS analytics dashboard layout with, for example, sidebar navigation, a search bar, metric cards, charts and a recent activity table, and yes, it can be very helpful with that. But honestly, this is not the part I'm most excited about. It's useful for exploration, quick prototyping, MVPs, trying ideas and then seeing if they're good or not. The deeper value for me is still on the analysis, structure and systems side, because that's where we can shine even more.

Number six: mapping the UI to product behavior. This one is probably the most me.
If you know me and how I work, you know exactly what I mean, but let me share it. You can ask Claude to analyze a product UI and map it to a retention system. Retention is one of my favorite topics, specifically in product, and I do a lot of talks on it because I find it very interesting. You can ask it to identify, for example, the triggers, actions, rewards, feedback loops and habit formation patterns. And I love this, because now AI is not just acting like a design assistant, it's acting more like a product-thinking partner. It helps connect interface decisions to behavioral and product logic. And that is very much where I think the real value starts to show up.

So, what worked well for me? A few things, that's true, though what worked well for me might not work well for others. The first thing I'd say is structured prompting, which matters a lot, but you need to structure it in a way that fits your team, your product, your project, whatever you're doing. My rule of thumb is very simple: context, task, constraints, expected output. You start with the context, then you give the task, the constraints, and the expected output. And that changes the quality of the answer so much, because instead of saying "can you improve this?", you're saying: the context is that this is a B2B SaaS platform with an analytics dashboard for operations teams; the task is to perform a UX review; the constraint is to keep the existing structure and not redesign from scratch; and the expected output is a list of prioritized improvements with rationale.

The second thing that worked very well is using Claude for code-adjacent design work, not just visual generation.
And I think that's something people underestimate, because Claude is especially strong when the work involves, for example, structure, systems reasoning, logic, documentation, tokens, components, design-to-code bridges. The list goes on and on.

The third thing that worked well is thinking in workflows, not isolated prompts. This is where I think Claude Skills are really powerful too. Because once you realize you keep doing the same things over and over, like UX audits, design system extraction, component documentation, PRD framing, structured critique, the question becomes: why am I writing this from scratch every time? You know the logic, you have the templates, and you can review the output very quickly because you know exactly what you're reviewing and what you should be looking for. And if the artificial intelligence is missing something, you catch it, because you know what you're talking about. That's the human magic. A Skill, in this sense, is basically reusable workflow intelligence: a way of encoding context, standards, structure, output format and even nuance, so the system can do repetitive work with more consistency. And that, to me, is one of the most interesting parts of the whole ecosystem, because it's not just better prompts; it's very Human × Intelligent.

Now, what didn't work so well. Let's be honest: the first issue is that this is not yet the kind of thing most design teams are going to set up casually and adopt overnight, because there are still things that need to be improved before we get there. And that matters, because no matter how powerful something is, adoption depends on usability. The second is that round-tripping is still imperfect.
When you move between code and Figma or generate editable designs, you can still get spacing issues, awkward alignment, missing sections, imperfect fidelity. So I would not present this as "great, handoff is solved, you don't need to do anything else." Not at all. It is useful, yes, and promising for sure, but it still needs judgment and cleanup. The third is ROI, because ROI depends a lot on your workflow maturity. If your process is already structured, AI adds leverage. If your process is still messy, AI often just adds more noise, faster. And the fourth, I would say, is that most design teams do not want to live in the terminal, which is why I also find it interesting when people start building more designer-friendly layers on top of these workflows. Honestly, I think that's one of the biggest opportunities: bringing this kind of intelligence directly into the canvas in a way that feels natural for designers.

One reason I keep coming back to this workflow is the kind of work I do. The challenge in SaaS dashboards is almost never just the visual layer, like I was saying above. The real questions are: what should someone notice first? What should feel grouped? Where is the attention going? Where is the overload? What needs to be compared? What drives action? What needs progressive disclosure? That's why I find connected AI genuinely useful, because I can use it to audit hierarchy, challenge grouping, think about scalability and scannability, extract patterns, propose system structure. And this is not because I want the AI to design the dashboard for me; actually, I'd prefer it didn't. It's because I want to move faster through the analysis and refinement loop. And that's the big difference.

Also, I want to make one distinction here, because I think it helps clarify the landscape.
That's the difference between the official Figma MCP and the Figma Console MCP. I don't think the useful question is "which one is better?" The useful question is "what job do you need done?" The official Figma MCP is great for, for example, structured context, quicker setup, code-to-Figma style workflows, generating editable outputs and fast iteration. The trade-off is that it's not really focused on deeper design system operations. The Figma Console MCP is much more of a systems toolbox: that's where you get into variables, tokens, component properties, node manipulation, documentation, deeper file control. It's much more powerful for design system work, but also much more setup. So my mental model is: official Figma MCP equals structured context and quick iteration; Figma Console MCP equals deeper control and design system operations. And depending on your workflow, you might want one or both.

How do you know if this is good for your workflow? First of all, I don't think this is for everyone, or that everyone needs it. If the workflow is well designed, AI helps. If the workflow is unclear, AI just adds more noise. But the bigger Human × Intelligent point underneath all of this is that we are slowly moving from AI as a tool you visit to AI as a participant in the work. And that is a very different era we are in, because the question is no longer just "how do I write better prompts?" It's becoming "how do I design better collaboration between humans, tools, systems and models?" That includes prompts, skills, memory, MCP, automations, reusable workflows, system governance, AI ethics. And honestly, that's why I've been spending time creating automations on my Mac mini and thinking more seriously about workflow design as its own layer of work, because I think that's where the leverage is.
Not just faster outputs, but better systems for doing better work. And this is why I share this podcast, and why I learn by doing. I learn a lot, but I also do a lot, because I want to make sure that what I bring here is something I've tested and tried myself, to see whether it works in a real context. This Figma → Claude → Figma flow is just one example of that. And the reason I wanted to share it now, specifically, is that the weekend conversation reminded me I've been testing these kinds of workflows for a while, but I hadn't really talked here about, for example, what worked for me, what didn't, what changed, what excited me, and where I think this is actually useful. I think that's a more honest conversation than either hyping everything up or dismissing it entirely. These tools are not magic, but they're also not useless. They are evolving. And I think the best approach is: test carefully, keep what creates real leverage, be honest about what's still clunky, and keep the human standard high. That is the Human × Intelligent mindset I care about. If you've been experimenting with any version of this yourself, I'd genuinely love to hear what's been useful for you. Thank you for listening, and I'll see you in the next episode.