UXchange
Welcome to "UXchange", the podcast where we (ex)change experiences! I am a firm believer that sharing is caring. As UX professionals, we all aspire to change user experiences for the better, so I have put together this podcast to accelerate learning and improvement! In this podcast, I will:
- Share learning experiences from myself and other UX professionals
- Answer the most common questions
- Read famous blogs
- Interview UX professionals
- And much more!

For more info, head over to ux-change.com
UX and AI Digest Episode 1
UX Digest | EP. 1 — Google Stitch, Microsoft's AI Rollback & the Rise of AI Note-Takers
In this episode of UX Digest, I react to three stories at the intersection of AI, product design, and user experience research — sharing my perspective as a UX researcher working daily with AI-driven tools.
🎨 Google Stitch — "Vibe Design" Goes Mainstream
Google's AI-powered UI design tool is making waves, but is it really a Figma killer? UX design cannot be reduced to the tool you use — Stitch handles maybe 5–10% of what a real UX designer does. Useful? Yes. Disruptive? Not sure.
🪟 Microsoft Dials Back Copilot — The "Less Is More" Lesson
Microsoft rolled back AI integrations across Windows 11 — photos, widgets, notepad, and more — following consumer pushback against what's being called "AI bloat." I reflect on why this keeps happening: technology gets deployed before user needs are validated. A reminder that AI doesn't automatically create value just because it's everywhere.
🎙️ AI Note-Taking Devices — Convenience vs. Privacy
From credit-card-sized recorders to AI pendants and earbuds, the market for ambient transcription hardware is growing fast — alongside software tools like Fireflies, Granola, and Notion AI. I raise a question worth sitting with: why does recording someone feel more acceptable in 2026 than it did a decade ago? And what are we losing cognitively when we stop taking notes by hand?
In today's episode, we will cover the new vibe design tool from Google, Microsoft rolling back some of its Copilot AI bloat, and some new AI note-taking devices, among other selected topics for today's user experience digest. The idea is that I read you some news from the field of user experience out loud. Sometimes it will be linked to user experience research, sometimes to product design or product management, but the goal is to focus on the intersection between user experience and AI: to see how AI affects or changes our practice and impacts user experience overall. So I'm going to select a few news items and then comment on them to the best of my ability. I hope you like this format. I'm going to try it for some time and see whether it answers the needs of my audience; if not, maybe I'll switch to something new. Happy to adapt either way. Okay, so let's start. In today's news, I'm going to cover something you may have seen pretty recently: a way of doing design that is becoming more and more widespread, including for people who are maybe not so familiar with the design side of things. And here I'm being very careful. I'm not a designer, I'm a user experience researcher, but I know a little bit about design practices, the software that designers use, and so on, so I know how AI can also affect this area. What I'm referring to is Google having made their vibe design tool, Google Stitch, more widespread and, to the best of my knowledge, better known. I don't know if you're familiar with it; if you're not, I encourage you to try it. On their blog they call it a partner in vibe design. We have Wall Street thinking that Stitch is a job killer, but of course, designers are not going anywhere.
We know about that, and it's usually the case with AI. I'd say that AI is really great at automating, or rather streamlining, some very manual tasks. But when you try to represent the tasks someone does in their job, you need to keep in mind that not everything is so manual. A user experience designer may move some pixels around on their screen, but that is not what they do all day. And that's the thing: these days there is a conflation between the manual tasks that AI could substitute and people's whole jobs, so people are scared, thinking it will take everything. If your whole job is just moving pixels around, then maybe, but I doubt anyone's job is really just that. If that is happening to you, I would suggest you learn new things and expand your scope and horizons. Anyway, coming back to Stitch: I have used it personally. Again, I'm not a designer, but it is really helpful for me in my specific circumstances, so take what I did with it for what it's worth. For those who don't know, Google Stitch is a program that lets you prompt whatever you want designed as a UI: you describe some screens, an app, or a website, and you get the result right away. And you have multiple ways to do it: you can ask via a text prompt, attach a screenshot, a sketch, or visual inspiration, add a design system, or prompt it with voice, which I think is the latest addition from Google.
So you will have your interface, your UI, created in a matter of seconds, and you can do a lot with it. Once it's generated, you can iterate, asking it to modify whatever you need. You can export what you just created in several ways: to code, to Figma layers, to a prototype, and so on. Many, many things can be done. As I'm talking, I'm looking at my Google Stitch interface with some personal projects I did, to try to describe what you can do. You can also generate multiple variations of your design, which is good when you want to decide between several options, and then create prototypes out of them. You can copy a UI by just pasting a screenshot and saying, "please prototype this for me." So this is really effective, of course. Again, I will comment on what I know, but there are all the things I don't know because I'm not a designer, and all the things I don't know that I don't know. So apologies in advance to all the designers listening. Please correct me in the comments, and reach out if you want to add something, or if you want to appear on this podcast and bring your experience; that would be really helpful to me. All I want is to learn and to spark some conversation. But anyway, as a researcher, what I can share is that, as always, not only in research but in everything AI, there is a lot of noise. That's the first thing I see. As soon as something new is potentially automatable or streamlinable, everyone talks about it, and everyone sees it as the only way forward. That might be true to some extent, and it's the same with research.
I would say that research could potentially be more impacted by LLMs than design, because with design, you prompt with some language, but then something needs to be created out of that, and many, many decisions are taken on top: the color, the design system, the font, the structure, the information architecture. There are many more decisions to be taken. Whereas with research, I'm not saying that everything is automatable, certainly not. But most often what we use are words. We create a research plan with words; we create a script with words; we interview people, listen to them, and transcribe things, and these are words; we analyze it and provide an analysis, also words; and so on. Not everything reduces to words, of course; that's not what I'm saying. I'm just saying it might be somewhat easier to incorporate LLMs into your workflows if you are a researcher than if you are a designer. That's my two cents, and again, this might shift and I might be totally wrong in a few months. So, what I'm seeing from the standpoint of a researcher: I have always wanted to do some personal projects, and sometimes freelance projects, as a designer, but I didn't always have the time to learn all the design tools, and not only the tools, because a tool is just a tool, but design theory. By that I mean: what choice of colors should I make, what font, what structure and orientation, and beyond that, the flow, what makes sense to put here versus there, you know? All of that.
I took some courses on the user experience side, but UI-wise I know almost nothing, nothing of what an expert graphic designer could tell me. So as a user experience researcher, I'd say this is a great help: it gives me a starting point and lets me visualize something more easily than if I had to hire a designer to do it for me. Some of the decisions a designer would take, maybe this tool can take. Some of them, not all of them. And that gives me something really useful when it comes to having a prototype. Say I want to study someone's behavior and test whether a given tool would help them achieve a given job: I can prototype it with Google Stitch right now. I don't have to ask a designer to do it, and for what it's worth, it will not be as good as what a designer would provide, but I probably don't need 80% good for the job I have to do. Maybe I just need 20 to 30% good to showcase an idea, and maybe that is enough to remove the blockers I would otherwise have around design. That's my perception as someone from outside the discipline. And again, these tools are really great at lowering the barriers to entry for someone who is not from a given discipline. For instance, if I'm not a financial planner, I could potentially use AI to lower my barriers to entry to financial planning. But I'm not saying it makes you an expert, because it doesn't absolve you from having to learn. When I use Google Stitch, I need to know what to prompt it with, what to ask. And there are all the things I don't know that I don't know, which could help me.
And only a designer would know those things. I could tell it, "look, I want you to make me a fitness app." It will do something, because it was trained on millions and millions of files and data from people, so it will do something average, but it will maybe not do what I have in mind. So I need to describe this fitness app. But how do I describe it? That is the job of a UX designer. As a researcher, I can only prompt it with the ideas I have, but I will not be an expert at prompting it because I'm not an expert at being a UX designer. So what does Stitch actually do? I would say Stitch helps with some percentage of the job, probably maybe 10% of what a UX designer does, maybe 5%, maybe less: all the things where you would think, "oh no, I don't want to do that, it's so manual, so repetitive." That's what Stitch is for, and that's what LLMs are for: to help you do the stuff you do not want to do so that you can do something else. That's it. It doesn't take away the whole thinking that a UX designer should have, of course. So that's my two cents about all this. When I heard the whole thing about Figma potentially being destroyed by Stitch, honestly, it made me laugh, because, I don't know, are we comparing apples to apples? I'm not sure. It's especially funny because we have former UX designers or product designers, and even people still in the field, commenting that Figma will be taken over by Google Stitch. If you're a good UX designer or product designer, you know you cannot reduce UX design to the tool you use. It's not that. And on top of that, I have used both.
Again, I'm not a UX designer, but I have used both, and they are not the same tool; they don't answer the same needs. With one, you prompt it and get an answer quickly, an average answer, and you need to prompt it again to iterate. That's it. If I'm not mistaken, with Stitch you cannot move things around, and for some purposes you need to move things around manually, because it would take you less time than prompting it again and again. So anyway, that's my two cents on this news regarding Google Stitch. Then we have something else, about Microsoft. Let me read this out briefly. Microsoft announced on Friday a series of changes focused on improving the quality of Windows 11, dialing back the number of entry points to its AI assistant, Copilot. So apparently Windows was trying to shove the Copilot AI integrations down your throat: in Photos, widgets, Notepad, and so on. Now the goal is integrating AI where it's most meaningful; that's what the EVP of Windows and Devices said they wanted to do, to focus on AI experiences that are genuinely useful. Apparently that's why they rolled back all of these integrations. They now have a "less is more" approach to integrating AI into existing platforms, after growing consumer pushback against "AI bloat". I'm a Mac user, so I don't know specifically how AI has been added to Windows; let me know your thoughts. So yeah, I haven't read the whole article, but I might say one thing. How can I put this: we should try to be tolerant towards these companies pushing AI everywhere, because they are experimenting. I'm not saying we should accept everything they do.
I think we should give feedback, and if we don't want to use a feature just because it's AI, we should communicate that, right? So I'd say we are entering very funny times. There are huge amounts of investment in AI, so companies somewhat have to showcase it everywhere, and so this is not surprising to me. Not surprising at all. We put the technology before the user needs, and that's why I entered the field of user experience research and not, say, investing. Again, we think AI will solve all people's problems; it's not the case. This is proof: you had pushback from consumers because they don't want AI everywhere and because they want to do some things manually. So let people do things the way they want to do them. It doesn't mean you cannot be disruptive, because sometimes you have to be disruptive: you have to offer things that people do not imagine can exist so that they can change their mental models. That was the case when the iPhone appeared. Before, we had an MP3 player, a phone, and internet access in three different places, and then in Steve Jobs's keynote, the revolution was that everything was going to be in the same place. We had no mental model for that before, and that's how the smartphone started to exist, right? So I'd say we have to be tolerant towards, or try to be understanding of, these companies, because they try as much as they can. But at the same time, we have to remind ourselves of our needs and our right to have technology suit those needs, nothing else. I don't want to be forced to use something that does not meet my needs. That's it.
Just because something has 10% useful features, I shouldn't be forced to use the 90% useless features of the product. So that's my understanding, my reading of this news. Then we have a third one, about AI note-taking devices that can record and transcribe your meetings. Apparently there is a product called PlotNotes, with a PlotNote Pro. This is a credit-card-sized note-taker that has been around since 2023, with a newer AI-powered Pro version that has a small screen, four mics, and records audio within 3 to 5 meters. We have the Mobvoitic Note (apparently; I'm not affiliated with any of these brands, I'm just reading an article), the Comulitech Note Pro, the Plot Note Pin, and so on. We even have pendants. Wow, I'm discovering that right now. There's an OMI pendant, a cheaper alternative to other note-takers at $89; the price is lower because the pendant has to be connected to your phone and doesn't have any onboard memory. So these are basically products that can record conversations and transcribe them. Oh, we even have earbuds that allow for transcription during calls. So, okay. I haven't read all the implications of this, but reacting quickly, in the heat of the moment: I'm so surprised. We are living in very, very strange times. For what it's worth, I'm about to share an opinion that may be controversial and unpopular. I think I remember, some time ago, when we had the first smartphones, maybe even on BlackBerrys (I'm not sure, to be honest, so correct me if this is wrong), that some phones, maybe 10 or 15 years ago, had the ability to record phone calls. And I remember this was disabled at some point because it was deemed illegal and an infringement of privacy. So I'm so surprised.
Why, now in 2026, is recording someone during a call and transcribing them more tolerated than it was some years ago, when it was not? And right now, let me tell you, we don't only have external devices recording to memory, transcribing, and analyzing. We also have software like Read AI, Fireflies.ai, and Granola, which can record and transcribe online meetings. We even have Notion AI. Now, I'm not against the technology; I think it's great technology, and it's good if your meeting attendees know that you are recording them. But this is a double-edged sword, let me tell you. With these tools so widespread, I think it will have the same effect as when you tell someone you don't have social media or a smartphone: you are seen as an alien. In 10 years, if you say you don't want to be recorded, or that you don't use any recording or transcribing device, you will get the same reaction. And actually, I do think that in some meetings it's good to know that no one is recording and there is no way to transcribe: I have to take notes by hand. I would challenge everyone to think about the difference it makes to your brain to take notes by hand versus having someone take notes for you and synthesize everything. I would bet it makes a difference to your ability to gather your thoughts, summarize what you heard, and be productive. And so I'm really surprised that this is becoming so popular and that we are accepting it in our day-to-day without even asking.
I have joined so many meetings where, by default, there is an AI companion, or Fireflies, or Granola, or Notion AI recording your voice and transcribing it. And the thing behind that is that we need to be careful: this data is going to some servers. If you do not self-host the technology, your voice is being recorded and going somewhere. I don't care much about my own voice by now, to be honest; if you can all listen to my podcast, it is already on some servers. But I'm just wondering, philosophically speaking, what makes 2026 different from, say, 10 years ago, when we said, "no, no, it's forbidden to record someone and transcribe what they're saying", versus now, when we embed these technologies in products and even in software, and it's becoming so widespread? I'm really surprised. Why is it so different? And you cannot tell me it's because we didn't have the technology. We have had the technology to record for a long time (well, not always, but for a long time); we just couldn't analyze it through AI. So I'm wondering: why did the emergence of AI also enable the emergence of recording and transcribing? Really, I'm so surprised. And I leave that as an open question to my audience: do we think that recording someone in 2026 is less debatable than 10 years ago? That's my only question. And so that's it for today. As you saw, this episode is a bit different: I'm reacting to some news I see every day about the intersection between AI, product design, user experience research, and so on. If you like this format, please consider subscribing, and consider activating the notification bell so that you can hear the next episodes.
And I hope to see you in the next one. Until then, take care. Bye bye.