UX - The User Experience Podcast

UX and AI Digest 4 - AI Interface Design at Hark, Who’s Accountable When AI Fails & ChatGPT Shopping

Jeremy



🎨 Former Apple Designer Building a New AI Interface at Hark — TechCrunch

  • Brett Adcock is betting that hardware design and AI need to evolve together — the way we interact with intelligent software shouldn’t just be a chatbox bolted onto existing devices
  • What resonated: we are still using the same computers and smartphones even as AI transforms what’s possible — the interface layer hasn’t caught up

  • Hark’s position is interesting: they’re explicitly not building wearables, not putting a layer between humanity and the interfaces we use in the world — so what are they building? I’m curious
  • The reminder here for me is simple: even with AI, you start with user needs, then you figure out what to build, then how to design it — the magic of the technology doesn’t change that order
    🔗 https://techcrunch.com/2026/03/24/meet-the-former-apple-designer-building-a-new-ai-interface-at-hark/

⚠️ When AI Experiences Fail, Who Is Held Accountable? — UX Collective

  • This article opens with a case I find genuinely baffling: a man’s father died, he asked Air Canada’s chatbot about bereavement fares, got wrong information, booked accordingly, and the company’s initial defense was that the chatbot is a separate legal entity responsible for its own actions
  • A tribunal had to formally rule that a company is responsible for its own website — that shouldn’t require a tribunal
  • The core design challenge: LLMs are non-deterministic — the same question gets a different answer every time, and communicating that uncertainty to end users is genuinely hard
  • The chain of accountability is long: designer, product manager, vendor, company — and when something goes wrong, everyone points at everyone else
  • Don Norman’s framing stuck with me — designers are both culpable and structurally constrained, because they’re also inside the system, doing what they’re asked to do
  • Jared Spool goes further: if you create something that can be misused, that’s no better than a doctor not washing their hands — the profession is stuck between those two positions
  • AIGA’s standards of professional practice haven’t been updated since 2010 and contain no language on AI — the legal frameworks are lagging badly behind the technology
  • My take: articles like this one are exactly why research matters more, not less — the more uncertainty the technology introduces, the more you need to understand your users and design for failure states
    🔗 https://uxdesign.cc/when-ai-experiences-fail-who-is-held-accountable-3f07ce9e6032?source=rss----138adf9c44c---4
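The non-determinism point above can be made concrete with a toy sketch. This is not a real LLM: the tokens and probabilities below are invented for illustration. It shows the two decoding regimes the article's challenge hinges on: sampling (the same prompt can yield different answers) versus greedy decoding (repeatable, temperature approaching zero).

```python
import random

# Hypothetical next-token distribution a model might produce for one prompt.
# Tokens and probabilities are made up for illustration only.
TOKENS = ["yes", "no", "maybe", "contact support"]
PROBS = [0.45, 0.30, 0.15, 0.10]

def sample_answer(rng: random.Random) -> str:
    """Temperature-1 sampling: the same prompt can yield different answers."""
    return rng.choices(TOKENS, weights=PROBS, k=1)[0]

def greedy_answer() -> str:
    """Greedy decoding (temperature -> 0): always the single most likely token."""
    return max(zip(PROBS, TOKENS))[1]

if __name__ == "__main__":
    rng = random.Random()  # unseeded: varies run to run
    print([sample_answer(rng) for _ in range(5)])  # answers may differ across runs
    print(greedy_answer())  # always the top-probability token
```

Even with greedy decoding, real serving stacks can still vary run to run (batching, floating-point order), so the uncertainty the article describes does not fully disappear at temperature zero.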

🛒 ChatGPT Is Now Powering Product Discovery — OpenAI

  • OpenAI announced richer shopping experiences inside ChatGPT — natural language product search, in-chat comparisons, prices, descriptions, and direct purchase flows
  • Having spent time in e-commerce, I find this genuinely disruptive — but I also want to push back on the framing that this replaces all other ways of shopping
  • People shop in lots of ways for lots of reasons: touching a product, comparing in-store, shopping socially with friends, going directly to a brand they already trust — chat doesn’t serve all of those
  • Two questions I don’t have answers to yet: how impartial is the chatbot when it decides which products to surface? And how do sellers optimise for being recommended by AI rather than ranked by Google? (AEO, usually expanded as answer engine optimisation, seems to be the emerging term for this)
  • The accountability point from the second article applies here too: what happens when ChatGPT recommends the wrong product and the purchase goes wrong? Who in that chain answers for it?
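One concrete lever for the AEO question above is publishing machine-readable product data. Below is a minimal sketch using schema.org's real Product/Offer vocabulary; the product values are hypothetical, and whether AI shopping agents consume exactly this format is an assumption, not something the article confirms.

```python
import json

# Minimal schema.org Product/Offer markup: the kind of machine-readable
# listing a seller might publish so search engines, and possibly AI shopping
# agents, can parse name, price, and availability reliably.
# All product values below are invented for illustration.
product_jsonld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Trailhead 40L Backpack",
    "description": "Water-resistant 40-litre hiking backpack.",
    "sku": "TH-40-GRN",
    "offers": {
        "@type": "Offer",
        "price": "89.90",
        "priceCurrency": "EUR",
        "availability": "https://schema.org/InStock",
    },
}

# Serialized as it would appear inside a <script type="application/ld+json"> tag.
print(json.dumps(product_jsonld, indent=2))
```

The design point: structured data shifts the seller's optimisation problem from keyword ranking to being unambiguous and parseable, which matches the transcript's observation that output formats become standardized.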


Transcript

Today we'll cover the former Apple designer building a new AI interface at Hark, who we should blame when AI experiences fail, and finally, what is the future of e-commerce with AI?

Welcome back. Jeremy here, happy to be back. Today we'll cover three articles at the very least, and if I have time, two more. But those last two I haven't had time to read or review yet, so if I do get to them, that will be live with you.

Let's first cover the former Apple designer building a new interface at Hark. This is an article I found on TechCrunch, about serial entrepreneur Brett Adcock sharing some details about what he believes will be the association between hardware design and AI, creating new ways for humans to interact with intelligent software. I like this article because it offers a good perspective on the idea that user experience should be at the center of our considerations when we design anything. Even with AI, people tend to think there's no way to consider how to design the experience, because AI will be magical and cover everything. It isn't, and it won't. In the end, you should always consider the users' needs first and foremost, then what we can put in place to address them, and then how. That's the question covered in the article.

Several points are covered. For instance, he says that what was very clear to him at the time is that the world is clearly changing, but we are using the same devices. And that's true: right now we are still using the same computers, the same smartphones, and I too believe that at some point the way we use them will change.
They mention, for instance, that it's still awkward to have to fill out forms every day, share information between devices, or go through mundane tasks like booking travel or planning a home renovation. So my understanding is that they are working on something to revamp this interaction we have with technology. At the same time, they're apparently not thinking about wearables that sit between humans and the final outcome, like Meta's glasses. Charles Dury says: "I'm not the biggest believer in a lot of the wearable AI platforms people are talking about. I don't think it's appropriate to put a layer between humanity and the interfaces we use in the world." So I am curious what they are thinking about, but ultimately this is a reminder that the what, the technology, is as important as the how, the design that puts it in front of a consumer. So it's basically an overview, and basically just the news that we have this Apple designer building AI interfaces at Hark.

Ultimately, I think this goes hand in hand with the next article, which is about what we do when AI experiences fail. Before even covering what's in the article, I like to think about this type of question, about what we call in user experience the unhappy path. For those who aren't familiar, the unhappy path is this: if I have a task to do and a goal in mind, and I use a product to accomplish that task, usually things will go right. Say I use Gmail to send an email: I click the button to send, and it's sent. What could be the unhappy path?
In this case, the unhappy path could be that I momentarily lose internet access, and when I click send, I don't know where my email is. Or it could be the payment step while you check out on an e-commerce website: imagine your credit card is not accepted for whatever reason. That's also an unhappy path, because it's not what's desirable for the user, and we should design for the unhappy path. This is really important, something to take into account, and this article covers that idea: when AI experiences fail, who ultimately should be held accountable? We can think about all the steps we go through when designing technology.

A side note here. I want to reflect on something, and I was wondering where to place it in this podcast, because it's an idea I feel the need to share with the world, even if I'm wrong. My hypothesis: for whoever is anxious about AI displacing jobs, it will, of course. This is radically transforming society in ways we cannot even fathom right now. But I think it will be a transformation in the way we work, that's it. Several things make me think that. The first is that it has always been the case: we have always had work to do as humans, and we always will. That's my hypothesis, and it's a big assumption; I may be wrong, and we'd need to define terms like what "work" even means. But ultimately, we have capable humans able to perform tasks, and we have tasks to be performed; whatever those tasks are, there may be displacement.
For instance, some tasks you were doing before when crafting a presentation, like generating the text that goes into it, can now be done with AI. That means there's a bit of shuffling of tasks, but you still have work to do; you still have to instruct the AI. So we will see phenomena of compression and expansion: you compress some of the tasks you were doing before so the AI can do them, and the rest expands, leaving you space to do something else. I do believe it will go this way. That's the first reason.

The second reason is articles like this one: when AI experiences fail, who is held accountable? If we take a step back and think about the why of this article, there's a whole history behind it: there are cases in which AI failed, with real consequences. That means the design of AI experiences is not so obvious. We need to think about what good design should be, and how we can account for failures, and so on. This is something emerging, something that needs people working on it. And then there are all the ramifications: designers, people who train the models, product managers. There is work to do. You might tell me: what if the models become so performant they can do all of our tasks? I'll answer: I don't know. With the internet, we also thought that would be the case, maybe not to the same extent. I don't mean to compare things that aren't comparable; I'm just thinking out loud.
We still had to design and redesign UIs every time, even though we thought the internet would free up workload for other tasks; in the end, it was just a rearrangement of our day-to-day work. Anyway, that's a side note I wanted to share at some point, my two cents. Maybe in ten years I'll re-listen to this episode and compare it with reality. But I do think there will always be work for people willing to adopt new technologies, because that's how the world has always worked.

Now, covering this article. It first presents the ways an AI can go wrong. For instance: a man's father had just passed away, and he needed to book a last-minute flight. He went to Air Canada's website, found the chatbot, and asked about bereavement fares. The bot gave him instructions, he followed them, he booked his tickets, and the information was wrong. When challenged, the company's defense was that the chatbot is a separate legal entity responsible for its own actions. Really interesting; that's baffling to me. A tribunal then had to rule, formally and legally, that a company is responsible for its own website.

The article starts with that, then notes that there is a long chain of actors involved in creating this kind of technology, like any other technology. The problem with AI, or the challenge, let's say, is that LLMs at least are non-deterministic; they're probabilistic. If you ask the same question twice, you will get a different answer. Well, then you might ask me: how different?
Well, it depends on the level of resolution you want, but ultimately the answer will never be 100% the same twice. So it's really difficult to communicate that uncertainty to the end user, and everyone in the chain also has to manage it: the designer, the product manager, the vendor, the company, and so on. With so many actors, we can imagine several scenarios when this goes wrong: everyone pointing the finger at everyone else, or the company saying "we told you, consumer, that we could make mistakes; you used it anyway."

There are several stories shared in the Medium article, which I'll link in the description. A chatbot replaced human counselors and gave dangerous advice. A city chatbot gave illegal advice. An algorithm rejected 1.1 billion job applications. Can we blame people for using these kinds of tools? Who would be to blame? The world is about efficiency, doing more every day, productivity; that's how it works, at least right now. I can only imagine a company with thousands and thousands of applications, because ultimately this is a system; this is systems thinking. If you introduce a technology that increases efficiency somewhere in an ecosystem, at some point you have to rebalance. Imagine all the applicants use AI to increase their efficiency: you get more applications, and you need to filter them. If you use only humans to filter them, I don't know to what extent it's possible to keep the same review rate, because volume has increased; if you want the rate to remain constant, you have to increase your leverage. There is no other option.
Imagine that instead of receiving 200 applications per day, you receive 1,000 per day, and you still want to achieve the same rate. You need to use the same tools the applicants are using, so ultimately people will use AI to filter applications as well. But the risk, as we can see in this article, is an algorithm rejecting 1.1 billion job applications. Apparently that was Workday, and apparently it was linked to discrimination, as I'm reading in the article. I don't know the specifics; I'm just saying there are risks, of course. And there is the sad story of the teenager who died. It's really, really difficult. There are legal implications, and I am not a pro on these topics; I'm just pointing out that design and user experience are so important, and people tend to forget that.

Right now, I don't know if the profession has an answer. The article says that the standards of professional practice of the AIGA, the professional association for design, have not been updated since 2010 and contain no language addressing AI. As we know, it's tricky, because technology often precedes its legal implications, which have not yet been defined. And I like this quote in the article from Don Norman: designers have always been both culpable and structurally constrained, because they are also victims; they are part of an infrastructure, and they do what they are asked to do. Ultimately, we have product managers wishing and working to make products, and they rely on designers for the design. But even if we don't think in terms of job roles, we can think in terms of function.
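The screening arithmetic above can be made explicit with a quick back-of-the-envelope. All figures here (200 and 1,000 applications per day, 50 reviews per reviewer per day) are hypothetical numbers from the discussion, not data from the article.

```python
# Back-of-the-envelope for the screening example: if applicants use AI and
# volume jumps from 200 to 1,000 applications/day, how much reviewer
# capacity keeps full coverage at the same review rate?

def reviewers_needed(apps_per_day: int, reviews_per_reviewer: int = 50) -> int:
    """Reviewers required to screen every application (ceiling division)."""
    return -(-apps_per_day // reviews_per_reviewer)

before = reviewers_needed(200)   # 4 reviewers at 50 reviews/day each
after = reviewers_needed(1000)   # 20 reviewers for the same coverage
print(before, after)             # human capacity must scale 5x with volume
```

That 5x gap is exactly the pressure that pushes companies toward algorithmic filtering, which is where the accountability questions in the article begin.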
Even if you build your own product, at some point you will be both the product designer and the product manager, but the design will still have to serve both the user's needs and the business needs. You have a business need, which is to make money, because otherwise your business doesn't exist. We can also say that you make money because you serve people, so money is secondary; but if you want to continue serving people, you need to be profitable. So the design sits in the middle; that's my understanding of what Norman is saying, the middle level of the infrastructure.

And we have another quote, from Jared Spool: if we create something that can be misused, that is no better than a doctor not washing their hands and infecting a patient when it could have been prevented. The article says both are right, and the distance between those two positions is exactly where the profession is stuck. That's really fascinating. I don't have many more words on that topic, because at my level, as a user experience researcher, I can just re-emphasize again and again: conduct research as much as possible, study your users' needs, how they use technology, what they expect of it. The more data you acquire from your users, the less complicated it is to address them. Then you'll have other difficulties, like the technology, the tools you have, the legal implications, and so on. This is collaborative work, and it's not because AI has arrived that all of these questions go away. On the contrary, as we can see, the implications are huge. These kinds of articles really re-emphasize, for me, the need for research.
You might say research is dead because of AI, or at least that some aspects will be done differently because of AI. But as we can see, it's kind of a Russian doll problem: if you use AI to do research to improve AI-based features for your users, you introduce even more uncertainty into the process. So even researchers who are users of AI need to be careful about the implications of that uncertainty. At least for now, I don't know to what extent we will solve this probabilistic aspect. Anyway, I really like this kind of article because it challenges our way of thinking, and I really encourage you to read it.

Then, what do we have? We have the OpenAI announcement that they will power e-commerce inside ChatGPT. I'm kind of surprised, because I thought that was already the case, to be honest; I thought we could already purchase products in ChatGPT. But apparently they shared an article from March 24th, 2026, saying they want to power richer shopping experiences. For instance, you can search in natural language, compare products, and see products directly in the chat with price, description, and so on. That is fascinating to me. I worked in e-commerce for about a year, and it's really fast-paced; even at a corporate, it can be faster-paced than other kinds of corporate work, because it's tied more directly to revenue than other steps in the process.

So that is fascinating, because it really disrupts the way we purchase products. But I would also challenge this view. Yes, you can search and compare products directly in the chat, but again I would ask: does it fit all personas? I don't know. I'm not sure.
Right now we have many ways to purchase products. You can purchase online by visiting websites. You can do it through a chat. You can pay someone to do the shopping for you. You can do the shopping yourself, physically. All of that answers different needs: you might want to touch the product first, compare physically, or make shopping part of a good time with friends. It's good to have these options. So I would just say that this is one of the ways we purchase things; I'm not sure it fits all needs. Sometimes, marketing-wise, I think it's presented as the one new way that will substitute for all the other ways that supposedly don't fit anymore. I don't know; that's my feeling. I think it's one more way.

But for sure, e-commerce will have to adapt, because now it has to. That's the fascinating aspect of technology when it improves: it affects so many actors. Consumers are affected, and that's a good thing for consumers, but even for them I would challenge it, because it introduces new challenges. Do you really want to purchase a product through chat and iteration, refining when the chatbot doesn't understand you, whereas when you know the brand, you can go directly to its website? How do you manage the filtering of the brands and products the chatbot gives you? Are you really sure how impartial it is? Are you really sure the chatbot is unbiased? I'm just wondering.
That's the consumer side, but we can imagine this also impacts the sellers, of course. You need to work on your ability to be showcased by the chatbot. In the same way we had SEO for Google, I think it's now called AEO; I'm not sure, sorry if that's wrong, but it's the parallel of SEO for agents and AI: optimizing how your product is presented and the probability of it being surfaced by a client like ChatGPT. That's also a challenge. How do you optimize for that? How do you present yourself in the space you have, given that the way you output your information is somewhat standardized?

So, a lot of challenges and a lot of possibilities. And this is complementary to the second article I reviewed: what do you do when it goes wrong? Imagine it presents you the wrong product, you purchase, and the purchase goes wrong. I haven't thought through all the possibilities, but we can think about that. And then, in parallel with the first article: what is the future of purchasing if we don't think about a smartphone at all? Do we do it through voice?

By the way, a side note; I really want to give another two cents today. I don't believe in a future in which we don't have a visual modality to interact with technology. I may be wrong, but come on. The visual modality takes up maybe 60 to 70 percent of our perception; I don't know if I'm mistaken, I think this is a broad figure from my degree in neuroscience. We were told that a lot of our perception is drawn from visual information.
If you take that away, it's as if you deprived all humans of this great sensory channel. I do believe that to make great technology we need to study users' needs, their perception, their psychology, and adapt the technology to them so that it's as easy to use as possible; this is something covered by the first article, by the way. And if the majority of our perception is visual, I cannot imagine an interaction with technology that will not be visual. I'm saying that because a lot of people now think that, with voice and AI processing, visual interaction with a smartphone may not be necessary anymore. Maybe not with smartphones specifically, but I do believe that being able to control your interface and interact with it visually is really important. That's just my two cents; I may be wrong. Let's see in 50 years, maybe 100; I would be really curious.

Anyway, that's it for today's episode. Thank you for tuning in. Sorry, by the way, for the audio; I think it was a bit rough today, maybe because I've been away from home these two days. Thank you for listening, and see you in the next episode. Bye bye.