The Product Experience

How to launch AI-powered products with Todd Olson (CEO of Pendo)

Mind the Product

Join us on this week's podcast episode with Todd Olson, CEO and Co-Founder of Pendo.
We venture into the world of AI, software, and how launching AI-powered products can generate real outcomes for your business and customers. 

Featured Links: Follow Todd on LinkedIn | Pendo | Register for AI Academy Lisbon 2024 

Our Hosts
Lily Smith
enjoys working as a consultant product manager with early-stage and growing startups and as a mentor to other product managers. She’s currently Chief Product Officer at BBC Maestro, and has spent 13 years in the tech industry working with startups in the SaaS and mobile space. She’s worked on a diverse range of products – leading the product teams through discovery, prototyping, testing and delivery. Lily also founded ProductTank Bristol and runs ProductCamp in Bristol and Bath.

Randy Silver is a Leadership & Product Coach and Consultant. He gets teams unstuck, helping you to supercharge your results. Randy's held interim CPO and Leadership roles at scale-ups and SMEs, advised start-ups, and been Head of Product at HSBC and Sainsbury's. He participated in Silicon Valley Product Group's Coaching the Coaches forum, and speaks frequently at conferences and events. You can join one of the communities he runs for CPOs (CPO Circles), Product Managers (Product In the {A}ether) and Product Coaches. He's the author of What Do We Do Now? A Product Manager's Guide to Strategy in the Time of COVID-19. A recovering music journalist and editor, Randy also launched Amazon's music stores in the US & UK.

Speaker 1:

Hey Lily, it's time for us to do another episode on AI and product.

Speaker 2:

I like these episodes, Randy. I mean, when's the last time that something really changed the way that we do our jobs on a regular basis? It's exciting, but promise me we're not going to do the thing that all the other podcasts have done when they discuss AI.

Speaker 1:

You mean ask one of the AIs to write our intro, and then joke about how we're either out of a job or make fun of it for how wrong it gets the tone?

Speaker 2:

Yeah, exactly that. I still think I can spot one of your jokes a mile away.

Speaker 1:

I think I'm flattered by that. I'm going to choose to be flattered by that anyway. So who are we chatting with this week?

Speaker 2:

Well, there are two types of people that are good to talk to about this topic: people whose companies are doing things with AI, and people whose companies serve others who are doing things with it. So we've got someone who does both.

Speaker 1:

Yeah, Pendo's been doing some good stuff in this space, but their CEO, Todd Olson, also spends a lot of time talking to others about what they're doing and what they need. So we asked Todd to come on and share some of what he's learned, and we'll do it right after this music. The Product Experience is brought to you by Mind the Product. Every week on the podcast, we talk to the best product people from around the globe. Visit...

Speaker 2:

mindtheproduct.com to catch up on past episodes and discover loads of free resources to help you with your product practice. You can also find more information about Mind the Product's conferences and their great training opportunities happening around the world and online.

Speaker 1:

Create a free account on the website for a fully personalized experience and to get access to the full library of awesome content and the weekly curated newsletter. Mind the Product also offers free ProductTank meetups in more than 200 cities; there's probably one near you. Todd, thank you so much for joining us again. It's been a while since we had a chance to talk. How are you doing?

Speaker 3:

I'm doing great. It's great to be here. Thanks for having me.

Speaker 1:

So for anyone who doesn't know, can you do a quick introduction? Who are you, and what's the two-minute version of the Pendo story?

Speaker 3:

Yeah, absolutely. I am Todd Olson, CEO and co-founder of Pendo. Pendo is roughly 10 years old now; we hit our 10th birthday a few months ago. Our mission is to improve society's experiences with software. All of us use software all day long, and we want to make sure that software is actually delivering value to users, that the software does what its users want it to do. So we have a broad platform with a variety of capabilities that empower product teams to make sure software delivers on that value.

Speaker 1:

Great. So we're here today because you did a talk at Web Summit a few months back about some of your experiences, and some of Pendo's experiences, with AI and some of the things that you've learned from it, and we wanted to dig into that. What I really liked about it was, well, I'll flatter myself: I wrote a book a couple of years ago called What Do We Do Now?, and that was pretty much the theme of your talk. It was going back to basics: what do we do? Throwing away the roadmap, re-prioritizing, going back to basics. What does this look like to you? How do you even start with AI?

Speaker 3:

Yeah, so there's a variety of pieces to what we did, and I'll go back even a little bit farther. We had actually hired for and started a machine learning effort a year and a half, two years ago. We had a team in Israel working on a handful of projects, and they were delivering value. But of course, when ChatGPT hit the market, it totally shifted our thinking and, frankly, the state of the art; what we now had available at our fingertips was just fundamentally different than before. At that point our engineers, our people, all of us were playing around with this technology, probably like everyone listening was: wow, it can do this, and it can do that. And what we started seeing is a lot of people just using AI randomly, oh, it could do this or it could do that. I started seeing little features pop out, and I call them 'AI for AI's sake'. So, we can drop this into an LLM and get a summary out of it. Well, do our users need a summary of these things? 'Oh, we hadn't thought about that, but it's really, really cool, isn't it, Todd?' So one of the things I realized was that if we don't organize it and create a strategy and a vision around how AI can help improve our product experience and help our customers, we're going to get technology for technology's sake, just like lots of other trends we've seen in the past.

Speaker 3:

And AI is no different from any amazing technology transformation we've had. Remember the early days of mobile: the number of things that did not belong as a mobile app, that we all downloaded, and then a bunch of them went away, because, okay, we're never going to do that on our phones. It was kind of ridiculous, but it was cool, right? To this day, Pendo still does not have a mobile app because, to be honest, most people are using us in front of a computer. Although, by the way, we had one for a very short period of time, because it was cool and we could build it. So AI is really no different.

Speaker 3:

So I think step one was getting our leadership team into a brainstorming room. We did an off-site, like four and a half hours, asking: what are all the possibilities, and what do our customers really want from us, from Pendo? You've got to think: other vendors will do other things, and we should let them do those things. They may not exist yet, but assuming they're having their own off-sites, we don't want to go in their direction. What are the things that we, and only we, have? And we went back to some of the basics and focused on things like jobs-to-be-done: what job are we actually automating? The whole concept of generative AI is automation. What are we trying to automate? What are the really painful things that we think this technology can automate? How painful are those things? Is it valuable to the user? So I think that's where we started. We started with that vision.

Speaker 3:

The second major thing we did, now that we had a vision, was to unlock creativity for our engineering team.

Speaker 3:

The reality is, if it's just me, our head of product and three other executives coming up with an AI strategy for this brand-new technology, it's just not going to be as good as if you enlist the support of all the really smart people here at Pendo.

Speaker 3:

So we did a hackathon. Hackathons are primarily product and engineering events at Pendo, but they often bleed into other organizations; really, anyone who wants to join can join, and there's lots of swag. We decided that we didn't want to force people to do AI projects; that didn't feel aligned with our culture and what the hackathon was about. However, we do give out cash prizes for the winners, and we said we were going to give out cash prizes only for AI projects. I will say someone did something non-AI that was amazingly cool, and we still paid the cash prize for that, because it was just too cool not to pay. But after that, we had 38 examples of how our products could use AI.

Speaker 3:

So we kicked off the hackathon and we shared the vision. We said, hey, please, please, please innovate in these areas; you don't have to, do whatever you want. And we got 38 examples. Maybe not all of them aligned with our vision, but a bunch did, and then we started to find the ones that we felt really aligned with it well, and we started pursuing those. That's kind of how we started last year, and it's where we are today. This year, just for context, we're working on a huge AI project that will launch later this year, one we came up with a year and a half ago... no, sorry, not a half, just a year ago, roughly, and now we're starting to build it. It's a combination of LLMs and our own internal models, so it's highly sophisticated stuff.

Speaker 2:

It's interesting when I hear you talk about it. You mentioned machine learning early on, but when you talk about the vision for AI, are you talking specifically about the use of ChatGPT and large language models, or is it still just anything?

Speaker 3:

Yeah. You know, when Apple released their Vision Pro, they used the term machine learning instead of AI. They did it because they're Apple and they can kind of do whatever they want, so they decided to use their own term. I think people do associate AI specifically with generative AI, with GPT or ChatGPT-like applications, or LLMs specifically. When we think about our strategy, it is broader than that, and it's a hybrid approach.

Speaker 3:

Yes, we leverage LLMs, because they're amazing at certain things, and that was one of the first areas where we started to innovate within our products. The nice thing about LLMs, too, is that they're widely available as APIs: just call an API, you get back an answer, and you can integrate it. What that meant for us is that I didn't need a dedicated team of AI people. I could tell every team at Pendo working on any feature: hey, if you have an application for LLMs, just try them, use them. And that's where a lot of the early innovation happened, because it was kind of easy. Now, look, there are cost implications, privacy implications, legal implications. Shipping this as a piece of software that people buy is way harder than playing around with GPT in your spare time. So with anything we build, we had to think all of that through. The cost is a really fascinating piece too; anyone doing cloud computing has to think about that.
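To make "just call an API" concrete, here is a minimal sketch in Python using the OpenAI client library. The model name and prompt are illustrative assumptions, not details of Pendo's implementation.

```python
# Minimal sketch: calling a hosted LLM API to summarize one piece of
# qualitative feedback. The model and prompt are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

feedback = "The export flow takes too many clicks and the CSV loses my filters."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any chat-capable model would do
    messages=[
        {"role": "system", "content": "Summarize user feedback in one sentence."},
        {"role": "user", "content": feedback},
    ],
)
print(response.choices[0].message.content)
```

That really is the whole integration surface, which is why, as Todd says, any feature team could experiment without a dedicated AI group.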

Speaker 3:

But within our AI work we also have our own internal models, and we leverage those models for generative applications: generating a set of insights that people should be looking at and triaging, for instance. Or, we're working on a capability right now that's going to basically auto-generate a PLG campaign. That's our big release next year, and it's a combo of the LLMs, but I think the secret sauce is really our internal models, which are really tiny models.

Speaker 3:

And I think tiny models, lots of tiny models that are purpose-built for specific applications, will ultimately have a huge impact on the industry. What a tiny model means for us is that we take all of the usage data we have, all the product data, we shove it into a model, and we use it to say: okay, if I do these things and I generate this, what is the predicted outcome? So we're leveraging our unique data set to do generative things, and to me that's super exciting, because it's very different. Anyone can use an LLM in their product, but no one has our data. So if we find ways to leverage that data to drive real value and solve real pain, a core job-to-be-done for customers, it's super unique.
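As a rough illustration of the "tiny model" idea, here is a minimal sketch that trains a small predictive model on tabular usage data. The columns, the outcome label, and the choice of library are assumptions made for the example, not Pendo's architecture.

```python
# Minimal sketch of a "tiny model" purpose-built on product usage data:
# predict an outcome (here, feature adoption) from behavioral features.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Invented usage data; a real pipeline would pull this from product analytics.
usage = pd.DataFrame({
    "sessions_last_30d": [3, 42, 17, 1, 28, 55, 9, 33],
    "guides_seen":       [0,  5,  2, 0,  4,  7, 1,  3],
    "features_touched":  [2, 14,  8, 1, 11, 19, 4, 12],
    "adopted_feature":   [0,  1,  1, 0,  1,  1, 0,  1],  # outcome label
})

X = usage.drop(columns="adopted_feature")
y = usage["adopted_feature"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)
# Predicted probability that a given usage pattern leads to adoption.
print(model.predict_proba(X_test)[:, 1])
```

The point is not the particular classifier; it's that a small model trained on data only you have can power predictions that no off-the-shelf LLM can replicate.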

Speaker 1:

Let's dig into that a little bit more, the practical side of developing with this technology, because I've seen it so many times in my career: something new comes out and people think it's like sprinkling magic pixie dust. They start with the technology rather than the business case, or the thing that's going to make them special. So I'm curious, how do you go about this? How do you determine where this is potentially going to add value and what you want to do with it?

Speaker 3:

Yeah, and isn't it funny that all the AI features literally have magic wands and pixie dust as their icons? It's even heightened in this new world because, literally, that is now the visual identifier for AI: a magic wand with sparkles coming out of it, which is cool-looking and makes me want to hit it, but it can also lead to some honestly useless product design. So I often start with the jobs-to-be-done that we're solving: what is the pain that we're automating or resolving for customers? And I go back to our core ICP and personas, and our core persona is the product manager. What are LLMs good at? Processing qualitative data; specifically, English-language data is what they're particularly good at. So it's, okay, what part of a product manager's job involves handling significant amounts of qualitative or textual data? The first thing is surveys. It's customer feedback, it's feature requests, it's NPS answers and quals. So that is the first place we started with LLMs, because it was a perfect application for them. I was talking to a product manager the other day at a large company, and it was a shocking figure, but he said he gets 10,000 feature requests a month from customers. 10,000. He drops them all into an Excel spreadsheet and searches for terms like 'ease of use' or 'export' or 'integrations', looking for trends. Now, no human being can look at 10,000 things and actually find a trend; that is simply not possible. And that is what AI is incredibly good at: looking at 10,000 things automatically. So that was one of the early areas. When we came out with the vision, as I talked about, that was one of its three pillars. Pillar one is: help product managers synthesize qualitative feedback, help them build evidence for the why behind what their engineering teams want to build. A core job-to-be-done as a PM, your job, is to answer why. Why are you building this thing? Who's it for? Why are we doing it? What did they ask for? I think this is an incredible application, and so that was one of the first pillars.
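For a sense of what synthesizing 10,000 feature requests could look like in code, here is a minimal sketch that batches requests through an LLM and tallies the themes it returns. The prompt, batch size, and model are assumptions, and a production version would validate the JSON the model sends back.

```python
# Minimal sketch: surfacing trends in a large pile of feature requests
# with an LLM instead of keyword-searching a spreadsheet.
import json
from collections import Counter
from openai import OpenAI

client = OpenAI()

def themes_for_batch(requests: list[str]) -> list[str]:
    prompt = (
        "Assign each feature request below a short theme such as "
        "'ease of use', 'export', or 'integrations'. "
        "Return only a JSON array of theme strings, one per request.\n\n"
        + "\n".join(f"- {r}" for r in requests)
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    # Fragile on purpose: real code would validate and retry on bad JSON.
    return json.loads(resp.choices[0].message.content)

def trend_report(all_requests: list[str], batch_size: int = 50) -> Counter:
    counts: Counter = Counter()
    for i in range(0, len(all_requests), batch_size):
        counts.update(themes_for_batch(all_requests[i : i + batch_size]))
    return counts  # e.g. Counter({'export': 1200, 'integrations': 900, ...})
```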

Speaker 3:

A second area where we had a lot of qualitative data, or text, content I should say, is our in-app messaging platform, where we communicate with users. Building guides takes time. It takes time.

Speaker 3:

So we were wondering: okay, if a customer already has a document on how to use a piece of software, can we take that content and build the little messages that guide people through it? It has this concept of step-by-step; would it be smart enough to do that? Turns out it works pretty well. Literally, you type in what you want to do, it goes out to a knowledge base, comes back and, boom, auto-builds it. Now, is it done? Not done, but I'd say it gets you 60%, 75% of the way there. That is very cool. It's a generative application that is leveraging the LLM APIs, it's doing a bunch of other work, and it is super, super cool.

Speaker 3:

So that was the second pillar; we called it 'how do you product-enable existing content?' No one's going to read PDFs or a bunch of knowledge bases. You want that content right there in context, but it's trapped in these other systems, so how do we untrap it using an LLM? And that solves a real pain, because people spend hours and hours and hours doing this. So that's kind of how we started.
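Here is a hedged sketch of that "product-enable existing content" idea: asking an LLM to turn a help article into step-by-step guide copy. The function name and JSON shape are invented for illustration; this is not Pendo's API.

```python
# Minimal sketch: turn a knowledge base article into draft steps for an
# in-app walkthrough. Names and the JSON contract are invented.
import json
from openai import OpenAI

client = OpenAI()

def draft_guide_steps(kb_article: str) -> list[dict]:
    prompt = (
        "Turn this help article into an in-app walkthrough. Return JSON "
        'shaped like {"steps": [{"title": "...", "body": "..."}]} and keep '
        "each body under two sentences.\n\n" + kb_article
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},  # ask for strict JSON back
        messages=[{"role": "user", "content": prompt}],
    )
    return json.loads(resp.choices[0].message.content)["steps"]
```

As Todd notes, output like this is a 60% to 75% draft; a human still reviews and finishes the guide.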

Speaker 3:

Another, more recent one that we're building, actually right this second, in that same vein of product-enabling content, is localization. It is painful. I was at a customer dinner on Wednesday night, and the guy's like, that's painful. You export things out, send them to a translation house, re-import them, check them manually. It sucks. Let's just be honest, it's terrible. But if you're an international business, you want to put things in your customers' language; it's kind of a must-have. Now we have features that we're finishing up where you literally hit a button and it uses these amazing translation capabilities, all powered by

Speaker 3:

LLMs, that give you really accurate results. It solves a pain. So right there, I know I said three things; I focus on the pain first. Technology serves the pain, and it just happens to be a better answer than any prior answer that we had before.
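As a sketch of that one-button localization flow, here is what a single LLM translation call might look like. The instruction to leave placeholders untouched is an assumption about what a real pipeline would need, not a description of Pendo's feature.

```python
# Minimal sketch: translating in-app guide copy with an LLM, replacing
# the export / translation-house / re-import loop described above.
from openai import OpenAI

client = OpenAI()

def translate_guide(text: str, target_language: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": f"Translate UI copy into {target_language}. "
                        "Leave anything in curly braces untouched."},
            {"role": "user", "content": text},
        ],
    )
    return resp.choices[0].message.content

print(translate_guide("Welcome back, {user_name}! Click Next to continue.", "German"))
```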

Speaker 2:

And all of those pains sound very familiar to me as someone who is working on a product. Were those things already on your roadmap, and you're now just approaching them in a different way with the use of AI? Or had you kind of gone, we can't really do anything with this, because we can't automate it, or the technology isn't available to do anything game-changing in this area?

Speaker 3:

Great question, great question. All your questions are great, by the way. The answer is yes, both; that's the accurate answer. The example I gave of auto-generating a guide from an existing knowledge base, that was definitely not on our roadmap.

Speaker 3:

We had not even thought of it, or realized it was possible, until some of the brainstorming sessions, and then the hackathon just did it. Just did it. I saw it at the hackathon and I'm like, wow, that is super cool, okay, and super valuable. So that one was kind of organic: yes, we should do this, this is like a 10x feature, I love it.

Speaker 3:

The other pieces were more like what was on the roadmap. We knew, for example, that product managers were struggling to triage these 10,000 items they're getting a month. That's been a pain I've been wanting to solve for years, and we've done little things along the way to address it. We made it easier, made it better; we've had filters and this and that. So yeah, we've attacked it for years. It's just that this technology is such a great solution to that problem. So it did inform how we rethink our roadmap around it, but I don't think it really changed our strategy around that product. We know the pain. We know what we're trying to do for customers, for product people.

Speaker 3:

That one felt more additive. And, like, accessibility: we've had accessibility in our product for a long time, and we know it sucks and it's painful, so I think now we have a better solution for it. That feels like a very natural answer. So it's both, right? It's a combo. Sometimes this technology shows you something that you didn't believe was possible, and you go, yeah, let's do that, that seems really, really awesome. And sometimes it's simply a better answer than what our previous answer was.

Speaker 1:

Yeah, so you're telling us stories about where it's worked well, where it's helped. I know in my own experiments, sometimes it goes a little askew, makes things up and doesn't work quite as expected. So I'm curious: have you had a false start in using this technology, using AI approaches to things? What did you learn from it?

Speaker 3:

Yeah, so I'll tell an interesting story and be very honest. As I said earlier, we started with our own machine learning models prior to these LLMs coming out, and one of the first problems we were attacking was classifying NPS qualitative data. You get all this long text, and people want to know what the general theme or sentiment is. And go back to jobs-to-be-done: generally speaking, even at Pendo, what our own people would do is export all the quals, put them in a spreadsheet, and manually assign a theme to each one. Those themes would then generate reports that went to executives, and even the board of directors would see this stuff. That would tell us, hey, our NPS has gone up or it's gone down, and this is why. That would, frankly, be the why behind certain investments: people don't seem to like our analytics these days, so we're going to throw three teams at analytics and address this issue. So that's how we've generally used NPS, as a sort of measure of sentiment. For example, we made a boneheaded decision a few years ago to understaff support to save money, and guess what? Support as a theme in NPS literally skyrocketed. We pivoted, made changes, and then it came back down again. So that's the pain, that is the job to be done, but it is a manual process and everyone has complained about it.

Speaker 3:

So we started building our own machine learning models, and what we learned is that they were technically correct, but they weren't something I could ever put in front of, say, our board of directors. It would parse out this large piece of text, and one of the tokens that came back, one of the themes, was 'datum', the singular form of 'data', which... I don't know an English speaker that actually uses that word. You would just never do it, because you never talk about one piece of data; it's always more than one. But the machine picked the singular form because it was technically correct, I guess. And that was, oh, we can't ship that. We just can't ship it; I would never do that. So we had to do a couple of things differently. The other part of this is that right as we were ready to ship this GA, ChatGPT came out, and we're like, ooh, did we just waste a year and a half of development on this homegrown thing? There was a lot of stress around it, a lot of conversations, and we actually paused the GA to figure out how LLMs did it.

Speaker 3:

So we did a couple of things, and I'll explain. One is, when we see text for the first time and we have no themes available, or a limited number of themes, we use GPT to pick the one-word theme. And guess what? GPT is better than picking the word 'datum'. It just is; it won't do that. It's smarter than kind of generic models. But even what it picked wasn't awesome, and it was not perfect. So then we decided to keep our own internal models and give them basically unique training sets. So now I can go in, or our product ops team can go in, and if they don't like a theme that was assigned, they reassign it, and that trains the model. Eventually, over time, you get to a point where it's as good as a human would do it, which is what you need.
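A minimal sketch of the hybrid Todd describes might look like the following: an LLM proposes a one-word theme, biased toward the known theme list, and human reassignments are stored as training examples for the bespoke model. All names here are invented for illustration.

```python
# Minimal sketch: LLM-assisted theme assignment with a human-in-the-loop
# correction step that accumulates training data for an in-house model.
from openai import OpenAI

client = OpenAI()
known_themes = ["usability", "support", "pricing", "analytics"]
training_examples: list[tuple[str, str]] = []  # (comment, corrected theme)

def assign_theme(nps_comment: str) -> str:
    prompt = (
        f"Pick the best one-word theme for this NPS comment from "
        f"{known_themes}, or propose a new common English word if none fit. "
        f"Answer with the single word only.\n\n{nps_comment}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip().lower()

def reassign(nps_comment: str, corrected_theme: str) -> None:
    # A human (e.g. product ops) overrides a bad label; the pair becomes
    # training data so the bespoke model converges on house vocabulary.
    training_examples.append((nps_comment, corrected_theme))
    if corrected_theme not in known_themes:
        known_themes.append(corrected_theme)
```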

Speaker 3:

Like, that is the requirement. The requirement is, if I'm putting something in front of our board of directors, I want to literally hit a button, it exports, it goes, and we do no work. That's true automation, right? But it takes a bespoke, trained model. It cannot be generic, because, for example, we use the word 'usability'.

Speaker 3:

I don't like the word UX. I don't want my board members to have to think about what UX means; I want them to see 'usability'. So that's a Pendo thing, but your company may want to use the word UX and show that instead. So it can't be a completely generic model. This is a project that, frankly, we should have shipped in a year and that ended up taking a year and a half, mostly because the technology landscape was shifting under our feet while we were doing it. Plus, we then decided we had to build our own version of training, which was a much heavier lift, just so we could honestly solve the job to be done, which is: I want to go from raw data to board slide. And we wouldn't get there if we didn't have all these pieces together.

Speaker 2:

I think one of the things that feels like a bit of a theme in what you're saying is that shifting landscape: how the technology is moving forward whilst you're also trying to plan your product and work on your product, but also being able to just play with that technology and see what it does under certain conditions, when you try to make it do certain things for you. You mentioned that you did a hackathon, which to me is that sort of ultimate form of play. But how do you embed that kind of culture of play and learning with a new technology like this, to ensure that you're getting the benefit of everyone in the team understanding what the capabilities are and aren't?

Speaker 3:

Yeah. One of the things we also did around the time of the hackathon, and then even subsequently, is we decided we weren't married to one LLM solution. We knew it was kind of the Wild, Wild West; all these things are brand spanking new, and there are different cost implications.

Speaker 3:

We have a long-standing relationship with Google, but Google was behind OpenAI at first, so we decided we would play around with all of them, and even to this day we actually use more than one LLM in our product. We have now learned what prompts work well for which LLMs, and which yield different results, and there's constant experimentation; our product and QA teams are spending a lot of time, honestly, finding out what works the absolute best. So the very, very short answer is it's a combo. Over time, we're evaluating whether we give customers the choice of which one they want. We're even looking at foreign markets where the language is different enough that we're almost warranting a specific LLM for that particular customer, which I think is a pretty interesting problem domain. But other things, like privacy, are also a big, big piece of it. But I'll go back.

Speaker 3:

I think one of the things people may have heard about is this concept of prompt engineering, and prompt engineering is almost a new type of engineering that's evolved: what you ask, how you ask it, and what context you pass it dramatically adjust the results. That's led to new innovations, like this whole RAG thing, which is essentially combining some type of vector database with an LLM so that it can capture enough context to send to the LLM, which reduces hallucinations and obviously improves accuracy. That also had to be built and experimented with over time because, of course, everyone sees this API and thinks, oh, let's ask it some questions, I'm sure it's going to work fine. No, it didn't work fine. Through our QA we realized, oh, we should probably do this, we should probably add that, and so a lot of work and iteration has gone into that.
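For anyone new to RAG, here is a minimal sketch of the pattern Todd outlines: embed documents, retrieve the ones closest to a question, and hand only that context to the LLM. An in-memory numpy array stands in for a real vector database, and the models and prompt are illustrative assumptions.

```python
# Minimal RAG sketch: retrieval over embedded documents, then a
# context-restricted LLM call to reduce hallucination.
import numpy as np
from openai import OpenAI

client = OpenAI()
docs = [
    "Guides are in-app messages that walk users through a workflow.",
    "NPS surveys collect a 0-10 score plus free-text comments.",
    "Feature requests can be tagged and grouped by theme.",
]

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vectors = embed(docs)

def answer(question: str, k: int = 2) -> str:
    q = embed([question])[0]
    # Cosine similarity between the question and every stored document.
    sims = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    context = "\n".join(docs[i] for i in np.argsort(sims)[-k:])
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Answer using ONLY this context. If the answer "
                        "is not in it, say you don't know.\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content
```

The system prompt's restriction to the retrieved context is the part doing the anti-hallucination work; the retrieval just decides which context is worth sending.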

Speaker 1:

You touched on this earlier, talking about how you add the things that only you have to distinguish yourself. Is that the only way that you distinguish yourself using this technology? Because, as you said, these models, the GPTs, are available to everyone with just a simple API call. Is it just your own secret sauce, or is there anything else you need to do when you're considering how you use this in your company versus your competitors?

Speaker 3:

Yeah, I mean, look, I always think that every product, it doesn't matter what product, has a point of view, has a sort of ethos to it. So I think the way we implement GPT-like technologies is going to be in a Pendo way, not a non-Pendo way. We like to empower users, not be super magical. Actually, one of the things we haven't done, which a lot of companies went and did, is throw in a prompt-like UI. I'll be honest, I have a bias against this. It is just faster for me, when I know a product, to click a few buttons than to try to type out sentences. That, to me, seems painfully slow, and I certainly don't want to speak to it, because I mumble sometimes, or whatever. So we have elected not to do that.

Speaker 3:

If we ever did do it, I would think the way we would implement it would have to be very explanatory in nature: if you typed in a question, we would tell you exactly how to solve that question without typing in a question. That would be the only way I would feel comfortable shipping it, because I don't want people sitting there typing questions; that feels very inefficient. So that's an area where, yeah, we are going to behave a little differently than maybe everyone else does. And I think there's one other area, too.

Speaker 3:

There is a heightened sensitivity. We serve large enterprises, so from the first time we built anything here, we made sure people could turn things on, turn things off, and have a ton of customization, just because so many legal teams and privacy teams are frankly so scared of AI. It's bizarrely ironic: everyone seems to want it and everyone's talking about it, hell, half the companies out there are saying they're AI companies, yet so few large businesses are actually allowed to use these things. So we had to put in a lot of controls early, much earlier than we ever would for anything else we build.

Speaker 3:

It's because of the sensitivity around this thing; some people think it's going to hurt humans or whatever, so it's a fascinating thing. And we really care about that sort of stuff. So yeah, that's an area where we were probably a little more unique than everyone else.

Speaker 2:

Do you think that it has a sort of unnecessarily bad reputation? Or, I guess, in your situation, do you feel confident enough to say, no, it's fine, it's not going to go wrong, it's not going to influence things in a negative way, or anything like that? I'm just trying to understand whether there's actually really a need for people to worry, or whether it's just bad PR in general on AI at the moment.

Speaker 3:

I mean, okay, that's a really broad one. For how we're using it, and for how a lot of companies are using it, as long as you're using a professional version that you're paying for, and you're using all the various privacy controls, you probably should not be worrying about it. I can guarantee that with the way we're using it, our customers probably should not care at all. It's not going to leak any data; these are all sandboxed; there is virtually no risk. Now, am I saying society in general shouldn't worry about AI? I think it's pretty healthy to worry a bit. Look, there are a lot of things that do scare me about AI. We run a decent-sized business, and there are hackers that try to take advantage of our company all the time. Guess what? AI makes it easier for them to try to do those things, and that sucks. It means our security teams need to work twice as hard, because it's easier to impersonate a human than it has ever been before, and that worries me.

Speaker 3:

I worry about people creating things that look correct but aren't. That stresses me out. How do we know what's real, whether it's a picture or a video or an article? Disinformation is one of the things that scares me a lot today, because some people just aren't discerning enough to really ask the question: is this thing real or not? They see something, they think it's real, and that can lead to bad decisions afterwards.

Speaker 3:

So I think there are plenty of things that worry me about AI. Thankfully, we're just a software company trying to help our customers save some time, so for what we're doing, no, I think you could say it's pretty safe. But broadly, I think it's healthy that we as an industry, we as a society, are asking some questions. And this isn't me advocating for any particular level of regulation or what have you; those aren't my decisions to make, thankfully. But a healthy level of skepticism and concern and thoughtfulness? Yeah, I think that's warranted.

Speaker 1:

Todd, one of the challenges that a lot of product people have is that they're not in the room with the senior executive team all the time. They don't always know what all the concerns are, what's going on, and sometimes people come and say: why can't we go faster? Why can't we do this? Why can't we leverage AI to do things? Is there any advice you have for product people to get into the heads of their executives and be able to educate them? As in: this is what we need to know, this is what we're trying to do, this is how we can work better with you to execute the management vision, but we need some things from you. What should they know so that they can have a better conversation with their management teams?

Speaker 3:

Yeah, look, I think: make sure that they tie what they want to do with AI back to real business outcomes, to real customer outcomes. Say they came to me and said, we want to do this.

Speaker 3:

What's implied in your question is that they're doing something instead of doing something else. It's a prioritization exercise, and everything is a prioritization exercise. Okay, we thought this last thing was the most important thing, and now you're telling me you want to build this AI stuff. Is it because you think it's cool, or is it really more important than that last thing? If you make the case that it's more important than the other thing, okay, let's rearrange; I'm totally cool with it. But there's got to be a strong why behind it. One of the most interesting whys, one that even my board challenged me on when they thought we were going a little too slow (this was right around the time we did the hackathon, maybe just before), is this: in tech in general, over the last year, year and a half, the demand environment has gotten tighter. With the end of the zero-interest-rate period, budgets have tightened, CIOs are saying they're spending less, and growth is a little lower across the tech economy. But you know what's not less? AI. AI is more.

Speaker 3:

So one of the business reasons, business outcomes, for really investing in AI is that it's one of the few areas where demand is at an all-time high. If you want to meet customers where they are, if you want to adjust to macro conditions, one of the things you can do is see what's hot and make sure you're taking advantage of it and giving it to customers. One of my investors said, hey, look, right now ice cream is down globally, but not chocolate ice cream; chocolate ice cream is really, really hot. Make sure you sell that. And of course, the chocolate ice cream was AI, so we made sure we had chocolate ice cream on the menu really, really fast, because there's a clear outcome there. Customers are ready for it, and here's the interesting thing: they want to be partnering with companies that are going to be the leaders and not the laggards. So the last thing you want to do is decide:

Speaker 3:

well, next year is really our year for AI. Because, guess what? Whether it's adding a lot of value to customers today or not is almost irrelevant. What I'm imparting in a lot of your questions, and hopefully a lot of my answers, is that we're learning. And I am not saying the AI Pendo is shipping today is going to be the AI that's perfect, or ultimately what our customers want. But boy, we're getting smarter and smarter, we're getting it into people's hands, and they're touching it. That's how you build great product: you iterate, and if you're not shipping, you're not iterating. So getting something out there fast, knowing that we can constantly iterate and constantly improve as the technology shifts, I think we're setting ourselves up to be a leader and not a laggard in this technology. Not waiting until it's a mandate or a requirement or people have to have it. That feels pretty good.

Speaker 3:

But look, it's a journey, and the more engineers can really focus on the why and the business outcome, the better that always works. And given this is such a strategic thing, actually, one of the great things right now is that something like 60% of CEOs mentioned AI in their earnings announcements in recent quarters, up from around 50% a couple of quarters ago. If you're talking about AI, you probably do have the executives' ear; this is something they actually want to hear about, because their investors are asking about it, everyone is asking about it. So if you have an idea, if you have something that's well formed, don't be afraid, bring it up, because it's already on their minds, and that is a good thing. Going to an executive with something that's already on their mind, you're much more likely to get airtime and find a connection than not.

Speaker 2:

So we're nearly at the end of the interview, and I've just got one more question for you. I asked ChatGPT what questions I should ask Todd Olson about AI and product management, and I think Randy probably did this as well, because he did the question prep today, and most of the questions we've already covered. Randy, I'm just ribbing you; I know you didn't do that.

Speaker 1:

No, it's just that it knows me too well. Or maybe I can be easily replaced; that's what we're finding out.

Speaker 2:

There is one, though, that we haven't covered. We'll have to keep this one short, because we've gone over time and we don't want to take up too much of your time, Todd. But what are your snapshot predictions for future trends and innovations in AI and product management?

Speaker 3:

Well, okay, I touched on it a little earlier already, but I think it's really this evolution beyond simply leveraging LLMs: looking at these tiny models, leveraging product data, having more purpose-built, domain-specific models, and then using those same technologies and models for product management. That leads to probably tighter generative applications. That's the future. Do I think that AI is going to replace the product manager?

Speaker 2:

No, no.

Speaker 3:

No. And it's why I'm also really careful about the terms I use when I refer to product management, even analytics. I was literally just speaking to a customer that kept using the term 'data-driven'. Can data drive a product manager? Maybe it can, but it probably shouldn't. I'd say data should inform a product manager; it's just an input, right? I think part of the magic of products, the products that we like, the ones that we talk about, the ones that are revered in our society, is that they're built by humans who have strong points of view that are informed by data.

Speaker 3:

You know, there are classic interviews around Steve Jobs where they claim that he didn't look at any data or do any focus groups.

Speaker 3:

He may not have done a formal focus group, but there are lots of reports of him standing in Apple Stores just watching people interact with technology. That is data; maybe qualitative, maybe anecdotal, but it is data. And he had a point of view, and that's why we like those products. I think AI can provide a lot more evidence and a lot more context and make product managers a lot smarter. They may get something back from AI that sets a light bulb off in their brain and points them in the right direction to build the right thing. But I don't think it's what's actually going to decide, or should decide, where we're going in the future.

Speaker 3:

Now, the other area that I'm also passionate about, sorry, I'm going to go a little longer, is that I do think AI, with all of this data, could help lead to full personalization within software. To me, that's the ultimate answer. Wouldn't it be amazing if you went into any software and it asked you a few things about you, and then immediately gave you a sort of AI-driven dashboard, homepage, whatever you want to call it, that had the things that you want? And then, if you didn't like something on that page, you could say, I don't like this; it disappears, a new thing comes back, and it's learning about you. I think AI can enable that, I really do. Now, what are those components? That's where the product management magic happens. But architecting applications in such a way that they are fully personalized, that's something I'm interested in. We'll see if we can build it. I think it'll take years, like a long time, but it's cool.

Speaker 2:

It sounds great. Todd, thank you so much for joining us. It's been really great talking to you today, all about AI.

Speaker 3:

Awesome. Thank you, it was fun.

Speaker 1:

Thanks, Todd.

Speaker 2:

The Product Experience is the first and the best podcast from Mind the Product. Our hosts are me, Lily Smith, and me, Randy Silver. Louron Pratt is our producer and Luke Smith is our editor.

Speaker 1:

Our theme music is from Hamburg-based band Pau. That's P-A-U. Thanks to Arne Kittler, who curates both ProductTank and MTP Engage in Hamburg and who also plays bass in the band, for letting us use their music. You can connect with your local product community via ProductTank: regular, free meetups in over 200 cities worldwide.

Speaker 2:

If there's not one near you, maybe you should think about starting one. To find out more, go to minetheproductcom. Forward slash product tank.