The Digital Project Manager

How Building AI Products is Different—and Why Product Teams Need to Evolve

Galen Low

AI is fundamentally reshaping how digital products are conceived, built, and delivered—and the shift isn’t just technical, it’s cultural. Jyothi Nookula, a seasoned AI product leader with experience at companies like Netflix, Meta, Amazon AWS, and Etsy, joins Galen to unpack what makes AI-native products so different from conventional ones, why building them demands new evaluation frameworks, and how product teams can evolve their skills and mindset to keep pace.

Whether you're dealing with unpredictable model outputs, shifting success metrics, or a team with uneven comfort levels around emerging tech, Jyothi offers grounded, real-world strategies for staying user-centered, experiment-driven, and confidently collaborative in the face of rapid change.

Galen Low:

Is there really anything that different about the process of developing a product that has AI-native functionality versus other products that might just tap into existing AI tech?

Jyothi Nookula:

When you are building an AI native product, you're dealing with three things that are different. First, AI is fundamentally unpredictable.

Galen Low:

What is the first thing you do as a leader when you notice that a team may not all be at the same level when it comes to understanding, wielding, and even just accepting emerging technologies like AI?

Jyothi Nookula:

I separate the problem into two distinct issues. The first issue is competency. The second is disposition. If you treat a disposition problem like a competency problem, you'll just make it worse.

Galen Low:

What are your top things that a product manager interested in developing AI products needs to have on their resume or in their portfolio to stand out?

Jyothi Nookula:

One is evidence of building something with AI, not just talking about it. The second thing is...

Galen Low:

Welcome to The Digital Project Manager podcast — the show that helps delivery leaders work smarter, deliver faster, and lead better in the age of AI. I'm Galen, and every week we dive into real-world strategies, new tools, proven frameworks, and the occasional war story from the project front lines. Whether you're steering massive transformation projects, wrangling AI workflows, or just trying to keep the chaos under control, you're in the right place. Let's get into it. Today we are talking about the future of the product manager, what it takes to develop AI products, how AI is being used to streamline the product development and release process, and what team leads can do when their product teams have uneven distribution of skills and attitudes around emerging technologies like AI. With me in the studio today is Jyothi Nookula. Jyothi has over 13 years of experience driving AI product and platform innovation at companies like Netflix, Meta, Amazon AWS, and Etsy. She also holds 12 machine learning patents and has mentored over 1500 product managers in their transition into AI roles through her education company, Next Gen Product Manager. Jyothi, thank you so much for being with me today.

Jyothi Nookula:

Hi everyone. So excited to be here today.

Galen Low:

I am excited as well, and I've been looking forward to this for weeks now. When you and I first chatted, first of all, when I was looking at your profile, I was like, wow, Jyothi is a powerhouse. There are a lot of brands and a lot of technologies in your profile that are envy-worthy. And I'm always a sucker for folks who are trying to help the next generation of any craft level up in an increasingly technological and now AI-oriented world. And then when we chatted, I was like, wow, we have so much in common. I sit on the project side, you sit more on the product side. But I'm really excited to dive into how things are changing and some of the things that you've learned along the way throughout your journey through AI and machine learning products. I know that you and I are probably gonna hit some tangents along the way that will be deeply interesting and deeply valuable, so I hope we do that. But the project manager in me created this roadmap for us today. To start out, I just wanted to get one big burning question out of the way, that uncomfortable but pressing question that I think everyone wants to know the answer to. But then I'd like to zoom out from that and talk about three things. Firstly, I'd like to talk about identifying and closing skills gaps on your product teams when dealing with products that leverage AI and machine learning features. Then I'd like to explore some examples of ways that your teams have used AI tools in the product development process, whether that's in research, data analysis, design, engineering, user testing, or something else completely. And lastly, I'd like to explore what the future looks like for the product management role, and what a resume or portfolio needs to have to even be considered for roles at places like Meta, AWS, Netflix, Etsy, and other heavy-hitting, AI-forward brands.

Jyothi Nookula:

I love it. I'm interested. That's the latest hot topic. I just love it.

Galen Low:

Awesome. Let's start out with that big question. So my big question to frame this all in is, I mean, it's around AI, because you've had a lot of experience working with product teams to develop AI and machine learning based solutions for giants like Meta, Amazon, Etsy, and Netflix. My question is: is there really anything that different about the process of developing a product that has AI-native functionality versus other products that might just tap into existing AI tech?

Jyothi Nookula:

That's a great question, because this really gets at the heart of what's changing in product development right now. And the honest answer is fundamentally yes, but not in the way that people think about it. So here's what I mean. When you are building an AI-native product, you're dealing with three things that are different from traditional software, or even products that just integrate with AI through an API. The first is that AI is fundamentally unpredictable. In traditional product development, you write deterministic code: if this happens, then do that. If I click on this button, it goes to this next screen, and every single time I click on that button, it will do the same thing. But with AI-native products, you're working with probabilistic systems, so your AI feature might work differently each time it runs. What this means is your entire QA process, your edge case handling, your reliability guarantees, all of that has to be completely rethought, because you're not just testing if something works; you are testing whether it works well enough, and whether it works consistently, across a distribution of outcomes. That's the first thing that's fundamentally different: that unpredictability, deterministic versus probabilistic. The second is that you are now building products on shifting ground, because the underlying models evolve in ways that you can't or don't control. For example, a new model might change behavior. It may not be a breaking change, but it changes behavior ever so slightly, or it expands capabilities, or it introduces some new failure modes. And these happen overnight at the pace at which things are moving today. Unlike traditional dependencies, where you could version control your breaking changes and document updates, these model updates are shifting faster than we can even catch up with them. And the third, and this is a big one, is your success metrics have to be different. You can't just measure, did this feature execute successfully? Now you have to measure things like: was the output useful? Not just that it executed, but was it useful? Did it match the intent of the user, the question that the user asked? And how do you even define good for your use case? So you end up needing much tighter feedback loops with your users, and you have to bake these evaluation mechanisms right into product development, which is very different from how traditional product development is done. So those are the three things that are very different. And products that just tap into existing AI, like if I'm just adding a ChatGPT integration, can still be like traditional integrations. You still have success metrics and evals that you have to keep tabs on, but there, at least, there is a contract. When you're building your product as an AI-native product from the ground up, that's totally different, and that's where these three things impact your product development even more.
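(To make the deterministic-versus-probabilistic point concrete, here is a minimal sketch, in Python, of what an eval "across a distribution of outcomes" could look like. The generate_reply stub, the pass criterion, and the 90% threshold are illustrative assumptions, not Jyothi's actual framework.)

```python
import random

def generate_reply(prompt: str) -> str:
    # Stand-in for a probabilistic model call: a real system would call
    # an LLM here and could get a different answer on every run.
    templates = [
        "Your refund was issued on 2024-06-01.",
        "Refund issued 2024-06-01. Anything else I can help with?",
        "I believe the refund should be processed at some point soon.",  # vague reply: fails the check
    ]
    return random.choice(templates)

def is_good(reply: str) -> bool:
    # Toy quality criterion: a good answer states the concrete date.
    return "2024-06-01" in reply

def eval_across_distribution(prompt: str, trials: int = 200, threshold: float = 0.9) -> bool:
    # Deterministic QA asserts that one output is correct; probabilistic QA
    # samples many outputs and requires the pass *rate* to clear a bar.
    passes = sum(is_good(generate_reply(prompt)) for _ in range(trials))
    rate = passes / trials
    print(f"pass rate: {rate:.1%} over {trials} trials")
    return rate >= threshold

if __name__ == "__main__":
    eval_across_distribution("When was my refund issued?")
```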

Galen Low:

So funny, 'cause coming into that question, I was like, I'm gonna ask this question and she's probably gonna say, yeah, it's kind of the same with a few minor things. But that absolutely changes the game. I love what you said about measuring intent and the measurement side of things, right? Because I come from the project world. You know, we've got our requirements document. It's binary, yes or no: did it do the thing that it said in the functional spec? But then there's all this ambiguity. A, because it's probabilistic and the outcome won't be the same every time. And B, because the underlying architecture is in some ways a black box, evolving on its own every day, faster than humans understand. And I'm like, that's actually way different from how I think most digital products have been created in recent history. But it makes sense. Here's what I'm thinking in my head. You've got over a decade of experience working with machine learning and AI. A lot of folks are just getting started; it's new to them over the past two years or so. ChatGPT kind of lifted the lid on how AI can be used, but it's been in software and in our digital products for a while now. And the thing in my head is the "because you watched" feature on Netflix. I'm like, that looks so simple, and it's probably not, right? That algorithm, the machine learning in the backend, the data processing. It's not just that everything in Netflix is tagged with a certain taxonomy; it's not like related links on a website. It is actually about behavior and intent, and it's easy to get wrong. And it's easy to almost never find out how wrong it is until you get that tweet, right? That's like, oh my gosh, my Netflix never gets it right, because I watched this one thing and now it's suggesting, whatever, 18 seasons of Barney.

Jyothi Nookula:

It happens all the time, even with Instagram Reels and TikTok. I'll go and watch that one video of someone skiing, and then I'm flooded with ski videos. Yeah.

Galen Low:

And I think it's something we take so for granted. I think it would be easy for folks, including myself, to be like, I don't know, that looks pretty tame, pretty easy, same trick, same skillset. But when you really unpack it that way, it's not really. I wondered if we could maybe zoom out a bit from there, because in addition to your rather impressive background leading digital products for brands like Amazon, Meta, Netflix, and Etsy, the ones I've mentioned, you are also the founder of Next Gen Product Manager, which has, among other educational offerings, a five-week bootcamp to transition from regular, I guess I'll say conventional, product management into AI product management. And frankly, the fact that your course even exists tells me that not all the conventional product management skills are really cutting it these days. There is a next-gen, more forward-thinking mindset and skillset that is needed. And while you may have gotten to handpick your teams throughout your career, it stands to reason that not everyone on your teams has had the exact same competencies and attitudes when it comes to AI and other emerging tech. I guess my question is: what is the first thing you do as a leader when you notice that a team may not all be at the same level when it comes to understanding, wielding, and even just accepting emerging technologies like AI?

Jyothi Nookula:

Yeah, and this is a very practical, real challenge, and something that I have dealt with a lot and continue to deal with constantly. I've been a people leader managing teams of different sizes, all the way from three to five to 10 to 12, so I've been through that spectrum, and the skill gap and the comfort gap with AI is the widest I have seen for any technology shift in my career. So here's what I like to do first when I'm faced with this problem: I separate the problem into two distinct issues. The first issue is competency, where people don't know how to work with AI effectively. The second is disposition, where people are skeptical or resistant or even anxious about it. Now you have to diagnose which of these two you are dealing with, or maybe you have a combination of both, but it's important to recognize which problem you're trying to address, because if you treat a disposition problem like a competency problem, you'll just make it worse. For competency gaps, my first move is to create some sort of shared context through doing, not learning. And this is the same ethos I teach my students in my five-week AI product management course, where they learn through doing and not just watching. So I don't like to send people to training or ask them to watch tutorials. Instead, I immediately embed AI into our actual workflows in low-stakes ways. It's about integrating Claude or GPT to write sprint retrospectives, or synthesize themes, or do some deep research, or craft user research summaries: something where people see the tool doing real work for them, and not necessarily looking at it as replacing them. Within about two weeks, I ask folks in a meeting, tell me about something that AI helped you do faster, or tell me an insight that you wouldn't have caught without AI. And then those shared experiences become your foundation for building competency together. For disposition issues, which are more to do with resistance or fear, I think the best way is to lead with curiosity and not with evangelism. So in my one-on-ones, I ask, what's your actual concern here? And I actually shut up and just listen. Because usually what comes out of those conversations is something legitimate. They'll say things like, I feel like this is going to devalue my craft. Or, I don't trust its outputs because I can't verify them fully. Or, I feel like I'm falling behind and it's overwhelming. And those are real concerns. And that's when I have learned that you can't logic someone out of fear, right? What you can do is acknowledge it, normalize it, and then show them a path forward. For someone who is worried about, say, the craft being devalued, I'll show them how AI can handle the scaffolding, and how they can now focus on the nuanced, high-judgment work that they never had the time for before. Or for someone worried about verification, I'd be like, all right, let's build the evaluation frameworks together. So now they're the expert in that AI quality assurance. The other thing that I have seen work really well is to immediately identify these bridge people. In every team, there are people who are naturally curious about AI. They're experimenting with it constantly, they're practical, they're getting results. So I give these people explicit permission to share what's working well, in a low-key way, like in stand-ups or show-and-tell sessions.
This creates that peer-to-peer learning, because when people see a peer, their own colleague, adopting it and getting results, they are more open to trying it out, versus a leader telling them, go do this, from a top-down perspective. And the last one, and this could be a spicy one, is I move quickly to establish that fluency with AI is becoming a baseline expectation. I'm empathetic about the learning curve and I'm very patient with the process, but I'm clear about the direction: this isn't optional. Just like how we all needed to learn the agile process, or we needed to learn analytics, or we needed to learn, say, mobile-first design, this is part of the job now. I have found that combining this high support with high standards and a clear expectation actually reduces anxiety. The teams that I have seen struggle the most are the ones where the leaders are ambiguous about expectations and also aren't providing real support. That's where you get resentment, because there's a lot of helplessness. So the goal isn't to get everybody to the same level immediately; the goal is to get everybody moving in the same direction with psychological safety and practical tools.

Galen Low:

I love that. First of all, I love that separation between competency and disposition. I'm glad you took it there. And as you were describing that, I'm like, this is almost a blend of change management and team building merged together in real time. You know, we have this tendency to think of change management as a thing you do once, as a big change is rolling out, almost like a one-time initiative. Versus this is almost change management on the fly, every day, with a team. To your point, supported, not just "go learn this on your own because you have to," without that clarity. It actually builds the collective competency and the collective disposition of the team, because it's a little bit flatter, a little bit more grassroots, and there's more clarity. And it's, I don't wanna say peer pressure, but peer support. We're sharing knowledge, we're sharing information. And I love what you said about some of the anxieties around AI being legitimate. You know, feeling like you're falling behind, feeling like there isn't enough time to do things, feeling like it's devaluing your craft. Honestly, there's probably no better way to find a path forward there than to see your peers actually deal with that and put it into practice in your day-to-day. Not just theoretical stuff; you're actually doing it together. I really love that. I'm a huge fan, by the way, of learning by doing, so I'm really glad to hear that's part of the way you teach as well. And honestly, that clarity bit really resonated with me. Your standards are high, your expectations are high, but the support is there, and the clarity is what will bring us forward. Because the decision of, is this just a passing trend and fad that you can ignore, or is it going to be as normal as typing? We're already past that. Or at least organizationally, for a lot of the folks that you've worked with, that decision has been made. You cannot ignore AI, so let's get there, but we need to move forward together.

Jyothi Nookula:

Yeah.

Galen Low:

You mentioned a couple of things in there about the little experiments, the pilots, to get people hands-on with AI to build their competency and their confidence. I imagine there's quite a wide range of ways that can expand, and places AI can be used in the development of products that are intrinsically also AI products. I mean, could you give us some examples, maybe, of some other ways that your teams have been using AI in the way they design and develop products?

Jyothi Nookula:

Yeah. I can actually give you sort of concrete examples across the entire product development life cycle.

Galen Low:

Oh, love it.

Jyothi Nookula:

Starting with discovery and research, right? We use AI to dramatically compress insight generation. What would we do normally? We do a lot of user research, say 15 to 20 conversations. So instead of spending a week identifying those themes, we feed those transcripts into Claude and ask it to identify patterns, contradictions, or edge cases. But here's the key: you don't just take the output as is. The researcher still reviews it, challenges it, refines it, so that human in the loop is very important. What AI gives you is a very strong first draft in an hour instead of a week, and the researcher spends their time on the high-judgment work: deciding which insights actually matter, which ones challenge our assumptions, what we should dig into next. The user researcher and product manager can work together to identify those patterns. And the same thing with support tickets, right? If your product is already out there in the market, the support volume is always crazy. So instead of manually having to review these tickets to figure out the patterns, we analyze thousands of customer support conversations to identify: what are the pain points? How big are these pain points? What's the reach of these pain points? And AI surfaces these patterns in a way we wouldn't have gotten manually, just because of the sheer volume that comes through the support channels. And then as a product manager, you see the user research, you see the support volume patterns, and you are in a more empowered place to figure out what is a priority to go and fix, and which features you should actually prioritize on your roadmaps. Even all the way into documentation and communication, AI is eliminating that grunt work. We have seen folks using AI to write PRDs, summarize sprint reviews, and even create stakeholder updates. These are all things where AI can create strong first drafts. One of my PMs said, I used to spend 30% of my time writing documentation, and now I spend 30% of my time editing and refining that documentation, which is way more valuable because now you have a good starting point to build on top of. And we're also using AI to make documentation more accessible. Someone can ask, what did we decide about this payment flow redesign? and get an answer synthesized from different Slack threads, meetings, and PRDs, right? So AI is also helping make that documentation more accessible. And it goes across the product development lifecycle, from experimentation to documentation to even how we interact with engineering. As product managers, we own and write PRDs, and PRDs are not going anywhere; PRDs still continue to exist. But there's this new component of a prototype that gets added, because why just hand over a list of stories when you can actually try it out and prototype it? Get a simple read on product-market fit, and hand over the prototype as an idea to engineering. So the new way of PRDs is: you have a PRD that captures the vision, the evals, how it should work, when it should work, what good looks like, what bad looks like. But then you also have this prototype that shows the interaction, the interactivity, how the product needs to be imagined. So we are seeing it across the product development lifecycle.
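(A minimal sketch, in Python, of the transcript-synthesis step Jyothi describes, assuming the Anthropic Python SDK, pip install anthropic, and an ANTHROPIC_API_KEY in the environment. The transcripts/ folder, the model alias, and the prompt wording are illustrative assumptions, not her team's actual pipeline.)

```python
from pathlib import Path
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Load the 15-20 interview transcripts mentioned above (assumed layout).
transcripts = "\n\n---\n\n".join(
    p.read_text() for p in sorted(Path("transcripts").glob("*.txt"))
)

message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # assumption: any recent Claude model works
    max_tokens=2000,
    messages=[{
        "role": "user",
        "content": (
            "Here are user research transcripts:\n\n" + transcripts +
            "\n\nIdentify recurring themes, contradictions between "
            "participants, and edge cases. Quote supporting evidence "
            "for each theme so a researcher can verify it."
        ),
    }],
)

# The output is a strong first draft, not the final word: a researcher
# still reviews, challenges, and refines these themes (human in the loop).
print(message.content[0].text)
```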

Galen Low:

I love that you brought it there, because I was talking with some product people just the other week, and we were debating whether the PRD, the product requirements document, is dead. The person I was chatting with was saying no, because you still have to have those thoughts, and they still have to be good thoughts. And she had built a little app that helps you come up with those good thoughts: the basis for why your product fits the market, what features it should have, and what the priorities are. And then we tie that into prototyping. Again, it still has to have that vision; AI isn't going to come up with all the creative answers on its own, and that's good enough. You still need to have that strategic vision for what the product does, how it serves the needs of its users, how we're taking it to market. But I like that idea of the shift in somebody's job, right? Because I know a lot of product managers, and yes, the role is documentation-heavy, which has also become a superpower because, as you say, now they're spending most of their time editing. But it was already part of the process to have documentation in the first place, which means you can train AI on it. Versus all those people who didn't write anything down before; now they have to start documenting in order to actually benefit from it. So in a way, product managers are ahead. And I just love that idea of the first-draft machines. I had a question, though, because I wanna double-click on that. In the project world, I know we have a lot of folks who are like, yes, it's great for a first draft. Are you finding that your teams are using just that first-draft bot and then moving on to the other parts of the chain? Or are they feeding their edited versions back in to improve that machine, so it might actually become a better and better first-draft bot?

Jyothi Nookula:

Yeah, so we call this one-shot. People assume that it's a one-shot thing with AI, where you feed it something, it gives you a report, and then you can run with it, which is seldom the case. Usually you just get a strong first draft, but then you iterate: you put your thoughts into it, and then you feed that back in, asking it to critique your positions or find the areas that could use refinement. Then it gives you its thoughts, and you'd be like, this bit makes sense, but this other bit I don't agree with. It's like having a partner that you can continue to refine with until you feel really good about where you're landing. If you treat AI systems as something you give input to, get an answer from in one single shot, and then just go with, that rarely works. What I've seen improve the value is when you iteratively work through it and improve it. Which is why we say it's easy to teach how to do, but it's very hard to teach taste. Taste is still something that a product manager needs to own.
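(A toy sketch of this iterate-and-critique loop, as opposed to a one-shot call, under the same assumed Anthropic SDK setup as above; the prompts and the single critique round are illustrative, and the human-written pushback is where the "taste" lives.)

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def ask(history: list[dict]) -> str:
    # One turn against the model, carrying the full conversation so far.
    reply = client.messages.create(
        model="claude-3-5-sonnet-latest",  # assumption, as above
        max_tokens=1500,
        messages=history,
    )
    return reply.content[0].text

# Round one: the strong first draft.
history = [{"role": "user", "content": "Draft a PRD summary for a refunds chatbot."}]
draft = ask(history)

# Round two: instead of running with the first draft, feed it back with
# the PM's judgment attached, and ask the model to critique itself.
history += [
    {"role": "assistant", "content": draft},
    {"role": "user", "content": (
        "Critique your own positions: which claims are weakest? "
        "Also, I disagree with the success metric; propose two alternatives."
    )},
]
revised = ask(history)
print(revised)  # still a draft; the human decides what lands
```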

Galen Low:

I like that: taste. Do you find a lot of folks are, you know, like, okay, now 30% of my job is not just editing documentation but actually talking to a robot? Versus, like you said, a lot of the product manager role is quite human, right? You're doing user interviews and research. And yeah, you're gonna have a swath of data where you won't have an individual relationship with a human, and AI can definitely help there. But do you find that there's a loss of the humanity in product management? From a disposition perspective, maybe one of the anxieties or concerns is, well, I spend most of my time talking to a robot, teaching a robot, and it's just not the same as talking to a human.

Jyothi Nookula:

I don't know if it is as much about talking to a robot, because like I said, if it is just you and that AI system talking, then that's different. But you still have to convince your stakeholders, you have to talk to your engineering team. Product managers are in this hub where they have to connect with different teams. So in a way, it's more like having a helpful assistant with whom you can brainstorm. And I've seen my product managers, my leaders, and my peers actually use it that way: they're running, and they're just talking with it to brainstorm ideas. So it's less of that robotic thing, and more like, let's use this as a way to get started. You have this one companion who you can always, 24/7, go and talk to.

Galen Low:

To play the devil's advocate, you worked at some places where I am reasonably certain that someone has come and asked you, Jyothi, why can't we just automate this process? Why does there need to be a human in the loop? Couldn't we just grab all of those support tickets, run it through an agent, have it come up with the prioritization on new features, develop the new features, and release it without anyone being involved. A, you don't have to tell me if that's happened or not, but you can. And also like, I guess maybe my question is how do you push back against some of the like tech first approach rather than a human first or user first approach, especially at that level that you've played at, right? In some of these big tech enterprises where there is pressure to be like, yeah, but couldn't we just use this tech and then figure out what to do with it later? How do you even go about pushing back against that and making the case for some of the human in the loop stuff?

Jyothi Nookula:

Yeah, and this is something I encounter a lot, even when I'm consulting for a lot of these companies, and with my students who say, hey, this is the situation I'm in, how do I work through it? This is like a disease right now, where everyone wants to start with AI, and for better or worse, I feel like we are forgetting about users and their problems and starting with technology, which is very counterintuitive, right? And it's really hard to push back, because the institutional incentives are all pointing the wrong way. You've got executives who have read the same three articles about AI being existential. You have investors who are asking, what's your AI strategy, on every earnings call. And engineers who are genuinely excited; you see these hackathons. So the pressure to do something with AI is immense. But here's what I say: go back to product first principles, starting with the user and the problems rather than the algorithm. I always say this in my class as users before algorithms. When someone comes in really excited, or a VP says, hey, there's this new AI capability, whatever it is, multimodal or voice or whatever the new thing is, you can't win by saying no. Instead, I reframe it. I flip it to, oh, that's a very interesting technology, what problem are we trying to solve with this? And I make them do the work of articulating the problem, not in a gotcha way or anything, but more like: walk me through your user's day. Where does this fit in? What are the users trying to do today? What's the alternative if this doesn't exist? And usually one of three things happens. One is they realize that it's a solution looking for a problem. The energy fizzles out naturally, because they're not able to articulate a compelling user need, so no confrontation is needed. Or they find that it's a real problem, but the AI tech probably isn't the best solution. Maybe it's better solved with better UX, or better onboarding, or fixing some other broken process, or an easier, deterministic version of the technology; they get to realize AI is probably overkill when we do this exercise. And the third way it goes is the best case, right? They find a real problem where AI actually unlocks something new, and this is gold. This is where innovation really happens. At Amazon, one of the things we famously do is called the working backwards process, where we write a press release. So I make my PMs, or whichever peers I'm working with, actually write a press release for the product, where they need to talk about its impact, maybe add a user testimonial in place. It's really hard to fake something as amazing when you go through that exercise. So that I have seen to be really helpful in navigating those conversations. But there will also be times when it's coming from the top down. Say the CTO has decided, and it's really hard to go to a CTO and fight them on it. In those cases, I have learned to reframe it, to say: all right, if we are going to do this, let's at least do it in a way that's useful. Let's go find and solve a real problem, versus having to fight against this particular use case. So then I shift it to: this is great, but here is this other problem where it could be really useful. We have the data for it, we have a use case for it, we have the ROI.
I work through all of those in a proposal to say: yes, let's use it, but this other problem is more urgent. So these are some techniques that have worked for me. Early in my career, I used to think that my job was to protect the product vision from distraction. Now I realize my job is to channel energy: take that enthusiasm about new tech, even over-enthusiasm, and channel it towards using it the right way, towards the outcomes that really matter. The issue is not the enthusiasm; it's about how we channel it into something that's helpful for our users. So always bring it back to those product first principles: think about the user and the problem, versus starting with the technology. Like, you wouldn't just say, I opened a Word doc today, what should I write? and start writing, or something.

Galen Low:

Where's Clippy when we need him? I think, honestly, that was a masterclass in a nutshell on navigating product politics, because I think your perspective on the role is really useful. I do know a lot of product people, and as a project person I can also relate, where you feel like you're the gatekeeper, the guardian, the defense, the pusher-backer, right? The person who's going to say the gotchas and really put people in their place so that we can maintain whatever it was that we set out to do. I really like that idea of channeling energy, though, because it can lead to good things. I was interested in your third point: okay, we're doing this, let's at least make it useful. It's not a perspective I hear often, but I think it's refreshing, and it's how I know businesses to work at any scale. Sometimes you aren't given a choice, and it doesn't always have to be, well, let me get that decision in writing so I can say I told you so later. It can be constructive as well; the energy can benefit. And I come from a human-centric design background, so I'm always excited to hear people have conversations where they're bringing it back to the user and user benefit. I think it's such a useful reframe, right? Just that gentle, what problem are we trying to solve? And get them thinking about it too. It's even collaborative, you know? Instead of this defensive, how do I find a way to say no, cloak-and-dagger politics, it's, okay, let's reason through this and find the best outcome. By the way, speaking of outcomes, that press release thing, I'm stealing it. It's 1000% the thing: we do like to work backwards, but not to the extent where we get to the press release, and that is the pinnacle of describing what you did and why, and its impact. What a useful tool to get people thinking about the outcome, not just getting it released. Not just taking orders and getting the job done, but actually envisioning what outcome we're driving towards. What's it gonna do for people? What are we gonna be proud to say when we're done? That's super cool. I love that.

Jyothi Nookula:

Yeah, I love that press release concept. I have always used it; even after leaving AWS so many years ago, I still continue to use it in my work. It brings perspective.

Galen Low:

I love it. It's so useful. I wondered if maybe we could close out by talking about the future a little bit, because I think throughout this conversation it's pretty clear that the product management space is shifting quite a bit. The products themselves are changing, the tools and methods are changing, and the expectations around technical understanding, business understanding, and delivery strategy are also changing. What are the top three or four things that a product manager interested in developing AI products needs to have on their resume or in their portfolio to stand out?

Jyothi Nookula:

Yeah, thanks for asking this question, because it's a very practical one, and honestly, what I look for in a resume has changed dramatically in the last two years. So here's what actually makes someone stand out right now. One is evidence of building something with AI, not just talking about it. As a hiring manager or recruiter, we have a spot to fill; it's not a slot for research or exploration. So I want to see that you've actually shipped an AI-native feature or product. Not something like, I worked on a team that used AI, or I contributed to a strategy. What I'm looking for is: what problem were you solving with this AI product? What was the AI actually doing, and how did you evaluate it? What are some learnings that surprised you? For people who have shipped AI products, I think this makes sense. But a lot of people haven't shipped AI products, and they're like, how do I communicate this? That's why I say: go build yourself some side projects. This is something I tell my class too, and we do a lot of hands-on projects; they actually build a full portfolio kit by the time they come out. And I tell them, we start them as projects, but you have to convert them into products. It's not like you finish it, close your laptop, and go away; that's a project. You have to convert it into a product: share it with your friends, with your community, have them use your product, ask them to give you feedback, and then go and improve upon that feedback. Maybe slap a Stripe integration on it and charge $1 or 50 cents, it doesn't matter, but make it a revenue product if that's how your product is slated to work. Convert it into as close to a real product as possible. That creates a lot more impact on your resume than just talking about, say, an AI project you worked on. Many people do projects; you have to build products, whether that is at work or away from work. The second thing that's important is showing technical fluency. What I mean by that is you don't need to be an ML engineer, and I don't need you to code, but I need to see evidence that you can have credible conversations with engineers about how these systems work. On your resume, this might show up as evaluation frameworks that you have used, or how you have gone about doing A/B testing. What are the trade-offs that you have navigated, be it latency versus quality, or cost versus capability? What specific model types or architectures have you worked with? The language you use matters, because if your resume says, I leveraged AI to improve user experience, that tells me nothing. But if you said, we built this RAG architecture to reduce hallucinations in customer support responses, and it improved accuracy from X to Y, now I know what you were building. The test I usually use is: can you explain to an engineer why we should be using technique A versus technique B for this use case? And similarly, can you explain to a business stakeholder why that technical decision matters and how it impacts business outcomes? Because you, as a product manager, are at this intersection, and your main role is translating technical possibilities into business outcomes and business value, and converting those back into technical frameworks. That's the AI PM who performs the best. Last but not least is having the experience to navigate ambiguity and rapid iteration.
So, like I said, AI products are different. Models get updated, capabilities evolve, so you need to be comfortable with ambiguity, and that should show up in your resume. For example, if you tell me that you launched zero-to-one products in fast-moving environments, that tells me something, because zero to one means you had to work through that ambiguity. Or the experimentation mindset: that rapid prototyping, testing, and learning, even for your side projects, once you've converted them into products. That's when all of this starts to come together. So these are a few capabilities that I think will help an applicant stand out. What I'm definitely not looking for is buzzword soup. I don't need anyone to say, I leveraged AI/ML to drive synergies and optimize; that tells me nothing. Or going and doing six different certifications on Coursera, or being "passionate about AI." These are not things that will stand out. Of all the things, if I have to tell you what the pattern is, the pattern is learning velocity. How fast can you move? How deep and technical do you go? Do you understand how these systems work together? So if you're trying to break into AI product management and don't have direct experience, you can create it. You can build something; nobody's stopping you from building anything. It's okay if your work doesn't have those AI opportunities, but nobody's stopping you from building on your own. Or you can write about what you're learning as part of what you're building; there are so many challenges when you start building, and you can contribute. Create case studies of AI products that you admire, do teardowns, and figure out how you would improve them. Because the barrier to entry is really low now; you don't need to be a coder to go build your idea, right? And also, in this industry, not many people have 10 years of AI experience, so everybody's figuring things out together. What really separates people is who is actually doing the work to figure it out versus who is waiting for permission.

Galen Low:

That was such a good description of that conundrum. I'm sure you hear it from people all the time: oh, but I'm a product manager, I shouldn't be expected to code. But there's this middle layer, and I think you explained it really well: no, maybe you don't need to code, but you do need to have the vocabulary to do this translation, to understand the business and user need and the technical implications. And you do need to have this builder's mindset, where you understand the technology enough to know where there are friction points and what could go wrong. And it's not just, you know, I did what I was told working at big tech company X, therefore I'm a good hire. Because to your point, as a hiring manager, you're like, I don't have any idea if you can slice through ambiguity, or if you understand the user, or if you can speak the language of the cross-functional teams that you're working with. All I know is that you worked for big company X in a certain capacity, and you could have just been the person following somebody else's lead, not necessarily being daring and bold and learning at speed, having that velocity and openness to just engage with it and let it improve your craft, instead of saying that's somebody else's problem.

Jyothi Nookula:

That's why I say don't wait for permission, just do it. The barrier to entry is very low now to try.

Galen Low:

Jyothi, thanks so much for spending the time with me today. It's been so much fun. Before I let you go, where can folks learn more about you?

Jyothi Nookula:

Yeah, you can find me on LinkedIn under Jyothi Nookula. You can also visit nextgenproductmanager.com where you can learn more about the courses I offer across AI product management, agentic AI and PM Accelerator.

Galen Low:

Amazing. That's awesome. I'll also include those links in the show notes for folks listening or in the description for folks watching. Jyothi, thanks again so much.

Jyothi Nookula:

Thank you so much. I had so much fun.

Galen Low:

Alright folks, that's it for today's episode of The Digital Project Manager Podcast. If you enjoyed this conversation, make sure to subscribe wherever you're listening. And if you want even more tactical insights, case studies and playbooks, head on over to thedigitalprojectmanager.com. Until next time, thanks for listening.