Prime Venture Partners Podcast

This Startup Doesn’t Just Use AI, It’s Run by AI | Xiaoyin Qu

Prime Venture Partners: Early Stage VC Fund

What if your product manager, designer, developer, AND CEO were all AI?

In this game-changing conversation, Xiaoyin Qu, Founder of HeyBoss, joins Pankaj Agarwal (VP @PrimeVP_in) to share how she’s building a startup where software is shipped in minutes, led entirely by AI agents.

Before founding HeyBoss, Xiaoyin was a Product Manager at Facebook and Instagram. Today, she’s scaling her company with support from the OpenAI Startup Fund.

What you’ll learn:
🤖 How HeyBoss evolved from a kids' gaming idea to an AI development agency
👩‍💼 Why Xiaoyin stepped away from day-to-day operations and let “Astra”, their internal AI, lead the team
🧱 The infrastructure behind AI agents that ship full products from a single sentence
📊 Lessons in product design, QA, and customer feedback loops — all powered by AI
🧠 Why “taste,” strategy, and human intuition still matter (and always will)

⏱ Timestamps:
00:00 – Introduction
01:00 – From Stanford MBA dropout to COVID-era startup success
04:30 – Building the first AI-run dev agency
06:40 – Meet Astra: The AI CEO
08:46 – How AI manages product, design, dev & delivery
14:53 – Prioritising quality over speed in an AI world
21:49 – Humans vs. AI: What still needs us?
33:19 – The one human skill AI can’t replace

💡 Key Takeaways:

How to structure teams of AI agents to deliver finished outcomes
Why non-technical founders are a massive unlock for AI-driven products
What the future of work looks like when humans and agents collaborate
Tactical lessons for founders exploring AI-powered SaaS

This is not just a conversation - it’s a first-hand look at where the future is already headed.

📌 Follow us:
LinkedIn: https://www.linkedin.com/company/primevp
Twitter: https://twitter.com/primevp_in
Website: https://primevp.in

🔔 Subscribe for more founder-led conversations on tech, innovation & startups.

#HeyBossAI #XiaoyinQu #AIStartups #FutureOfWork #ArtificialIntelligence #StartupLeadership #TechInnovation #AIxProduct #AIAgents #PrimePodcast #IndiaStartups #VCInsights #WomenInTech #PrimeVenturePartners

Speaker 1:

Drop out from Stanford MBA to go start your first company... My first job was working at Meta and Instagram as a product manager.

Speaker 2:

The user types a sentence. I basically resigned as CEO and we made the AI agent, Astra, the AI CEO. It's going to be a little bizarre for an AI CEO to manage a real human. AI can work 24/7; they speak 30-plus languages.

Speaker 1:

It's my pleasure to welcome Xiaoyin, who's a serial entrepreneur based in the Valley, but I am more excited to talk to her because, you know, in her recent startup she basically replaced herself with an AI, right? So we'll cover that and a lot more. So welcome, welcome to the pod, Xiaoyin. How are you doing?

Speaker 2:

Thank you so much for having me. I'm Xiaoyin. I'm the founder and ex-CEO of HeyBoss AI. Really great to be here.

Speaker 1:

Great. And so, you know, before you started, obviously, you have been in the Valley, in the tech ecosystem, for many years. Would love to understand your journey overview, and what inspired you to not just start up but actually drop out from Stanford MBA to go start your first company, right? So we'd love to hear the journey.

Speaker 2:

Yeah, so I was born and raised in China until I was 18. I went to the United States for college. I studied computer science and economics. After college, my first job was working at Meta and Instagram as a product manager. Actually, it was 2015, like 10 years ago. I was an early product manager at Instagram, mostly focusing on growth and a lot of video. Back then video was, like, a very cool thing; it was a really new space for Facebook.

Speaker 2:

And then after that I went to Stanford for my MBA. Actually, after a year I dropped out, because during the summer of my MBA first year (you also had an MBA, so you know), during the first year most people would get an internship and work somewhere. I just chose to, you know, work on an idea. Back then it was a virtual events idea. So it was 2019, before COVID. There were no virtual events.

Speaker 2:

My mom was a doctor from China. She had to travel internationally for a conference in the US, and she didn't speak English well, so it was a pain in the ass to come to the United States for this conference. So I thought, okay, what if we can digitize the whole experience, make it a livestream and make it possible for other people to attend? That was 2019. We got some money from Andreessen to test out the idea, and that's when I dropped out. And then the month that we launched, the same exact month that we launched the first version, COVID hit. So all of the conferences got canceled, and our business was basically off to the races and exploded immediately. The timing could not have been better, right? It was great timing. But then after COVID it was the worst timing, because all the conferences went back to in person. That's when our growth kind of stopped a little bit, and we chose to sell the company in 2023. So that was my first venture-backed journey. We raised a lot of money and then we sold it; it was an okay outcome for me. So then, end of 2023, early 2024, we started this current company, HeyBoss.

Speaker 2:

Initially it was not called HeyBoss. It was a gaming company for kids, and we were using those AI agents. Initially, Astra was our AI agent to help us build games for kids. It was an educational game studio, so we used AI agents to help us brainstorm game ideas, and then, obviously, we trained the AI to be better and smarter, and now the AI agents can build those games. In addition to brainstorming, they can develop those games. It was really around three or four months ago, obviously, as the models got better, our agent flow got better, our ability to train AI got better, that we realized the agents could not only make games but also websites and applications, and that is a 100x bigger market than the kids' gaming company. So that's when we decided to make our agent a separate product focusing on all kinds of development, for websites, for apps, for people. And that was three, four months ago.

Speaker 2:

So we have this company called HeyBoss, which is the same exact company, just under a different name. I would say we started off as a co-pilot. It's always been for non-technical people, everyday people who are quick with an idea for a website or an app. But it was initially a co-pilot, so people could use natural language to make the AI code for them. That was a few months ago. And last week.

Speaker 2:

The biggest announcement is that we have officially become an autopilot, meaning the entire process starts from the user typing a sentence. The user just needs to type one sentence. There's an AI product manager, an AI designer, an AI developer, an AI content writer, an AI marketer. They will work together to give you a finished product in nine minutes; everything will be done. So we officially made the entire process automated, which also made my job disappear, because I realized it's much, much easier for the AI to manage all the AI employees than for me to manage all the AI employees. So, as a result, I basically resigned as CEO and we made the AI agent Astra the AI CEO, and she's managing six AI employees to deliver the entire process, entirely using AI.
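
To make that flow concrete, here is a minimal sketch of a single-sentence autopilot loop, assuming a generic call_llm stub in place of real model calls. The role names come from the conversation; the function names, the one-revision review pass and everything else are illustrative guesses, not HeyBoss's actual implementation.

```python
from dataclasses import dataclass

# Roles named in the episode; everything else in this sketch is hypothetical.
ROLES = ["product_manager", "designer", "developer", "content_writer", "marketer", "seo"]

def call_llm(role: str, prompt: str) -> str:
    """Stand-in for a real model call; returns a canned string so the sketch runs."""
    return f"<{role} output for: {prompt[:48]}...>"

@dataclass
class Deliverable:
    role: str
    content: str

def autopilot(user_sentence: str) -> list[Deliverable]:
    # 1. A CEO agent expands the single user sentence into one brief per role.
    briefs = {r: call_llm("ceo", f"Brief the {r} on: {user_sentence}") for r in ROLES}
    # 2. Each role drafts its piece; the CEO reviews and requests one revision pass.
    deliverables = []
    for role, brief in briefs.items():
        draft = call_llm(role, brief)
        notes = call_llm("ceo", f"Review the {role}'s draft: {draft}")
        deliverables.append(Deliverable(role, call_llm(role, f"{brief} Revise per: {notes}")))
    return deliverables

if __name__ == "__main__":
    for d in autopilot("I want a website for my fitness coaching business"):
        print(d.role, "->", d.content)
```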

Speaker 1:

Super, super interesting, right? So, you know, while the world is still trying to figure out what to do with agents, or how to build agents, you have this company which is running with the help of these swarms of agents, right? So I'm very curious. I get that you launched Astra as sort of an agent or a co-pilot for, let's say, these non-technical customers that you had. What was the evolution, and at what point did you start thinking, okay, maybe we don't even need any humans in the company anymore and it can just be done by agents? So, just curious to understand the evolution, for you to really say, okay, my job, I'm replaced, in a way, right? You're taking some confidence there, right? How did you get there?

Speaker 2:

Yeah, so I mean, right now we are the world's first AI-run dev agency. So, the type of things that we do, I'm not saying that we can build everything entirely without any human, but for our use cases, which are building websites and applications for your business, for your online business, for your personal website, for your community, all of that kind of stuff, historically you would hire an agency or a freelancer on Fiverr to build the entire thing. We can entirely replace that. I still think my job at Facebook as a product manager, or a software engineer's job at Meta, probably wouldn't be entirely replaced. But for our use case, we have officially replaced all the humans in the loop.

Speaker 2:

I think the biggest evolution came from when we first realized we're targeting non-technical users. People always say that non-technical users means no code, but we realized in the past few months that no code does not mean no technical knowledge, because for a lot of apps you still need to understand what is front end, what is back end. For example, in Bolt or Lovable or, you know, Replit, you still need to understand what a database is, and most of our users do not understand that. So we basically need to translate human language, as in, I want to build a website for my fitness coaching business, into what that means: you need a contact form, you need some backend for it. But the user doesn't want to know that. They just want to know that my website with a contact form works, my coaching business with payments works. So essentially, we realized we need more than just an AI engineer. We started off as an AI engineering co-pilot for people to build stuff without a technical background. We realized, no, no, no, an engineer is not enough. You also need an AI product manager, you also need an AI content writer, you also need an AI designer, so they can work together. So our user can just say, I want to build a website for my coaching business, and then everything will be done.

Speaker 2:

So that's when we realized, okay, first, we can't just stop at an AI agent for engineering. We have to also build the other roles: the product manager, the designer. Eventually we realized, okay, now we have all the roles that a dev agency has; we also need a way to manage all those agents. Right? Because how does the AI designer know what task to do? How does the AI designer know whether the work is good enough or not? We need an AI CEO to lead those agents, assigning tasks, evaluating their work and improving their work. So that's when we got the AI CEO that manages the team.

Speaker 2:

So essentially, right now, the current flow is that you as a user just type one sentence: I want to make a website for blah blah blah. Then you can see an internal Slack group where all of those AI agents are going to have a conversation, brainstorming. The AI CEO will be like, okay, let's make a website for this podcast, any ideas? The designer will be like, I think we should do the cyberpunk style, blah blah. The SEO person will be like, I think we need to validate those keywords. So you can see the entire discussion of the AI team, because we realized that is the only way to make sure we can design a good end result for our user, who is not technical. So it came from that evolution.
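
That internal Slack-style thread can be pictured as a shared message log that every agent reads before posting. A minimal sketch, with respond standing in for a real model call; the whole structure is an assumption for illustration, not the product's real internals.

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    text: str

def respond(role: str, thread: list[Message]) -> str:
    """Stub: a real agent would send the whole thread to a model and return its reply."""
    return f"({role}'s take after reading {len(thread)} messages)"

def brainstorm(goal: str, roles: list[str]) -> list[Message]:
    # The CEO agent opens the thread; every role reads it all before replying.
    thread = [Message("ceo", f"Okay team, new project: {goal}. Any ideas?")]
    for role in roles:
        thread.append(Message(role, respond(role, thread)))
    thread.append(Message("ceo", "Great, assigning tasks based on the discussion."))
    return thread

for m in brainstorm("a website for this podcast", ["designer", "seo", "content_writer"]):
    print(f"#project-thread  {m.sender}: {m.text}")
```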

Speaker 1:

Yeah, makes sense. So I get some of these sort of vertically focused or functionally focused AIs, you said AI product manager, you know, SEO content writer, AI designer. They're sort of trained to have an understanding of that particular function, right? But when you talk about Astra, right, who is sort of managing the, you know...

Speaker 2:

Astra is the name of the AI CEO, yes.

Speaker 1:

Yes. But for Astra, who has to sort of understand this varied context, right, of, I don't know, half a dozen agents, how does she make decisions, right? And of course, you know, there is judgment involved; it is subjective, right? As a human, of course, you were the CEO before her, right? So how did you sort of assign those skill sets to an AI agent? I'm very curious to understand that.

Speaker 2:

Yeah, so I would say, in terms of a job description for the AI CEO, Astra, there are three things that typically a human CEO would do. Number one is managing customers. Number two is managing employees. Number three is improving the product, improving your business, right? And I think she's doing all three. So, number one, in terms of managing customers, she can directly talk to our customers, so customers directly tell her their needs. She speaks 30-plus languages, works 24/7, works in parallel. I cannot replace that.

Speaker 2:

And the second part is managing employees, right? So in that case, it's all the AI agents that do everything. That means, if you have a high-level need, I want to build an app for my podcast followers, basically the product manager needs to turn that into a product requirement document, right? And then it's like, this feature needs to be done by this designer, and that is the SEO-related thing. So Astra is basically assigning those tasks to the different agents and seeing their performance, right? And then she needs to know, okay, are we good? Is the design good enough? Do we need to improve? That kind of process is managed by Astra.

Speaker 2:

And the third piece is that now the customers give us a lot of feedback, because right now you state an idea and we build it end-to-end in nine minutes. Then the customer can choose: either they're done and we host the entire thing for them, or they can say, I want to change, I want to add a feature, I'm not happy with this. So essentially, we have a lot of data on the feedback from the customer: what do they like, what do they not like. We also have data on whether the customer eventually decided to publish the thing and let us host it, so we know whether the end result is good or not. And then Astra can use that data to understand what's missing, right? If, for a certain thing, customers want something and we're always bad at it, she can try to self-improve and make it better herself.
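
The feedback loop she describes, publish-or-abandon outcomes plus complaints feeding self-improvement, could be aggregated as simply as this; the field names and rows below are invented purely for illustration.

```python
from collections import Counter

# Hypothetical outcome log: did the customer publish, and what did they complain about?
outcomes = [
    {"vertical": "restaurant", "published": True,  "complaint": None},
    {"vertical": "coaching",   "published": False, "complaint": "design"},
    {"vertical": "coaching",   "published": False, "complaint": "payments"},
    {"vertical": "podcast",    "published": True,  "complaint": "design"},
    {"vertical": "coaching",   "published": False, "complaint": "design"},
]

def weak_spots(rows):
    """Rank complaint categories by how often they coincide with an abandoned build."""
    return Counter(r["complaint"] for r in rows if not r["published"]).most_common()

# An Astra-like agent would retrain or rewrite prompts for the worst category first.
print(weak_spots(outcomes))  # [('design', 2), ('payments', 1)]
```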

Speaker 2:

Sometimes she may need humans to be involved. For example, when it comes to data privacy and data transparency, we're very careful; there the humans are very proactive. For other things, sometimes we wait for Astra to escalate to us. So I would say that's kind of the general flow. But the most important thing for us is that we are delivering a final result, which is different from all the other AI coding companies.

Speaker 2:

They are like IDEs: you prompt, they deliver some tasks, they finish a task. We do not finish a task; we deliver a final outcome. So that means, number one, users do not like to write prompts. Our users hate writing prompts. They just want to have one prompt and be done with it. So essentially, Astra needs to write the right prompts to get it done. That's number one. Number two is that we need to deliver the final outcome. Is the customer happy with it?

Speaker 2:

And different verticals have different success criteria. If you're a podcaster making a website for your fans, what makes that website good? Maybe design is important, but fan engagement is also very important. It's very different from the success criteria of a restaurant that maybe wants to showcase its beautiful food. So the criteria for those websites, even though they're all websites, involve very different vertical knowledge and vertical success metrics. So essentially, Astra and her team need to do the research to understand what makes that vertical convert better. They actually have the ability to do some research and look at benchmarks to understand what they can do to optimize for that particular use case. So everything we do is personalized for those use cases, focusing on the final outcome. Then people will use us to host the product, and we can make more money.
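
Those per-vertical success criteria could be encoded as weighted metrics that the team iterates against. A hedged sketch; every weight and metric name here is made up to show the shape of the idea, not HeyBoss's actual scorecard.

```python
# Hypothetical per-vertical scorecards: same product type, different success metrics.
SUCCESS_CRITERIA = {
    "podcast_fan_site": {"design": 0.3, "fan_engagement": 0.5, "seo": 0.2},
    "restaurant_site":  {"design": 0.4, "food_imagery": 0.4, "reservations": 0.2},
}

def outcome_score(vertical: str, measured: dict[str, float]) -> float:
    """Weighted score in [0, 1]; the team would iterate until this clears a bar."""
    weights = SUCCESS_CRITERIA[vertical]
    return sum(w * measured.get(metric, 0.0) for metric, w in weights.items())

print(outcome_score("restaurant_site",
                    {"design": 0.9, "food_imagery": 0.7, "reservations": 0.4}))
# 0.9*0.4 + 0.7*0.4 + 0.4*0.2 ≈ 0.72
```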

Speaker 1:

Got it, makes sense. And you know, if you look at the last few weeks, right, MCP came about, Google recently launched the A2A protocol, right, for various agents to talk to each other. But you have been doing this before all of this became a standard thing. And, as I mentioned, a very sizable chunk of the listener base of this podcast are technical founders who are trying to figure out how to make some of these decisions. So, before we move on to some of the other aspects of building in AI, would love to get a sense of some of the hard infrastructure and system design choices that you made to get these multiple AI agents to work together and, you know, eventually deliver something. And I saw a bunch of your videos; it's a fantastic product, delivered in nine minutes, right? It would have taken some very, you know, core design choices. So would love to maybe get a broad overview, right? Could be helpful for the founders who listen to this.

Speaker 2:

Well, so I think there are a few things.

Speaker 2:

Number one is that we focus on the use case first, right? So, our use case: I would say half of our users are non-technical people. They're just, like, mom-and-pops that have some need for a website or apps. Today they hire a freelancer; we would entirely replace that. The other half are probably product managers and some engineers, some designers, and they're trying to build some prototype really fast. They're typically zero-to-one prototypes, or they're typically adding a new feature. And for that kind of case, essentially, they want a good outcome, which is different from, maybe, Cursor.

Speaker 2:

With Cursor, you come up with a task, they deliver that task. We're less about that. For a product manager, it's more like: okay, step one, step two, step three, here are the three screenshots; at step two, people don't convert because there are too many buttons. And so when you talk to Cursor, you have to say exactly which button to increase or decrease and by what. For us, you just need to give us the three screenshots and say, at step two people don't convert, we think there are too many buttons, just use one button, and we'll figure out which button to keep and which other buttons to remove. And that also means the design needs to change and the content needs to change in order for that button to be removed. So we will figure those out.

Speaker 2:

And that is very different from Cursor, which is more like: here's your task, finish the task. Ours is like: does it really solve the problem so the user can communicate better? So that's for the product manager. And for the mom-and-pop, it's like: does this website work? Is it bug-free? Does it help them charge people? Is there a lead form? Can they see the customer form? So that's an even bigger scope in terms of final result.

Speaker 2:

So, because of that, we're always optimizing for quality over speed. Obviously we're fast enough that we can do nine minutes and deliver a final outcome, but we're not as optimized on a task basis. If you benchmark one prompt, how long does it take us versus a competitor? Because we do so many things within one prompt, typically it's slightly longer than a task-based model like Cursor, because we're delivering the outcome. So I guess we're always going to keep improving speed, but right now we've always been focusing on quality first over speed, because our users want to see a good outcome.

Speaker 2:

So that's number one. Number two is, we realized that a lot of those outcomes cannot simply be delivered as engineering tasks. A lot of them are comprehensive tasks based on product management, based on content, based on design. So we've always been focusing on figuring out what job functions are needed for this task to deliver a really good outcome, so we can loop in the right job function to get it done. That is fundamentally very different from Cursor, where you're focusing on engineering tasks and you know where the file needs to be changed. For us, there are so many agents that need to be looped in, and there are so many evaluations: the designer checks, is that good? The content writer says, is that clear? The product manager checks, is that a smooth user flow? So there are multiple checks and balances among different job functions to make sure the outcome is delivered. So our eval is more complicated, because we're focusing on the final outcome versus a task-based model.
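
Those cross-role checks and balances reduce to a gate where every job function must sign off before the outcome ships. A minimal sketch with placeholder rules; a real system would presumably ask each agent model to judge the draft rather than evaluate fixed booleans.

```python
# Placeholder role checks; each stands in for an agent judging the draft.
CHECKS = {
    "designer":        lambda d: d["layout_ok"],
    "content_writer":  lambda d: d["copy_clear"],
    "product_manager": lambda d: d["flow_steps"] <= 3,  # smooth user flow
}

def review(draft: dict) -> tuple[str, list[str]]:
    """Every job function must sign off before the outcome ships."""
    failures = [role for role, check in CHECKS.items() if not check(draft)]
    return ("ship", []) if not failures else ("revise", failures)

print(review({"layout_ok": True, "copy_clear": False, "flow_steps": 5}))
# ('revise', ['content_writer', 'product_manager'])
```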

Speaker 1:

It's very interesting, right? Delivering the final outcome in less than 10 minutes, you've set a high bar for yourself. So I'm curious how the QA of it happens. Or is it sort of baked into the whole process, right? Because at some point you need to do thorough testing before you roll out that product to the customer, right? In nine minutes, everything is happening sort of in parallel, right, not sequentially. So is it fair to assume that at every point somebody, maybe a separate agent, is checking that, okay, it is working, at least from a workflow perspective, at least from the perspective of the key objective of this product? Of course, there could still be some subjective changes, for example, the customer wants a different color scheme, or what have you, but the key objective of the product is getting achieved. Is that sort of baked into the whole design process during those minutes?

Speaker 2:

Yeah, that's correct. So, number one, we basically need to identify what success looks like for this use case and make sure we're delivering on that. That also means we basically need to use all the right tools. We actually have our own database, we have our own backend, we have our own integrations. If you want to build an AI app with OpenAI integrations, we directly pay OpenAI; you just pay us. So you could imagine, we are basically like a black box, but we do everything, right? Versus, like, Cursor, where you have to get your own API key and pay multiple vendors to get it done.

Speaker 2:

So I think, for us, obviously it's more challenging in that we have to figure out what to use at what time to make sure we deliver a good outcome. But to some degree, the downside is that if a non-technical user is asking for something that our current capability does not support, maybe they want to build some specialized fine-tuned model that we currently don't know how to do, then we're going to have trouble satisfying that customer. But because we're focusing on certain use cases, I think we're being pretty thorough when it comes to websites, and for something more common, community-like websites or applications, we'd be pretty good. But if you're building something very advanced, I think you're going to find we have some trouble, because we probably haven't learned that skill yet. So there's definitely a trade-off.

Speaker 2:

But, high level, for us we basically need to loop in everything and make sure we deliver a final outcome. And that also means, when we face a bug, the AI needs to know how to solve those bugs.

Speaker 2:

Obviously, we may still have some bugs, but compared to our competitors, you can see that we actually have a bug agent that solves bugs, making sure that when there's a bug, the agents actually know how to solve it, and they can proactively identify issues and solve them. I can't say we're 100%; obviously there are still things we can improve. But we want to deliver an outcome that's bug-free to our customers, rather than, you know, typically when you do it in Claude, when you face an issue, you have to click a button called fix issue, right? We realized that, at least for our users, especially the non-technical ones, when they see a bug they just turn away, because they're not used to seeing bugs in their life. They're used to hiring the agency, and the agency gives them the final result. So once we have a bug, we lose the customer, which is why we have to auto-fix as well. So essentially, we have a QA, a quality assurance agent, and a bug-fixing agent on that team as well.
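
An auto-fixing bug agent of the kind described is, at its core, a detect-fix-retry loop with a human escalation hatch. A sketch under assumed stubs for detect and fix; the retry budget and the names are arbitrary, not the product's real API.

```python
def bug_agent(build, detect, fix, max_attempts=3):
    """Keep fixing until the build is clean, then ship; escalate if we run out of tries."""
    for _ in range(max_attempts):
        bugs = detect(build)
        if not bugs:
            return build, "ship"
        build = fix(build, bugs)            # agent rewrites the offending parts
    return build, "escalate_to_human"       # never show a broken build to the customer

# Toy run: a build with two bugs, a fixer that clears one bug per pass.
build = {"bugs": ["broken form", "dead link"]}
result = bug_agent(build,
                   detect=lambda b: b["bugs"],
                   fix=lambda b, bugs: {"bugs": bugs[1:]})
print(result)  # ({'bugs': []}, 'ship')
```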

Speaker 1:

One last question on HeyBoss, and then we'd love to get broader views from you on AI, right? How do you manage Astra? How do you give her feedback? I mean, you're still on the board, right? So she reports to you, right? So what kind of systems do you have in place to say, okay, Astra, you need to improve on this or that?

Speaker 2:

Yeah, so I think there are a few things. For certain factors, when it comes to data privacy, transparency, user trust, I am proactive. And when it comes to execution, what do the users want, what are some of the things in the product we can improve, she's proactive. Because, you know, I really don't want her to drop the ball when it comes to privacy and safety and user transparency, and I realized that there is a tendency for AI agents to focus on the final outcome, ignoring the process. So, for example, you saw our internal Slack channel between the AI agents, where they have a discussion to show you what they're doing. That was not there at first.

Speaker 2:

When Astra thinks proactively, she's always going to, like, run the algorithm and deliver the result; she's not going to think about how to make sure humans understand. So that is something where I proactively said, okay, we need to add that. We need to translate what they're doing in code into a way that the everyday human understands. That's something where I have to be proactive. But when it comes to, here are the types of things users are not happy with, where maybe the AI can learn more or maybe a human can help, that is something where she has way more data and she can work way harder than me. She's just going to be proactive.

Speaker 2:

So I think, for me, managing means deciding which aspects I need to be proactive on versus the AI. That's number one. Number two is, how do I make sure the AI is getting the right information? What kind of information does she have access to, and what kind of information do I have access to, so that I can trust her inputs? Because I know that with the right input, she can deliver the right output. But I need to make sure she has the right input.
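
That split of responsibilities, human proactive on privacy, safety and transparency, the AI CEO proactive on execution, amounts to a simple routing rule. The topic labels below are illustrative, not an actual policy file.

```python
# Domains where the human stays proactively in the loop, per the conversation;
# everything else is delegated to the AI CEO, which can still escalate.
HUMAN_OWNED = {"data_privacy", "safety", "user_transparency"}

def route(topic: str, ai_confident: bool = True) -> str:
    if topic in HUMAN_OWNED:
        return "human_reviews_first"
    return "ai_autonomous" if ai_confident else "ai_escalates_to_human"

for t in ["data_privacy", "design_tweak", "new_feature"]:
    print(t, "->", route(t))
```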

Speaker 1:

Yeah, makes sense. I mean, I think in that way Astra seems to be the most amazing direct report you can have in the world, right? You just have to give her the right input, and you're almost sure that you'll get the right outcome or output from her. Very interesting.

Speaker 2:

Yeah. Our company is an AI agent; the core business does not require a human, which is why it's uniquely okay to have an AI CEO. But for other companies, if you're an offline business serving customers in person, I think maybe an AI CEO is much, much harder. Because all our data is online, we have all the data, and the entire process is done by AI, I think I'm just in a better spot to use an AI CEO that way.

Speaker 1:

Exactly. So do you not have any human employees at all? Or is there somebody who's doing the sales or bringing in these projects? Right, because if you're a dev agency, somebody needs to bring in these projects for you to work on, right? So is there any human at all?

Speaker 2:

Yeah, well, I mean, we used to have human employees. We had engineers, we had designers focusing on the customers. We have changed all of those roles, so no one is really developing or designing for a customer anymore. We do have human advisors on the development and design side, making sure that whatever the AI cannot solve, or is not doing the right way, gets delivered right, in a way that builds human trust. So we do have people on that. And we're still debating.

Speaker 2:

There is some heated debate within the company: do we need to have human support people? Because historically, obviously, we have all this knowledge base so the AI can just answer the question. But sometimes people just want to hop on a call and talk to a real human; they're willing to pay way more for that. [...] Now you get the first version for free, so there's no test and trial. You just input your idea, and you only pay us more if you like the result. So I think the marketing has basically changed from a phone call with the customer to build trust, so you can build the first version, to: hey, here's your first version for free, do you want to continue working on it and pay us? So, from a customer standpoint, the go-to-market is different, so we don't necessarily need to have a human hop on a call anymore. The customer just gets the free version and they can decide.

Speaker 1:

You know, yeah, makes sense. Would love your commentary on the broader debate on this topic.

Speaker 1:

Right? That is, how should AI be perceived in the age of this?

Speaker 1:

You know, knowledge work, right? Where a lot of, at least, speaking personally, in investing, a lot of my time in the day is earmarked for researching something, reading something, all of which AI can do much better than me, right? So if you were to make a broader comment on how we should plan for, or look at, AI, you know, maybe augmenting some roles, maybe replacing some roles altogether, how do you see that panning out? And how do we, as a human race, really continue to benefit from it, rather than suddenly be in a situation where we don't even know what we are supposed to do for, you know, maybe fulfillment or, at the very least, to earn an income, right? It's a fundamental question, right? Because whenever I ask that question in any forum, people always draw parallels that, okay, the Industrial Revolution happened and all that. But frankly, I don't think all those previous technology revolutions had this kind of impact.

Speaker 2:

Yeah, so I would say, because of what we do, I'm seeing two very interesting trends happening at the same time. The first is that our company entirely replaced a human dev agency. So on one end, we're seeing that jobs, function by function, are being replaced by AI, including my job. That is happening right now, and the AI is doing a better job. AI can work 24/7, they speak 30-plus languages, we serve 100-plus countries, and they can always respond to you; there's no delay. There's no way we can compete against that.


Speaker 2:

So when it comes to job functions, I think those functions will probably be replaced, unless the product is so specialized, in legal compliance knowledge or healthcare, that it's very, very difficult and the AI hasn't learned how to do it. And I think there will be fewer and fewer aspects like that, right, because the AI models are getting better and better. So that just means more and more of those jobs will be replaced, for sure, and that's what we see. But obviously, when it comes to signing contracts, people still want me to sign the contract. Even though they're paying Astra, they want me to sign the contract; they don't trust her signature. They still sometimes prefer me marketing the company. I realized in the past week, you know, that when me and Astra post the same thing, I get more views, because people still like to see a human. But I don't know, will that change? Right now, it's like, I don't think I'm more effective at marketing.

Speaker 1:

I'm more than happy to do a part two of this podcast with Astra, to be honest.

Speaker 2:

We do have a bot. We can give you a link; you can video call to interview her, actually. Yeah. But what I'm trying to say is, I think that even though we post the same thing, I get more views. So maybe that's not because I'm more capable; it just means people like me more, just because I'm human, you know. But will that continue? Because, I mean, frankly, she is more fashionable than me. She has many different outfits she can change. You know what I mean. So I don't know.

Speaker 2:

So that's on one hand. On the other hand, we're empowering a lot of people to build companies. Right now they're developing ideas, right? It takes them nine minutes from idea to launch. So a person taking a shower thinks, what if I can do this, maybe I can be the next Zuckerberg. Before, they would just write it down; now they can make it happen. It's nine minutes from shower to Zuckerberg. Nine minutes.

Speaker 2:

So we are seeing a lot of people doing that, including people that are not technical, that don't know code. I mean, they're just random bakery owners, but they can now compete against engineers on ideas. So that's number one. Number two, we are seeing some engineers who are very smart. They are making 100 ideas at the same time using our tool, because, why not? Astra can work, you know, a million times in parallel. So they say, here are my 100 ideas. Or even, I have a general theme: GPT, help me figure out 100 ideas. Okay, GPT helps you figure out 100 ideas. Now copy-paste the 100 ideas in parallel and make HeyBoss develop all of them, and then you can run SEO for each of the ideas to see which one clicks, and then you double down on those.

Speaker 2:

So I think the entire innovation process has changed. It has now become a numbers game too, right? That means more people without a technical background, with just a general theme, can use AI to directly compete against tech companies. So there are also more opportunities as well. So I'm seeing both. I'm seeing a lot of jobs being replaced, but at the same time, if you ask, does my company have sin? We replace a lot of people's jobs; maybe we're sinful in that. But we also create a lot of new jobs. So I wish I could create more jobs than I replace.
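
The hundred-ideas play is essentially a fan-out-and-rank loop. A sketch where build_and_measure stands in for kicking off a real build and an SEO test; the random scoring exists purely so the example runs.

```python
from concurrent.futures import ThreadPoolExecutor
import random

ideas = [f"idea-{i:03d}" for i in range(100)]

def build_and_measure(idea: str) -> tuple[str, float]:
    """Stub: kick off a build, run an SEO/traffic test, return a click-through score."""
    return idea, random.random()

# Fan out all 100 builds in parallel, then rank by whichever ideas click.
with ThreadPoolExecutor(max_workers=20) as pool:
    scored = sorted(pool.map(build_and_measure, ideas), key=lambda r: r[1], reverse=True)

print("double down on:", [idea for idea, _ in scored[:5]])
```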

Speaker 1:

Absolutely. That's a good segue, right? On one hand, the price to enter innovation, or the price to start a startup, has gone down, right? You have products like HeyBoss for, let's say, non-techies to quickly iterate, put an MVP together and test it out. On the more advanced side, you have, say, Cursor, which can help two 10x engineers put together something which would probably have taken twenty 10x engineers to write. That is one. And second, even with all this, the price has gone down, but let's say you run these several experiments and something takes off, or has very strong indicators of taking off; it has, in a way, become transient, right? Because, again, coming to the first thing, it is easier for anybody to copy after a point. So as founders, as well as investors, how should we think about innovation? Where does it lie? And in a world where innovating on new things or bringing about new ideas has become so low-cost, how do we really think about creating differentiation? Where does that come from?

Speaker 1:

And we have been talking a lot about, okay, there are several companies that are scaling to 100-200 million ARR. That's good, but I fundamentally feel that a large chunk of that growth is what I call experimental ARR, where everybody's trying stuff, right? What is yet to be seen is, okay, do they get that renewal after the annual contract is over, and stuff like that. And I'm pretty sure a good chunk of them, if not all, will not be able to hold on to that kind of growth, right? So there has been this push and pull for us as investors, as well as for founders who are thinking about it. So what would be your advice, recommendation or mental model? How do you look for opportunities, how do you continue building or sustaining differentiation, and how do you scale from there?

Speaker 2:

Yeah, I mean, I think there are a few things. One is that, even for us, you know, we're seeing many different use cases, right? We have to choose which use case to focus on. I think there are intrinsic differences between use cases. Some are the kind of thing where people need a lot of trust to start; once they choose you, they don't churn. There are other things that they like to experiment with, and they're also, you know, very likely to churn away. So I think there are fundamental differences between ideas when it comes to the initial barrier to entry, whether it's trust-based or product-based, and then basically the switching cost: if I were to stop using you, how costly would it be, whether in trust or compliance or product? Right. I do think there are going to be a lot of those consumer-app-like trends, where there's going to be one thing that's popular, then, okay, it's no longer popular anymore. There's going to be some tool that says, oh, I can do this, and then everybody's going to copy it, for sure. There's going to be a lot of churn in those types of use cases.

Speaker 2:

I do think, for a lot of the maybe larger enterprise work, the trust-based things where the user really needs to build affinity with you, those are still where you have a special distribution advantage, where you have a special trust advantage, where you have a special taste advantage. Those sorts of things, I think, are still going to win. So that's number one. Number two is that I think it's also important, when it comes to building a tech company with AI, even for us, that our AIs are constantly learning new things. Not only do we teach them new things, but they can also learn by themselves, right? So it's more about having the infrastructure where the AI can proactively learn and improve based on user feedback. Now they can do all kinds of research, they can add all the MCPs. How do you make sure your AI is always learning faster than other people's AI?

Speaker 2:

I think that's also really, really important, but it requires some technical skill, for sure, and technical infrastructure, but also, I think, proactive design to make sure your AI can improve faster.

Speaker 1:

Makes sense, makes sense. So what I'm also taking away, which you kind of did not say out loud, is that consumer products will sort of become transient by design. I mean, it has sort of been happening over decades, if you see, right?

Speaker 2:

I mean, if you have a network effect, maybe that's different, right?

Speaker 1:

Yeah. For example, I do think that something like a Facebook or an Amazon would still maintain their scale, because it's not just product at the end of the day, right? And that's where, you know, maybe a strong offline aspect of the business could create that differentiation. Amazon is fundamentally a logistics company with all this network design; of course, tech is a big enabler of that, right? But what you also sort of said is that on the enterprise side you can find more stickiness and differentiation, because that's where the whole trust thing comes in: the enterprise needs to know that you're not going to leave them tomorrow, that there's going to be some kind of sustainability there, right? In fact, that's what our thesis has been.

Speaker 1:

We obviously know that several AI consumer companies will come about; very difficult to underwrite which ones those are going to be. Versus enterprises getting served by AI companies, where you can underwrite it better from a risk perspective, basically, right? Because there is a whole trust angle, and once you get into an enterprise, unless you're screwing up, you're not likely to be replaced, even if, let's say, they can get something 20% cheaper or something like that, right? So that has been our thinking so far. Let's see how that evolves. But look, this has been, I mean...

Speaker 2:

I mean, I do think for consumer, though, there is one nuanced thing, which is called taste. Even if you have the perfect AI, there are still a lot of points where, I think, someone needs to hold the line for what the taste should

Speaker 2:

look like. You know, if you think about Instagram, they were not the first company with filters. It's taste, right? So I think that is probably going to be very, very important. Makes sense. Yeah, even for product managers, I think. When it comes to product managers, it's like, okay, we used to write PRDs, but now ChatGPT can write a PRD, right? So maybe now it's a competition to see which product manager has the best taste. And it's a little fuzzy, right, to define what is good taste. Yeah, yeah.

Speaker 2:

It's really, really fuzzy, the fuzziness of the differentiation. I think that should be the motto, right? But I do think there are ways you can evaluate taste, good or bad. It's hard to describe, but I think you can evaluate it. So I do think that is important. Yeah, makes sense.

Speaker 1:

This has been fantastic. Thanks a lot, really appreciate you taking the time. Thank you for having me. It was great fun, and we are hoping for a part two of this with Astra.

Speaker 3:

Right, okay, sounds great. If you want to know when new episodes are available, just search for Prime Venture Partners Podcast in Apple Podcasts, Spotify, Castbox or however you get your podcasts, then hit subscribe. And if you have enjoyed the show, we would be really grateful if you left us a review on Apple Podcasts. To read the full transcript, find the link in the show notes.