From Startup to Exit

Gen AI Series: Using Gen AI in a startup environment, A conversation with Jay Bartot of Madrona Venture Labs

TiE Seattle Season 1 Episode 13

On our one-year anniversary, we are releasing this terrific episode with Jay Bartot, partner at Madrona Venture Labs (MVL). Jay talks about the model MVL uses to work with startup founders and vet startup ideas.

Like many technology entrepreneurs, over the last 18 months, Jay has been deeply immersed in Generative AI technology, building experimental products, and testing the capabilities as well as limitations of the new AI frontier. Having seen and worked with slowly evolving AI and machine learning technologies over 25 years, his early experimentation with LLMs (large language models) quickly led to the realization and excitement that difficult problems he grappled with for many years (e.g., robust natural language understanding) are now virtually solved. The essence of his enthusiasm for the advances in capabilities Generative AI technology brings to entrepreneurs and startups was captured when he was recently overheard saying, “I mean, holy shit - these things are fucking awesome!”

Jay Bartot joined MVL in 2016. At MVL, Jay leverages his extensive experience in software engineering and machine learning to develop innovative, AI-driven software solutions. His expertise extends beyond technical execution, encompassing a spectrum of strategic elements of company creation and building.

Journey to MVL

Jay's immersion into computing began in the late '80s when he was swept up in the personal computer revolution, learning to compose and play music on his Apple Macintosh in college. This ultimately sparked his interest in software development, signal processing, and computer graphics. He quickly discovered the joy and power of building and distributing his own software to his friends and family, not to mention being paid to do what he loved. During the internet boom of the mid-90s, Jay joined his first startup, where he was introduced to the value of data, machine learning, and venture capital. Over the next 15 years, Jay co-founded several data-centric machine-learning startups in various verticals, such as e-commerce, online advertising, travel, medical informatics, and consumer video applications. These startups were later acquired by major companies such as Nielsen/NetRatings, Microsoft, Alliance Health Networks, and Hulu.

Brought to you by TiE Seattle
Hosts: Shirish Nadkarni and Gowri Shankar
Producers: Minee Verma and Eesha Jain
YouTube Channel: https://www.youtube.com/@fromstartuptoexitpodcast

SPEAKER_00:

Welcome to the From Startup to Exit podcast, where we bring you world-class entrepreneurs and VCs to share their hard-won success stories and secrets. This podcast is brought to you by TiE Seattle. TiE is a global nonprofit that focuses on fostering entrepreneurship. TiE Seattle offers a range of programs including the GoVertical Startup Creation Weekend, the TiE Entrepreneur Institute, and the TiE Seattle Angel Network. We encourage you to become a TiE member so you can gain access to these great programs. To become a member, please visit www.seattle.tie.org.

SPEAKER_03:

Hi everybody. Welcome to another episode of our podcast, From Startup to Exit. My name is Gowri Shankar. I'm on the board of TiE Seattle, which produces this podcast. My co-host, Shirish Nadkarni, who also serves on the board, is a serial author with two books already published, which you can get wherever books are sold. The first book is called From Startup to Exit, and we shamelessly borrowed that title to name our podcast. The second one, Winner Takes All, is all about marketplaces. We both have been doing this for a while and have talked to a great many folks. Today's guest is yet another Seattle gem, and I think all of you will enjoy learning his views about startups and how to evaluate your ideas. Hopefully all of you will spread the word, subscribe, and follow us wherever podcasts are published. We are available on all platforms. With that, let me hand it over to Shirish to kick off the conversation. Shirish?

SPEAKER_04:

Yeah, thank you, Gowri. So, very pleased to welcome Jay Bartot to our podcast. Jay is a really good friend and a partner at Madrona Venture Labs, which is one of the leading incubators here in Seattle. He's a serial entrepreneur who has started several successful companies, and he's now spending a lot of time at MVL playing around with LLMs, trying out different ideas for startups. Today's discussion will focus first on MVL and their model, how entrepreneurs can approach them and work with them to do their startup. And then we'd like to learn more about Jay's experience with LLMs and what he would recommend to startup founders. So welcome, Jay.

SPEAKER_01:

Thanks, guys. Uh great to be here and talking with you.

SPEAKER_04:

Great. So before we get started talking about MVL, maybe, Jay, you can talk a little bit more about your background and how you got to MVL.

SPEAKER_01:

Yeah, happy to. So I started out my professional career as a software engineer and moved from the Midwest to the Pacific Northwest, and pretty soon into my career I sort of fell into working with startups, doing startups. This was around the mid-90s, with the internet taking off. I found myself moving around from company to company a bit, but gravitating mostly toward smaller companies. In late 1996, I think, I joined a startup called Netbot, which was a spin-out from the University of Washington Computer Science Department; we were commercializing some technology that faculty were developing there. It was the first venture-backed startup I had worked for, and there I got introduced to startups, the internet, and ultimately machine learning, which really caught my eye. I got bitten by the machine learning bug. I'm not even sure we called it machine learning back then; we were probably calling it data mining or predictive analytics or what have you. But I could really see the obvious uses for this kind of technology, so I dove in. Throughout the rest of my career, even before joining MVL, I co-founded a number of venture-backed startups whose products were ultimately very data-centric and machine-learning-centric. So I like to joke these days that I thought machine learning was really big 20 years ago. I must have been in my own bubble, drinking my own Kool-Aid, but it's interesting how it's bubbling over the top now, and I'm delighted by all the advancements that have been happening.
But, you know, I guess I didn't intend to ride this wave, but I've been riding it all along and really enjoying it.

SPEAKER_04:

Great, thanks for the introduction. So let's talk about MVL. You know, as we mentioned, it's one of the leading incubators here in Seattle. I think originally they started incubating their own ideas, but now they work with external startup founders as well. So perhaps, Jay, you can talk about your model. How do you work with startup founders? What kind of startups are you looking for?

SPEAKER_01:

Sure. So I joined Madrona Venture Labs in 2016. I joined my partner Mike Fridgen, whom I had worked with at a previous startup, Farecast. He and I were old friends, both entrepreneurs. Over time we slowly but surely built up and developed our model, and really found product-market fit as a studio: were we offering the right services, for the right amount of equity, for entrepreneurs and their needs here in the Pacific Northwest and in Seattle? That took some time and iteration, but over time we developed a model that has a little bit of variance to it. We have a spectrum of models that we work with, depending on what entrepreneurs need, so we try to meet entrepreneurs where they are. We meet some solo entrepreneurs who bring us an idea, and they have good founder-market fit for that idea. We help them with all aspects of starting their businesses: figuring out the technology angle and the product angle, and, very importantly for our model, doing our homework on customer and market research. There's one thing I've seen throughout my career: I've made these mistakes, I've learned from these mistakes, and somehow entrepreneurs sometimes make the same mistakes again. It's really important to do your homework and figure out whether there's a there there for your idea, and to do that up front. You don't want to find out later, after you've raised some money and hired a team and so forth, that the market's not big enough or the customer problem wasn't significant enough or what have you. So we really focus on those kinds of things early on, with a founder or sometimes with our own ideas.
Ultimately, sometimes the ideas are ours, sometimes the ideas are those of the entrepreneurs we work with. But at the end of the day, we need an entrepreneur with strong founder-market fit, a great team, and a thesis, and all of that helps them get to a seed venture round of financing, so they can spin out of the nest, flap their wings wildly, go off on their own, and hopefully become great companies.

SPEAKER_04:

Got it. So, you know, accelerators typically will take six, seven percent equity, and then they'll also add a note on top of that, like Y Combinator or Techstars. What is your model? How much funding do you provide, if any, and how much equity do you take?

SPEAKER_01:

Yeah, I mean, it's really a range. I talked mostly about what I would consider our core model, where we have a team of eight or so operators and experienced entrepreneurs who work at the lab, and we all work on ideas together. The core model is really where we provide the most services, the most sweat work, if you will. Someone like myself, a technologist, is doing not only technology vision but also rolling up my sleeves, writing code, and helping to develop the MVP. We have folks on the business side who help with market research, customer discovery, business model, and business plan, all of those things. That's the hands-on model where we typically take the most equity, which can be maybe slightly north of 20%. We also occasionally encounter an entrepreneur who's a bit farther along. They have an idea; sometimes they have a proof of concept or some nascent version of a product. They need less of our help and more of our guidance. In those cases, we might take as little as five or six percent. We do mostly the core model stuff, but depending on the time period and the fund we're in, which is aligned with Madrona Venture Group's funds, we work with entrepreneurs across that spectrum. We also always put some kind of money in; we refer to this as first money in, pre-seed, if you will. Typically $250K, sometimes a little bit more, for which we get preferred equity on standard terms. And that's important value for the entrepreneur, I think.
We learned this over time: you've got to provide expertise and sweat-equity services, but you also need to provide a little cash so that the entrepreneur can start hiring a team and get their company going.

SPEAKER_04:

And in that case, I assume you negotiate the valuation at which you invest the funds.

SPEAKER_02:

Yeah.

SPEAKER_01:

Right.

SPEAKER_04:

Yeah.

SPEAKER_01:

You know, another part of our due diligence is feeling out the investor landscape for a particular founder and idea. We obviously have a very close relationship with Madrona Venture Group, but we also have a pretty extended network of investors in the Pacific Northwest, the Bay Area, and the East Coast. Depending on what the idea is and what space it resides in, we start talking to investors early and start feeling things out: what do you think of this space? What do you think of this idea? What do you think of this entrepreneur? Again, we don't want any surprises later on. We really want to warm things up along a number of dimensions, so that when the idea is ready, everyone gets aligned and it's all ready to go.

SPEAKER_04:

So typically, as a result of that initial feedback, Madrona, I assume, gets interested, and then they will do a seed round of some kind and invest in the company along with you and other investors. That's typically the best outcome.

SPEAKER_01:

That's really what we strive for; that's the best outcome. Madrona doesn't always invest in our deals. Sometimes our deals are outside of their thesis, or for whatever reason we couldn't find alignment, in which case we'll look for another lead and other investors to come in on the syndicate.

SPEAKER_04:

Right, got it. Okay. So one of the things you mentioned that you do is market validation, and I understand it's a three-step process that you follow. Can you talk a little bit more about what exactly that means?

SPEAKER_01:

Well, you know, we have learned over time to do things in phases, with milestones. Typically, what we really love is when a founder comes to us with strong founder-market fit, meaning they know the problem they're trying to solve. They lived it; they worked in an industry where they experienced the problem, and, importantly, they experienced the problem alongside other people who are still in the industry, whom they can leverage as design partners and pilot customers and things like that. So hopefully they come to us with evidence around a problem and a need. But we want to validate as well. Again, we're entrepreneurs; we know the confirmation bias that entrepreneurs can experience, and we know how hard it is to really listen carefully when you're getting feedback from a potential future customer. I can recall some meetings I was in with entrepreneurs where we were doing a customer call, and the entrepreneur pitched the idea to the customer. We got some feedback, and after the meeting was done, I re-synced with the entrepreneur, and they said, wasn't that great? They loved it. And I was like, were we in the same meeting? So it's hard. When you're passionate and you're a builder and you're creative, you want to hear the things you want to hear. We can really help by making sure there's some objectivity. The point of all this is that we want to go through these stages, sometimes 30 days, sometimes 60 or 90 days, with milestones: are we hitting them? Are we learning what we need to learn? Are we getting the evidence we need to get? And do we ultimately see a path to seed funding?
And for some ideas we don't get there. We do the work, and it could be that the problem isn't big enough, it could be that the market is not big enough, or it could be that we think the entrepreneur, for whatever reason, is maybe not the right person to move forward with on a particular idea. So we want to be very careful about making sure we have the evidence when we get to these milestones, and cut bait if we need to. Again, as I alluded to earlier, the sooner you find out that something's not going to pan out, the sooner everybody can get on to focusing on other things, rather than finding out later that something's inherently flawed.

SPEAKER_04:

When you do the market research validation, do you need to see a certain percentage of customers coming back and saying, hey, this is great, we'd love to be a beta customer of yours? Like, do you need to see seven out of ten enterprises saying they want to get onto a beta? Do you want to see that kind of feedback, ideally?

SPEAKER_01:

I mean, it depends. We do mostly enterprise, by the way; enterprise AI is really Madrona's focus and our focus, though sometimes that drifts a little into B2B2C or some variant of that. But ideally we're finding such strong signal that the folks we're talking to can agree to be design partners early on. That's really the term we're using as of late: a company whose problem is big enough, and which our solution solves, such that the company is willing to at least dedicate their time. Perhaps they're not paying yet; if they're paying, that's great. But at least they dedicate their time to help develop the product in a way that ultimately solves the problem. I've learned that you can sometimes establish that there is indeed a big pain and a big problem, but that's not the bottom of the bucket. There are other things that can get in your way, even when there is a big pain, and that can be challenging to realize until you're out there with a product in the market. But we're learning a lot of these lessons across a lot of companies, and this is the kind of knowledge we can share with our entrepreneurs so that hopefully we can keep them from stumbling.

SPEAKER_04:

Now, do you also help the founders with recruiting, building the initial team? Is that one of the functions you provide?

SPEAKER_01:

Absolutely. We have a very strong recruiting team and a great head of talent, Sam Dor. Again, this is something we try to stack the deck with early. We think we have an idea, we think we have a founder, we think we have a space. Or even sometimes before we have a founder, if it's our idea, let's spec out what an ideal founder looks like. Or if a founder comes to us, then typically on the business side, sometimes on the technical side, sooner rather than later we start thinking about who would be good for this idea, how we track them down, and how we get them interested early. So talent is a huge part of this.

SPEAKER_04:

Yeah, makes sense. Now let's talk about one of the companies that you worked on, Zeitworks. You actually took time off to become the CEO of that company. You were excited about the opportunity, but ultimately you had to find a soft landing for it; it didn't quite work out as you had expected. Can you talk about some of the lessons? Clearly you'd done market validation, otherwise you would not have jumped on board as CEO. So what went wrong? Did you miss some signals, or were there some other problems?

SPEAKER_01:

Yeah, I think we definitely missed some signals. Interestingly, Zeitworks was a project I worked on inside of Madrona Venture Labs that I did feel strongly about, but the company actually spun out of the lab, raised money from Madrona Venture Group and a few others, and then operated independently for a year before running into some problems, some technology challenges in particular. It was a hard problem, and it certainly required R&D, which I've done a lot of in my career, where the first year or so of the startup is research and development while you're also starting to put together a viable product. This is the classic build-and-fly-the-plane-at-the-same-time situation. So, a challenging problem, and I was asked to come in and help with the technology problem, which was appearing to the team at the time as possibly intractable. I didn't think it was intractable; I thought it was doable, and I had thought that all along when working on it at the lab. Ultimately I stepped in as CEO, we put a little team together and started working on the problem, and we ultimately found a technology solution. So it wasn't, in the end, a technology problem. But there was a much harder go-to-market problem. Enterprise product, so an enterprise sale. Interestingly, what I found was that this was a big dose of: yes, there's the problem, but there are more steps to go through to get adoption and a sale. In the case of Zeitworks, the product was invasive in the sense that you deployed it into a bespoke, secure environment. We had desktop agents that would run on the machines of people doing business process work, and there were a number of points of friction there.
As soon as we would get introduced to an operations person who was eager for the data, to be able to optimize their team's performance or come in with an RPA solution, it wasn't long before we were in front of the head of IT and the CISO. Being a young, unproven startup, we hit a lot of friction there. It's not that it's insurmountable friction, but it's friction that slows the process down, sometimes dramatically. And for a startup, time is everything. You only have a limited runway, and you've got to get through certain hoops in order to get to the next stage. So the deceleration of the sales cycle was really difficult. The other problem we hit, which was more formidable than I thought it would be, was the concept of surveillance. I think as a society we're in kind of a weird place in terms of how we view technology that observes how we work and live. We're all on the social networks that are out there; we know that our data and behaviors are being recorded and used for all kinds of different purposes, hopefully mostly to help us find the goods and services we need. But I think we're at a point where there's a lot of sensitivity around people feeling like their management is surveilling them. And so that was a pretty big point of friction as well. For what it's worth, I do believe to this day that employee behavior data is really the last mile of powerful data that can help organizations optimize their operations. But you have to be really careful about it, and how you deliver and market your product has to be very sensitive.
We just saw this with Microsoft, for example, with their Recall product.

SPEAKER_02:

Yeah, yeah, yeah.

SPEAKER_01:

You know, again, the hard technology problem: employee behavior data on computers is very voluminous and noisy, and finding patterns in it is challenging, but with the advances in AI, we can kind of find those patterns now. Even though that can be a formidable challenge, that's ultimately not the problem.

SPEAKER_02:

Right, right, yeah.

SPEAKER_04:

Yeah, I just wanted to understand that. Even despite the market validation, there are still things that you don't fully understand until you actually go and start the business and start selling to customers. So that's good to understand.

SPEAKER_01:

Yeah. Well, as entrepreneurs, we're all influenced deeply by our experiences. And I can tell you that I'm skittish around enterprise ideas that are invasive products deployed into bespoke, secure environments. There are lots of problems in those environments, so it makes sense that entrepreneurs who are familiar with those problems try to build there, but the go-to-market can be really hard, and it can be just very, very challenging.

SPEAKER_04:

One final question about MVL before we go on to your experience with LLMs. If there's a founder who has found founder-market fit or has an idea, how should they best approach MVL? Is it through some common connection, or can they apply on the website? How should they best approach it?

SPEAKER_01:

Sure, yeah, we have a number of ways to interact with us. One of the great programs we have is called Leap. We have Leap events, really just networking events, one every other month, and typically 50, 60, 70 people show up. We all go to them, and we all love working with and talking with entrepreneurs. So that's a great way not only to talk with us, but also to talk with other entrepreneurs in our network. Plus we have a variety of programs on the website where you can submit a form if you have an idea or what have you; that's a good way too. I like going to local AI events; AI Tinkerers, Joe Heitzeberg's monthly AI meetup, is an event I always attend faithfully, and I meet a lot of people there who have ideas brewing in their heads. I like to think that we're very approachable, so I'm always telling people: if you want to grab a coffee, just reach out. Being an entrepreneur myself, I know how vulnerable people can feel when they have an idea. It's their idea, it's their creativity, and they want to share it with someone, but also get constructive feedback. And I think that's really important.

SPEAKER_04:

Excellent. Excellent. Thank you, Jay, for talking about MVL. Again, I'd encourage any entrepreneur thinking about doing a startup to really consider an incubator like Madrona Venture Labs. With that, let me turn it over to Gowri to explore the LLM experience that Jay has had at MVL.

SPEAKER_03:

Great. Jay, that was a fascinating story on Zeitworks. Even though you had all this experience, a long tenure as a serial entrepreneur, when you actually roll into go-to-market, there are always nuances that change the course of a company. Most of the time, I find entrepreneurs have a tough time accepting that the headwinds are too much for them to stay the course. I think your experience probably helped that team a lot more than they might have realized.

SPEAKER_01:

Yeah, well, I hope so. It's always hard and painful when things don't go as we hope and plan. We also have to be responsible to our investors and do the right thing for them, and for the employees, of course, to hopefully land in a way that's best for them as well. It's hard, but that's what they say about startups, right? They're hard. Anybody who's been through this as much as I have is going to have some wins and some losses, and both build experience and character.

SPEAKER_03:

Great. So let's dive first into the LLMs. Given your history with machine learning, and where your journey started in Seattle, are the LLMs what they're shaping up to be? Were you not that surprised, or were you extremely surprised? Did the compute just come along and catch up with the models? What's your initial reaction?

SPEAKER_01:

Well, it's a little bit hard to say, because I followed machine learning and used state-of-the-art machine learning in all of my startups along the way, especially NLP technologies. I did a couple of startups that relied heavily on state-of-the-art NLP. But as we know, starting around 2012, when convolutional neural nets radically changed the efficacy of computer vision and computer vision applications, it was pretty astonishing to watch. What I think is kind of funny is that I remember reading pretty arcane statistical and machine learning textbooks in the early 2000s that I really wasn't educated to read. I didn't have a master's or a PhD in computer science or statistics, but I read pretty aggressively and wrote some code where I could. And there was one kind of machine learning technology I thought was really interesting: these neural networks. They looked really interesting; I liked the fact that they were biologically inspired, and it just seemed really cool. So I asked some people who were much more educated in these kinds of math and technologies: what do you think of these neural nets? Ah, those things don't work. They're toys. Stay away from them. At the time, support vector machines and naive Bayes classifiers and the like were more popular. So it really did strike me when the compute started to be there and the data started to be there, and these neural networks popped up again.
And I know there are lots of experienced data scientists out there who would still say that random forests or linear regression or logistic regression, especially in business use cases where explainability is important, are still very viable techniques and methodologies. But neural networks have really become the jolly green giant in this field. So as I watched computer vision get more and more powerful, I and many others were wondering: when will natural language processing have its deep learning moment? For a few years there, maybe 2016, '17, maybe a little into '18, it was like, well, we've got these word embeddings, which were an improvement over previous NLP techniques. But then suddenly there was an inflection point there too, and we started seeing bigger and bigger models, which turned into what we know as large language models today. I just can't tell you how incredibly delighted I am with the progress, both on the language side and the image side, even in the last two or three years. When I returned to Madrona Venture Labs in the late spring of 2023 and really started playing with these things in a big way, experimenting with them, I was having what I was calling my falling-out-of-my-chair moments, where I would try something and just be like, holy cow, that's unbelievable. Especially on natural language understanding. That was always the hard part, right? Unstructured data, especially language, was always really difficult. I built startups around this, and we banged our heads against these problems. Our systems kind of worked; not really.
We certainly couldn't parse and make sense of plain human dialogue; those kinds of things were very challenging. So to suddenly find these problems solved, where the solution was just an API call you could make, was really astonishing to me and very exciting.

SPEAKER_03:

So let's unpack a few of the many things you said there. You've played, I assume, with all the LLMs at this point, right? A lot of them? A lot of them, okay. So at the moment at least, they're all in some fashion managed, controlled, operated by very big corporations, right? So resources are not a constraint for any of them. It seems like there's a lead that OpenAI's ChatGPT enjoys in perception over the others. Does your experience match that perception, or is it just perception? Do they really have an early-mover advantage in a resource-rich startup landscape for LLMs?

SPEAKER_01:

Well, I think that they have an advantage now, but it appears to be shrinking. We regularly see open source models being released, not just new models, but successive versions of existing open source models like Llama 3 and Mixtral and things like that. And they're getting better and better, and they're getting better quickly. The big tech companies are building giant models with lots and lots of data; I think the open source community is being more scrappy and figuring out how to take better data, smaller amounts of better data, to train these so-called small language models that are maybe good at specific tasks and less generalizable than something like GPT-4. So I think we're seeing the space start to partition. And depending on what you're building, what kind of application in particular, there's an increasing amount to choose from, and I think we'll start to see a lot of specialization. I was rooting for the open source community when all this got going. I've been an open source guy for a long time; I was on Linux in the '90s and built a lot of my startups on Java and Linux and so forth. And sure enough, it's been amazing. If anything, the problem now is that there's so much technology, so many models being released, that no one person could ever hope to keep track of it all. Last time I checked, a few months ago, there were north of 600,000 models on Hugging Face. It's really an avalanche. And I can tell you that trying to keep up with the technology advances has been really fun, but sometimes it's overwhelming.

SPEAKER_03:

So is there a relationship with cost? Do you see cost dramatically going down as the number of models increases and capability also increases, especially with open source getting a big boost from Llama?

SPEAKER_01:

Yeah, I do think cost is important, but I don't think it's the biggest factor right now, because I think an overarching concept around Gen AI is literacy and adoption. Even though many of us in the tech sphere are kind of in our own bubble around this, and we think the whole world is thinking about Gen AI and using Gen AI for everything, I think we actually have a long way to go on literacy and adoption, both on the consumer side and within enterprises. So I don't think it's so much cost. I think inference latency is more important, or it has been for me in the applications I'm building. The faster the models are, the more often I can use them. So, for example, if I have a real-time chatbot or a multi-agent chatbot, which is a particular project I've been working on for the last nine months or so, the architecture is really dependent on the latency of the models. One of the things everybody's dealing with is unpredictable output from a model. I think one of the most challenging things about building production-grade apps right now is that the model is completely delightful 85% of the time; that last 15% is really hard and frustrating. I call it cat herding. And so if the models are fast enough, there are times when you can have a second model judge the output of the first model and help beat down that last 10 or 15 percent. But that can only happen if the models are fast enough. I'll give you a quick example. I mentioned this multi-agent application I've been working on. It's in the travel space, and it's basically a virtual travel agency. So I have a flight agent and a hotel agent, I'll have a rental car agent at some point, and a few other auxiliary agents. But importantly, I have a supervisor agent, a manager that manages these agents.
And so in theory, the supervisor should be able to direct every message that comes in from the user to the appropriate agent, whether you want to talk about hotels or flights or bookings or what have you. The problem is that that architecture is great, but it was too slow. The latency the supervisor added made the product a lot less friendly and usable. So we started exploring other ways to approach the problem and started going down a sort of peer-to-peer route, where the last agent who handled a message would get the next message, and if it wasn't appropriate for them, they'd move it to an adjacent agent, and that agent would look at it, and so forth. It worked better, but it was complicated. I was kind of frustrated, and I went on vacation. And while I was on vacation, I got an email from OpenAI saying, we've released GPT-4o, and by the way, it's twice as fast and half the cost. And so suddenly the supervisor pattern was viable. So I think these improvements, particularly around latency now, but cost will come into it down the line as demand and adoption of these products scale, can really affect how you build these things and what you can build.
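The supervisor pattern Jay describes can be sketched roughly as follows. This is an illustrative stub, not his actual system: the agent names and the routing logic are assumptions, and in a real multi-agent app the supervisor's routing decision would itself be an LLM call, which is exactly why model latency dominates the choice of architecture.

```python
# Sketch of the supervisor (router) pattern for a multi-agent travel assistant.
# A keyword check stands in for the LLM routing call a real supervisor would make.

def flight_agent(message: str) -> str:
    return f"[flights] handling: {message}"

def hotel_agent(message: str) -> str:
    return f"[hotels] handling: {message}"

AGENTS = {"flight": flight_agent, "hotel": hotel_agent}

def supervisor(message: str) -> str:
    """Direct each incoming user message to the appropriate agent."""
    text = message.lower()
    if "flight" in text or "fly" in text:
        return AGENTS["flight"](message)
    if "hotel" in text or "stay" in text:
        return AGENTS["hotel"](message)
    # Fall back to asking the user rather than guessing.
    return "[supervisor] Could you clarify whether you mean flights or hotels?"

print(supervisor("Find me a flight to Lisbon"))
print(supervisor("I need a hotel near the old town"))
```

The peer-to-peer variant Jay mentions removes the central router: each agent inspects a message itself and hands it to a neighbor if it can't help, which avoids the supervisor's extra round trip at the cost of more complex handoff logic.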

SPEAKER_03:

So, two parts to the next question, jumping off from what you just said. First part: for entrepreneurs out there coming up with ideas for building on Gen AI, we've heard a lot of people say, hey, they're all just wrappers around LLMs, and eventually OpenAI will do it. Quite possible. It could also be that there is still a real enterprise problem being solved, except that Gen AI lets you solve it more elegantly. Which of those scenarios do you hear more at MVL: hey, I can do this, okay, but what are you really doing? Versus, no, this is a problem, we're going to solve it, and this is the way we're going to solve it using LLMs.

SPEAKER_01:

Well, I think from our perspective, we have a lens toward domain insight and domain expertise. Take the travel idea, for example. Yes, we're using LLMs to solve problems in travel booking and searching and all these things, and they're delightful and cool. But as I've lectured a lot of young entrepreneurs over the years about machine learning: you may think you're working for a machine learning startup, but you're actually working for a data startup. Because if you have no proprietary data, then you probably don't have much of a moat. If you have no domain expertise, that's often the source of a moat as well. So I think we're looking for ideas where domain expertise and access to data, maybe proprietary or quasi-proprietary data, come together. And I also think user experience is going to become more important. The combination of those things is what's going to make certain apps really stand out from other apps. Entrepreneurs for years have been talking about user experience and how they employ an iterative process and are always optimizing, and I'm not sure a lot of that was really true. I think user interface and experience kind of fell by the wayside a lot of the time. We can all think of products we use on a regular basis that are kind of crappy and not well designed. So I think with Gen AI, the delightfulness and ease of use of the experience are going to be more important than ever. I look forward to seeing startups and large companies alike spending more time and resources there.

SPEAKER_03:

So let's take the travel example and tackle the moat issue that Shirish and I have talked a lot about, right? The first question is: do you really have a moat when everybody can use the same LLM, not only you? And then you have to assume that Expedia and, say, Booking.com are throwing vast amounts of resources at defending their legacy business, whatever that may be. And here you are, a startup trying to convince an investor: hey, I have an idea, it works on this LLM, and this will be the best travel agency experience the world has seen. Which of those is defensible, and how do you evaluate ideas, using that example you gave us?

SPEAKER_01:

Yeah, these are great questions, because we've all been wondering about defensibility and LLMs. We got this great new capability, but you have it, and so does Shirish. We all have it. So it's definitely something we're all thinking about. Again, I think it's really about a cocktail of access to data, relationships, and domain knowledge. It hasn't been announced yet, but we have a really great CEO for this travel idea who comes from the travel industry. I know of a couple of other travel startups out there, and it's young kids who are technologists. Getting your hands on travel data is just not something you can pick up on a street corner. So having those relationships, knowing those people, is really important, and I think our CEO for this project really gives us an advantage. On the whole question of won't the big tech companies just do this: certainly, I think most entrepreneurs are probably most wary of the Microsofts and Googles and Metas and so forth. But as you start to drift out of that space into some of these verticals, I think startups have the advantage they've always had, which is being able to move really quickly and be innovative, to take a team of six people and build in six months what it would have taken a much larger company, I won't name any names, years to build and years to deploy. I think the paradigm shift with Gen AI is big enough that a lot of larger companies are going to be wary of disrupting something that they have that works. And I think they'll be keeping an eye on what their young, spry competition is doing.
And most likely, as we've seen with these waves before, there'll be consolidation later on. Or who knows, maybe some little travel startup will rise up and become the next Expedia.

SPEAKER_03:

The good news here, Jay, is that this particular technology wave has been democratized so quickly. Essentially every human on the planet has access to it, right? In theory, and could write great poems and screenplays and figure out what flights to take. All of them have some level of capability. So now the question is, as you go into these verticals, like you're going into travel, it looks like the same principles of a successful startup still apply. You still need a great operator, a great idea that can be validated with market fit, go-to-market, etc. But the enterprises that also have access to this and could innovate, are they going to get pressure from their own customers saying, hey, either you get on board, or I have to find a startup or somebody else who will do it? Because I cannot imagine any boardroom of any size, small to big, that is not discussing its Gen AI strategy. I bet every board meeting has a Gen AI slide now saying, hey, what's our Gen AI strategy? So as you see this verticalization of LLMs into small language models and applications, you do see that the basic principles of how a startup should execute are not changing. But you're saying speed to market is still extremely important, even for OpenAI or your next travel startup.

SPEAKER_01:

Yeah, absolutely. I'm fascinated by where enterprises are right now with regard to Gen AI, because I think you're absolutely right. I know that boards, and the street in some cases with public companies, are pressing CEOs, saying, what are you doing? What are you doing? And so I've been having a lot of conversations, paying attention to the signal as we do customer discovery and research across a number of ideas we're working on. Clearly, enterprises are at a point now where they need to experiment with Gen AI, not just in their product lines but also internally, for operational efficiency. Arguably, that's where a lot of companies will get their biggest bang for the buck. And I can see a lot of enterprises struggling with that. Some enterprises have an experimental culture, some don't. The ones that don't have an experimental culture, I think, are probably going to become really vulnerable. The ones that do are going to need to apply more resources: take some of their folks, some of their engineers, some of their business people, and give them the time and freedom to think about, how can we do this more efficiently? Gee, I saw somebody able to chat with a PDF document; how can we chat with our PDF documents? How can I have an agent answer my email for me? And so on. And this is where a lot of the anxiety comes in too, because people are worried about job loss and displacement. And going back to what I was saying earlier about Zeitworks, the other thing that's really come onto my radar, that I've become sensitive to, is organizations' fear of change. There's a lot of change resistance in organizations.
And when you offer a product or data or insight into something that is fruitful, especially to the bottom line, it doesn't mean it will necessarily be adopted, because it may mean change. It may mean somebody has to rethink their organizational operation, and they're too busy just keeping the trains running on time. So I'm kind of fascinated by this whole question of change and change resistance, because there's a lot of change coming, I think, and I think the organizations that really embrace change and embrace experimentation will be the big winners.
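The "chat with our PDF documents" experiments Jay mentions typically boil down to retrieval-augmented generation: split the documents into chunks, find the chunks most relevant to the question, and hand those to the model as context. The sketch below shows only the retrieval step, with deliberately naive word-overlap scoring standing in for embedding search; the document text and chunk size are illustrative.

```python
# Naive retrieval step behind a "chat with your documents" feature.
# Real systems score chunks with embeddings, but the overall shape is the same.

def chunk(text: str, size: int = 40) -> list[str]:
    """Split text into fixed-size word windows."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def best_chunk(question: str, chunks: list[str]) -> str:
    """Pick the chunk sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(chunks, key=lambda c: len(q_words & set(c.lower().split())))

doc = ("Refunds are issued within 14 days of cancellation. "
       "Bookings can be changed up to 48 hours before departure. "
       "Loyalty points expire after two years of inactivity.")
context = best_chunk("When do loyalty points expire?", chunk(doc, size=8))
prompt = f"Answer using only this context:\n{context}\n\nQ: When do loyalty points expire?"
print(prompt)
```

The retrieved chunk, not the whole document, is what goes into the LLM prompt, which keeps context windows small and answers grounded.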

SPEAKER_03:

Sure. So you alluded earlier to, hey, 85% of the problem is solved, but the last 15% is tough. If you extend that same percentage mix, it seems like human assist will make people more productive. But the question then becomes one of cost, right? Do I really need to pay somebody this much to solve this little of a problem? And it's going to be a cultural conversation; depending on the enterprise or the company, they will have different conversations. Because if you're going to say that all the domain knowledge has now shifted to a model, to the next Gen AI, that may just not be quite true yet. It could be in a few years, but not quite yet. The question then is, as you rightly said, change: are you ready to embrace change? And I think that points to leadership more than to employees adopting anything. It's a question of, hey, what are we trying to inculcate? What's working right now?

SPEAKER_01:

Yeah. You know, interestingly, I can remember working on machine learning solutions years ago where my board and my investors were asking, well, how well does this work? And I'd say, it works great. Well, most of the time it works great. Sometimes we need to have a human look in on this. This was before we had terms like human-in-the-loop. And sometimes there was a scowl: oh, well, that's not scalable if you have humans in the loop. Interestingly, today there seems to be a new acceptance and tolerance of humans in the loop. It's almost comforting, I think, to a lot of people that the technology isn't so perfect that you wouldn't still want a person overseeing it, someone with the ultimate domain knowledge and experience influencing what would be a largely automated system or an AI system. So it's sort of interesting that culturally we went from, this thing should automate everything, to, well, maybe not everything. Let's keep some humans around just in case.
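The human-in-the-loop acceptance Jay describes is often implemented as a simple confidence gate: outputs the system is sure about ship automatically, and the rest land in a review queue for a person with domain knowledge. This is a generic sketch, not anyone's actual pipeline; the 0.85 threshold and the confidence scores are illustrative, echoing the "delightful 85% of the time" split mentioned earlier.

```python
# Human-in-the-loop gate: auto-approve high-confidence model outputs and
# queue the rest for human review. Threshold and scores are illustrative.

from dataclasses import dataclass

@dataclass
class ModelOutput:
    text: str
    confidence: float  # assumed to come from the model or a judge model

def triage(outputs: list[ModelOutput], threshold: float = 0.85):
    """Split outputs into auto-approved and human-review lists."""
    auto, review = [], []
    for out in outputs:
        (auto if out.confidence >= threshold else review).append(out)
    return auto, review

outputs = [
    ModelOutput("Itinerary confirmed for 3 nights in Kyoto.", 0.97),
    ModelOutput("Refund policy: see attached terms(?)", 0.62),
]
auto, review = triage(outputs)
print(f"{len(auto)} auto-approved, {len(review)} sent to human review")
```

The same gate composes with the judge-model idea from earlier in the conversation: a second model can supply the confidence score when the first model doesn't expose one.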

SPEAKER_03:

I think human cognitive skills are way ahead of any LLM at the moment. And at least in our lifetimes, the three of us here, I think there will be humans in the loop for many, many decades to come. It's not going to get eliminated. I could probably write a more elegant email today than I wrote maybe five years ago, but I'm not so sure that's the top-of-the-line problem you have to go after and solve. It's a question of, you know, recently when the CrowdStrike thing happened and the world was melting down, it seems like some companies came back faster while others suffered very long consequences; Delta Air Lines was down for many days, for example. And those are situations where I think a robust model could have helped their recovery, though they still couldn't have prevented CrowdStrike; that was totally out of their control. So the recovery is where it keeps coming back to. It's great to see that MVL is fostering so many Seattle entrepreneurs, and our listeners are all entrepreneurs or entrepreneurs-to-be. I'm so glad, Jay, that you're doing such great service to the community. And it's fascinating to hear how curious you are. The entire time you spoke, I could hear the curiosity in your voice. You're curious every minute: what can I find? Is there something here? And I think that's the most important thing I hope every entrepreneur out there gets out of this conversation: curiosity is what keeps this thing going. Otherwise, it would be very boring and probably very painful.

SPEAKER_01:

Yeah, yeah. Well, more so now than ever, there are so many things to build. The question, more so than ever, and you guys will all resonate with this, is the timing.

SPEAKER_02:

Yeah.

SPEAKER_01:

It's easy for us techie guys to see into the future; we just don't know the depth of the future. Meaning that the technology can now support this no-brainer idea, but the question is, is the public ready for it? Are the enterprises ready for it? And getting that part right, I think, is probably even more tricky now than before, because again, there's so much that's obvious to build now.

SPEAKER_03:

Great. Shirish, back to you. This was fascinating, Jay. We could talk for hours. Maybe we'll have to have you back soon.

SPEAKER_04:

Yeah, I can come back. No, thank you, Jay. This was fascinating. I actually have many more questions. We would definitely love to have you come back and talk about how you're selecting different LLMs and all of that. But a fascinating discussion. Do let us know when you have your virtual travel agent ready. We all tear our hair out trying to plan our journeys, so anything that helps in that regard would be amazingly helpful. Good luck with that, and we'd love to see more startups like it come out of MVL.

SPEAKER_01:

Awesome. Thanks, guys. Great chatting with you. Thank you.

SPEAKER_03:

Thank you for listening to our podcast, From Startup to Exit, brought to you by TiE Seattle. Assisting in production today are Isha Jen and Mini Verba. Please subscribe to our podcast and rate it wherever you listen. Hope you enjoyed it.