15 Minute Founder Podcast

15 Minute Founder E7: Cracking the code of AI integration for your business

Courtland Nicholas Season 1 Episode 7

In this episode of "15 Minute Founder," Alex Levin and Rebecca Greene delve into the rapid advancements and transformative potential of AI technology for businesses. They reflect on their initial skepticism and how recent developments, particularly with tools like ChatGPT, have significantly changed their perspective. The discussion highlights their experiences with AI's capabilities, such as improved customer interactions and voice recognition, and contemplates the future of AI integration in their company. They also emphasize the importance of strategic investment, fine-tuning AI models, and maintaining robust QA and testing protocols to ensure AI systems perform reliably and effectively.


Main Topics Covered

- Initial skepticism about AI's potential in business applications

- Experiences with ChatGPT and its transformative impact

- Advancements in AI capabilities, including voice recognition and memory

- Strategic investment in AI technology

- Importance of fine-tuning AI models for specific tasks

- Challenges of integrating AI into existing business processes

- Necessity of robust QA and testing tools for AI systems

Alex Levin:

Hi, this is Alex Levin, and I'm here with Rebecca Greene for 15 Minute Founder. We thought we'd talk about AI today. It's definitely a popular topic, but this is more than that. As founders three and a half, four years ago, AI technology wasn't in a place where we felt it was going to be critical for our business. Even though we understood we had a lot of data in our business that could be used to build AI agents, to automate what was happening, and to improve experiences for our customers, it wasn't easy enough for us to build on AI, and it wasn't part of the bet early. I'd say today it is. And we're trying to help founders think through how they need to use AI, whether they should use it, and what they're going to do next. So I guess, Rebecca, to start: when did you know that we would start investing more in AI?

Rebecca Greene:

Yeah. I mean, certainly when we started seeing the power of ChatGPT, like everyone else, it got us much more excited about the speed at which AI could get good enough. Even in the early models, when we did some experimentation, it was good enough for fun, personal use cases. But really in the last six months, we've started to become very, very excited and surprised by what it can do. I think the moment for me is when it felt like it would be better than a human in some cases. Now, the threshold has always been: is it accurate enough, or is it as good as a human? And for the first time, I experienced something where I was like, oh, that's actually better than a human. I was having a conversation with ChatGPT in a voice setting, and I mentioned it was my birthday at the beginning of the conversation. We had a conversation about all kinds of other things, and at the end, the AI remembered to say, "Oh, and I hope you have a wonderful birthday" as we closed out the conversation. And that's the kind of moment where I was just like, oh my God, this could become better than a human.

Alex Levin:

Yeah, I remember the day that happened. You were telling me about this thing that it had remembered, and I think for a lot of people, that was it. I'll make fun of Amtrak: I remember calling Amtrak. They were one of the early ones to start using bots on voice. The bot would come on and say, "What do you want?" And I'd say, "Well, I'm interested in X." And instead of understanding anything about what I was asking, I imagine what it was doing was basically running a search, finding the first result, and reading that back. It could have nothing to do with what I was asking. At that point I'm yelling at it, saying, "Representative, representative," and it has no idea what's going on. Then all of a sudden you had me play with our voice agent, built on top of ChatGPT at the time, and the things that struck me were: one, speed of response, which we were able to get much better; two, interruption handling, where I could interrupt our voice agent and it would still work; three, when I asked questions it didn't know the answer to, it could handle that smoothly. And then, to your point, remembering information. If I said it was a nice day, it would talk about that later, or if I gave it some personal information, it would incorporate that into the next answer. So that was the first time for me, even more so than texting back and forth with ChatGPT. When I saw some of the capabilities we had built on the voice side is when we said we definitely need to be investing much more than we are today. If I think about our next steps, we weren't ready to just go in and bet the entire farm.
I don't think we were saying, "Hey, we're going to stop all other development and only do this." Do you look back at it and say we should have invested even more from the beginning? Have we moved fast enough to do more with AI?

Rebecca Greene:

Well, you know, my take is that we never move fast enough to do anything we want to do. But in terms of our choices around resource allocation, I don't think I would have done something different in hindsight. Because the speed at which different models have gotten better at different things has materially changed who we might build on top of, or how we might choose to build on top of multiple models or leverage them. LLM companies are clearly starting to carve out lanes in terms of how deep into the experience they're going to go. They could have literally wiped out a lot of the investment we would have made in the previous year just by leapfrogging it. It's starting to become clearer where they're looking to invest, where they're not, and how companies can add value on top of that. So I don't know. I think a lot of the early stuff we would have invested in would have gone to waste, honestly.

Alex Levin:

Yeah. I mean, I look at it and it is amazing how much it's developed in a year. It would have been nice to have, let's say, one or two people playing with it like a toy, and more or less that was you and me and maybe an engineer. It would have been nice to have one or two more people playing with it, but it wouldn't have been for any ROI. We wouldn't have been building a product for customers yet; it wasn't ready for that. There were too many uncertainties. But that would have been nice. At this point, I agree with you: it does feel a bit clearer what's going to happen. And I think the hard question now is becoming: at what point in the stack do we invest? Because it's clear that OpenAI believes they're going to continue to make not incremental improvements but exponential improvements, and they're going to eat a lot of people's businesses. We've seen a lot of the early apps that were built on top of OpenAI no longer exist, because OpenAI can now just do it, or, as an individual or a company, I can do it on top of OpenAI easily enough. But it does feel like there are going to be some critical functionalities that OpenAI doesn't have. One example we've talked about a lot is testing and QA. In a traditional environment, you have code that goes out, you run it once, you see if it works or not, and it's done. But in an environment where every time you run it, it could give a different answer, you can't run it once; you have to run it hundreds, if not thousands, of times. And then you have to have a new way of assessing whether that was or was not the thing you wanted it to do.
So I think that's a whole area companies are having to figure out how to handle, whether it's a third party or themselves. We think the tuning of these AI agents and models is still going to be something that happens a lot outside of the LLM providers, because there are different ways of tuning: it could be in the prompt, RAG, and so on. So it feels like there still are a lot of opportunities, but you definitely have to be aware of the fact that OpenAI is going to continue to get better, or your business is going to get eaten alive. So on that note, we talk a lot about the areas where we can differentiate beyond just where in the stack we play. Are there areas you think will allow us to have a sustainable advantage in AI and AI agents, or do you think everything's going to become commoditized?
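The testing approach described above, running the same prompt many times and asserting on properties of each response rather than exact text, can be sketched as follows. Everything here is hypothetical for illustration: the stubbed `call_agent` stands in for a real model API call, and the reply strings and pass criterion are invented.

```python
import random

def call_agent(prompt, seed):
    """Stand-in for a real LLM call; non-deterministic by design.
    (Hypothetical stub: a real harness would hit the model's API.)"""
    rng = random.Random(seed)
    replies = [
        "You can reach a representative at any time.",
        "I can help with your reservation right away.",
        "Buy a house from us today!",  # the off-topic failure mode
    ]
    # Weighted choice simulates a model that is usually, not always, on topic.
    return rng.choices(replies, weights=[60, 35, 5])[0]

def passes(reply):
    """Assert on behavior, not exact wording: the reply must stay on topic."""
    return "house" not in reply.lower()

def pass_rate(prompt, runs=1000):
    """Run the same prompt many times and score each response."""
    hits = sum(passes(call_agent(prompt, seed=i)) for i in range(runs))
    return hits / runs

rate = pass_rate("I'd like to change my reservation.")
print(f"pass rate over 1000 runs: {rate:.1%}")
```

The key design point is that `passes` encodes an acceptance criterion, so a single successful run proves nothing; only the aggregate rate over many runs tells you whether the agent behaves acceptably.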

Rebecca Greene:

I don't think AI is any different from any other kind of technology. Initially, when APIs came out, people thought just connecting to those APIs was enough. And then you found out, no, you actually have to have a use case in mind, a set of customers you're solving for, and distribution that captures those customers' imagination and solves a real business problem for them. I think all of that is true here. It's why, personally, I would still want to play closer to the customer, connecting the dots between use cases and what the technology is capable of, because I still think there are massive gaps there. It's kind of like APIs: it's so easy to get something built on top of an LLM right now. But taking it from "that's something" to something customers actually derive value from, can put money against with a business case, and run in production for their customers? It's still very far from that.

Alex Levin:

It does feel that way. The other one I'd like to see play out is this: in theory, all the data we have from real human agents doing calls should allow us to fine-tune these AI agents better than anybody else, because we have so much of this data, and we intentionally set it up so that we have it in good shape. In theory, we should be able to work with each customer on their specific data to make it specifically tailored for them, again because we have this data. But I'm interested to see: will the data really be a differentiator, or is the answer that these LLMs are so good that, basically, you do two calls and they've figured it out, and us having a ton of data is not helpful? Right now, it seems like it's the former, not the latter. I mean, you see OpenAI going and paying Reddit, or paying a newspaper, millions, if not hundreds of millions, of dollars for access to content, because they need a large volume of content to do training. So I suspect the data we have is going to be very helpful. But whether it's enough just to train on general content, or necessary to train on very use-case-specific content, I feel like that's still to be seen.

Rebecca Greene:

Yeah, I think a lot of what we've learned through our testing, and what we read other people concluding, is that agents are just like humans: the more you try to give them in the prompt, the worse they get at any individual piece of that prompt. And there are other examples like that. I actually think we'll find that much more fine-tuned models, with much more specific data and very strict tasks in any given part of the conversation, will end up performing way better. Because the more you load on top, not only does the responsiveness get slower, but the model also gets more confused, just like a human. If you've ever tried to give a human so many different strategies, so many different products, so many different problems to solve, you know how that goes. There's a reason why customer success is separate from sales, even just for reasons like that. I think we're seeing the same thing, honestly, in AI models.
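The point about strict, narrow tasks maps to a common pattern: route each conversation turn to a specialized agent with a tight prompt, rather than loading one prompt with every product and policy. A minimal sketch, where the prompts and the keyword router are purely illustrative:

```python
# Hypothetical sketch: each turn is routed to a narrowly scoped agent
# prompt instead of one mega-prompt that tries to cover everything.
# The prompts and keyword lists below are invented for illustration.

SALES_PROMPT = (
    "You are a sales agent. Only discuss pricing and plan upgrades. "
    "If the customer asks about anything else, hand off."
)
SUPPORT_PROMPT = (
    "You are a support agent. Only discuss troubleshooting and account "
    "issues. If the customer asks about pricing, hand off."
)

def route(user_message: str) -> str:
    """Toy keyword router; a production system might instead use a
    small classifier model to pick the specialized agent."""
    text = user_message.lower()
    if any(word in text for word in ("price", "cost", "upgrade", "plan")):
        return SALES_PROMPT
    return SUPPORT_PROMPT

print(route("How much does the upgrade cost?") is SALES_PROMPT)
print(route("My login keeps failing.") is SUPPORT_PROMPT)
```

Each specialized prompt stays short and strict, which is exactly the property the conversation above attributes to better-performing agents.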

Alex Levin:

Yeah, for sure. So if you're a founder today thinking through how you're going to do something with AI, it sounds like our basic recommendation is: start thinking of it as a toy. Play with it, see what you can do with it. When you get excited about something, though, just because you can build a proof of concept doesn't mean it's ready to go to production. To Rebecca's point, it's going to take somebody, whether it's you or somebody else, real time and effort to turn it into something that's actually useful and practical for a business to use. I'd say the last piece people underappreciate is that you're going to have to have very good QA and testing tools: one, for building it, but two, once it's in production. All these companies out there saying, "Oh, you can use our AI to build some kind of summary or agent or whatever," are missing the critical piece, which is: now that I have it in production, how do I know it's not out there saying "buy a house from this company," even though your business is about something completely different? Yes, you tested it once and you saw it didn't say that, but that doesn't mean it's not happening all the other times. So I think companies are going to have to start integrating very good QA and testing tools on top of any kind of AI they use, to know that it's functioning as they intended, with the tone and results they intend. But with that, any last thoughts, Rebecca?

Rebecca Greene:

Yeah, just one last thought: in the same way, it's not deterministic, just like humans. And that's kind of strange, right? It's both what's good about it, why it can start to fulfill the job of humans, but it's also so foreign from, like...

Alex Levin:

Yes, not software. So you're not going to be able to build it in the same software development cycle. You're going to have to have a completely new way of thinking about it. It's much more like running a contact center than running an engineering program. Cool. Well, with that, we're going to sign off. Thank you very much.
