PLUS Podcast

Build vs. Buy, ROI, and the Future of AI in Insurance

PLUS Season 1 Episode 1

Most AI implementations fail, but why? Cameron MacArthur (AI Insurance), Josh Butler (CompScience), and Garrett Koehn (CRC Group) break down the myths and realities of AI in insurance, sharing what's actually working and why the smartest companies are focused on augmentation, not replacement. 

PLUS Staff: [00:00:00] Thank you for listening to this PLUS podcast. Before we get started, we would like to remind everyone that the information and opinions expressed by our speakers today are their own, and do not necessarily represent the views of their employers, or of PLUS. The contents of these materials may not be relied upon as legal advice.

And with that, let's get started with our conversation. 

Cameron MacArthur: Why don't we start by having everyone introduce themselves, and then we can go through, high level, each one of these questions. Does that sound good? I'm Cameron MacArthur. I'm the CEO of AI Insurance. We make an AI-powered software platform for running MGAs and MGUs. Josh?

Josh Butler: All right. I'm Josh Butler, CEO and founder of CompScience. We're a series B MGA InsureTech focused on preventing workplace injuries using AI. We have a combination of products that include rapid pre-inspection with a mobile app. We have [00:01:00] video analytics using CCTV, as well as LLMs for analyzing claims and triage and getting ahead of potential claims issues. Good to be here.

Garrett Koehn: Great. Garrett Koehn. I'm the Chief Innovation Officer and President of Executive Lines for CRC Group. Although, as many people know, I'm kind of starting to step back from my corporate roles, but I still have those titles, so here we go.

Cameron MacArthur: Great. So today we're going to be talking about AI at a high level.

So, we've got a few questions to go through. And I think it'll be interesting to see if our opinions align or differ on some of these things. There's obviously a pretty good spread here, in that Garrett has the visibility of the large commercial brokerage and also, I think, the fine-tooth comb of working with a bunch of startups at the seed stage through your investment side.

And [00:02:00] then Josh and I are both serving the industry with AI, but also consuming AI products ourselves. We're also making purchases of other AI vendors that are serving us. So yeah, I think there's some good perspective here. Do you guys want me to just kick it off with our first question, or do you want to say something here?

Garrett Koehn: Well, we can dive into it. So, you know, our first topic that we thought we'd talk about is, “what's one insurance workflow that you wouldn't automate or that you struggle to automate?” And so, I'll give a couple thoughts on that. I think certainly for any company, but a large company maybe in particular, if you're going to put anything out at scale, it's got to work well.

And so, when you look at--and AI is somewhat ubiquitous now. It's across a lot of different spaces, a lot of different areas. So, I'll be a little bit narrow in some examples, [00:03:00] but you have to kind of think about “is this something that we use internally or is this something that's going to be client facing?”

And that's--it gives us a lot of variability on our likelihood to use it and how much. Obviously, if it's something that we're using internally, we could try it out. If we feel like it's reliable enough, we can just start using it and go. If it's something that's going to be client facing, then that has another level of scrutiny and a lot of caution.

And so, an example has been real simple: policy form reviews. Policy form reviews could be used for anything such as, you know, issuing a policy, making sure the policy matches the binder, checking the policy once it's received.

That's something that we used to do internally ourselves, and then it went offshore, to India and China predominantly. And now it's something that AI is probably going to swallow. So, that's something that we're using internally. It's pretty easy for us to use for our own purposes, but if it's something that we're going to start using externally and pushing things out to clients, it takes just a higher [00:04:00] level of scrutiny.

I think you could get more esoteric into other issues that are more advanced, you know, underwriting and model drift, things like that. Whether you should let AI, you know, have full underwriting authority or not. We can talk some more about that. I'm actually curious, Josh, how are you using it for underwriting now?

You must be, somewhat. So, yeah, how do you use it?

Josh Butler: Yeah. I think that, you know, there's something where brokers, especially on the large accounts, and we focus largely on accounts bigger than 100k, are really looking for a human to work with. I think insurance itself, you know, as a product, is largely about trust.

And there's something deeply human that we're looking for in these deals that we're making, especially at the large end. Certainly, there are lots of small [00:05:00] policies that are just transactional, just looking for a price. But I think if you completely automate the quote process, especially for mid-market and large accounts, you very quickly just become a price point.

Whereas, you know, there's a whole lot more that we're offering beyond a price for a policy. And the context includes, “how well do we know who we're working with, and how much do we trust the information we're getting from our broker partners?” Right? Trust is a big element of this.

So yeah, we absolutely use a ton of AI in the back end to understand the risks more deeply. Like we have this app we call Preview that allows the insured to go around and do sort of a self-inspection in just a few minutes. And it gives us a ton of intel on that customer that most underwriters wouldn't get directly.

We also just have the experience of knowing how much of an impact we can have on different kinds of injuries. So, looking at the history, we can run [00:06:00] the losses through our AI models and understand how likely we are to be able to drive down losses, and actually provide a slightly better quote than the competition as well.

We also think of it like, you know, every customer is kind of a snowflake, especially in the mid-market and large space. So, we put together custom packages for most of these customers, and it comes down to understanding what their challenges are and what their organization looks like.

Like, how are their policies, their safety policies, return-to-work policies, how are those? And making sure that we're really delivering a comprehensive solution, not just a price.

Cameron MacArthur: And would you ever use AI to put those packages together?

Josh Butler: Absolutely. But we're always going to deliver it with a white-glove human touch.

It just doesn't make sense to take the human out of the picture for a, you know, 100k+ policy. These are [00:07:00] important deals, especially for the customer. It's a big deal for that customer, and we wouldn't want to take that human element out of it.

Cameron MacArthur: And Garrett, you had mentioned the distinction between going out to get your AI versus using it in house, and you're like, “hey, if I'm going to a vendor, there's a higher degree of scrutiny.” Is there any insurance workflow where that changes the process?

Garrett Koehn: I guess I'm focused somewhat more on where the AI is being used rather than who's doing it.

So, I know we're going to talk about build versus buy later. But it's more who the AI is facing. We haven't done a lot of AI stuff ourselves. But if it's something that we're using AI for, the confidence level you need to get it in front of a customer has to be pretty high.

And figuring that out is a little bit, you know, it's new. It's different. It's different than when you outsource something and you're like, “okay, this seems pretty good.” With AI, it's spitting stuff back at you and you kind of have to figure [00:08:00] out, “okay, what's the efficacy?” And you double-check things with humans, and you eventually decide it's good enough or not.

Cameron MacArthur: We're seeing that also, that the areas where we're struggling to automate, or are slower to automate, are external versus internal, and usually external to our customer. So, for example, we automate bill review for defense counsel invoices. We'll read them, automate the data entry, and flag them.

And one of our customers said, “hey, you're already reading the bill, and you're doing the data entry, and then you're flagging everything. Can you just email back the attorney and do the negotiation with their billing department as well and just mark up the PDF and say like, ‘hey, we're not going to pay for this. Here's why.’” 

Because they're like, “really, all I'm doing is copy-pasting your thing and sending it over to them.” And that is really, to us, [00:09:00] exactly the linchpin, right? If our customers have a contract with us, we feel very comfortable giving them a ton of AI. It's when you take the step to say, “okay, now one of our AI agents, or our models, is going to communicate directly with one of your vendors or your customers on your behalf.”

That's where we're like, “hey, you know, we want to form a formal partnership, have you be the first one. We're excited to do those things, but it definitely requires the right partner.”

You know? They need to know what they're getting into before you say, “yeah, we're just going to talk to your attorneys, or talk to your insureds, or talk to your brokers.” Not as you, but it's sort of splitting hairs at that point, right? They get the email back from your system. Even if you say, “I'm an AI agent of the system and not, you know, this MGA or this [00:10:00] MGU,” you're still kind of fitting in that bucket either way.

So, I wouldn't say that we wouldn't automate that, but it's definitely going to be last, any of those things where you're kind of representing as the person. And I also have, I think, sort of a counterpoint on underwriting in general, in the specialty and commercial space.

I sort of think that that's going to be the last to go. I know that there are a lot of AI underwriting things, and I haven't seen them be effective. I think there's so much meat on the bone in the workflow of getting this information and serving the underwriter that, yeah, we're going to see every part of the workflow get deeply AI-enabled long before the actual underwriter is replaced with AI.

So, I [00:11:00] wouldn't really touch it. If someone was like, “hey, can you automate our underwriter away?” I would be like, “no.”

Josh Butler: There's so much other stuff to do. Like you don't want to replace your doctor with an AI. You want to enable your doctor with an AI. Take radiology.

We've been on the cusp of being able to, like, say we're automating radiology for the last like 20-30 years. But just like in medicine, you still need someone, an expert, to sign off on whatever the recommendation was. 

And 95% of the time, or maybe it's 99% of the time today, the automated diagnostic is correct. You still want someone there to review it and sign off on it. And we think about it the same way for underwriting. We want to automate as much of the work as possible, but what we're shipping is intelligence augmentation, not artificial intelligence.

Garrett Koehn: I was thinking about that today with this topic, getting ready for our call. And, you know, it's not that long ago, [00:12:00] before we were talking about first machine learning and then AI, that we were just using algorithms.

And, you know, I saw with some frequency, and I won't say names since this is a PLUS podcast: when a bunch of the new cyber facilities launched, there was a lot of automated underwriting where, as long as the accounts were small, you put the data in, it spits your number out, and you can run with that.

And with a pretty high frequency, not with all of the startups that were doing it, but with some, there were periods of time, especially when they started where, you know, we would have a market declination from 30 markets.

And then we would get a $3,000 quote from the algorithm. And I think about that parallel as we move on to more AI. As we're all using AI, you see these hallucinations; you see it forget stuff that you put in before. Anybody that's done a [00:13:00] big project on AI, where you've put a bunch of information in over time, will know that you have to keep correcting it, get it back on topic, get it back into, you know, what you're working on.

And so, I was thinking about that parallel: if you build the wrong algorithm, you're going to get bad answers. This almost puts that problem on steroids, where it's not just the algorithm, where you tell the AI, you know, these are the things we want to weight from an underwriting standpoint.

These are the things that are important that we want to sift through. But then, you have this added layer of mystery going on behind the scenes. Like, “did it actually review all of the stuff that we wanted it to review? Did it weight it properly? Has it changed the data? Did it forget something?”

It's more complicated. So, I think it is going to be interesting to watch as time goes on: at what point does it start replacing underwriters? And it'll probably be small accounts first again, the same way it went with the algorithms. And some of that's probably starting to happen already.

But, you know, there's a lot of skepticism around [00:14:00] that. And then, of course, there are new MGAs forming up and trying to insure just the AI risk, because that's the big risk we all have with these things. Once you make something customer facing, or Cameron's talking to the lawyers, or I'm pumping out thousands of policies that have supposedly been checked, it changes our liability.

That's a lot different than if we were using it internally. We now have liability with it. And it would be interesting, actually, to watch the insurance side of that: do these new startups that are targeting AI have a place in the world where they're going to be focused on that, doing E&O for those?

Or is it just going to be covered by the carriers we're using currently? 

Cameron MacArthur: So, yeah, to me, I don't think it's any different from your existing E&O, right? There are companies today that do bill review and negotiate with the attorneys, so I don't really see why it becomes a question of who owns the liability.

Josh Butler: I mean, I guess software's always had these really favorable contracts where software makers generally don't take [00:15:00] liability for hardly anything, just “use it as is,” right? It is interesting, though, as software systems start to become safety critical.

For instance, with a self-driving car, you start to have a higher standard for whether, you know, it's okay for it to just crash and bonk in the middle of the highway, right? But for the most part, software is not taking the liability.

And, you know what, it probably shouldn't in these cases. But it does shift things. In this underwriting use case, it can shift the role of the underwriter from a transactional kind of analysis to more of a portfolio analysis, where you're thinking about transactions as a group. You're almost underwriting the algorithm, aren't you?

And then trying to understand like outliers and risk profiles for this set of transactions that the AI has led. 

Cameron MacArthur: Yeah, which is going to be super important. I [00:16:00] mean, Garrett mentioned cyber. We were with, we'll just say, a modern techie company that was getting us our cyber insurance.

And one year I get my renewal and my cost has gone up by an order of magnitude, a whole extra place value, and I was like, “what is this, man? You have more data on us, right? We've been around for longer. If anything, we are clearly a lower risk than before.”

And they're like, “well, cyber costs have gone up.” And I'm like, “I'm well versed in, you know, the cyber markets and where they're tracking. They're not going up by an order of magnitude.” And the underwriter emailed me back and literally said, “well, what do you want to pay? Like, what would you pay?”

I have the email, and I replied, very [00:17:00] frustrated. I was like, “that's an insane question, right? That you're just like, what would you tolerate price wise? Like that's how you're doing your pricing.”

We left and we got insurance, you know, you would say the old-fashioned way, which was, “hey, having a human actually understand our risk.” So that point of, “hey, you need your humans really underwriting your algorithm,” I think was really salient to me. Because that was the huge gap there, right?

Their machine gave them a number. The person who was “doing the underwriting” was like not sophisticated enough to understand what that number was or where it came from, or how to adjust the machine to get an accurate number, or how to price. 

And it's like, “all right, you can't just completely burn the ships and say like, ‘what the robot [00:18:00] says is what we're going to use.’”

And, so yeah, that resonates. 

Josh Butler: Right. Burning the ships reminds me of a recent report release here. We'll talk about that next.

Cameron MacArthur: Okay. Question two. A recently released MIT study says “95% of gen AI implementations fail because tools don't adapt to enterprise data or workflows.” What are some things you've all done to ensure success in the implementation or adoption of AI across your org?

Josh Butler: First of all, I don't think we should just take it at face value that this is a bad thing. I think this is awesome. And what does that mean? That means that companies are trying 20 projects, and has the excitement died down at all, even if 19 are failing? I'd have to also ask, what's the methodology?

We had a hackathon. We invited [00:19:00] underwriters, business development managers, engineers, AI researchers, customer success. We got them all together, bought a bunch of beer, and put them in an Airbnb in Tampa recently, and they built like 20 products overnight. They started with some core data, some use cases, a little bit of context on the problems they need to solve and what we need as a company to grow.

And they built 20 products. And you know what, if 18 of them are bad, it's still a massive success.

Cameron MacArthur: So yeah, look at just R&D spend, right? The companies that spend the most of their revenue on R&D are all the fastest growing. You know? Twitter spends like half their revenue on research and development.

Josh Butler: But the cost of R&D, the [00:20:00] cost of a project like this, has fallen by an order of magnitude or more. And what does that mean? That doesn't mean I'm going to spend 10% of what I used to spend on R&D. I'm going to double what I spend on R&D, because now my ROI on R&D is that much higher if the cost per project is so much lower.

So, I, yeah, I don't think it's a bad thing. 

Garrett Koehn: I think there are fundamental things behind it as well. One we discussed already, so I won't bother repeating it too much, but it's just the whole idea that once something's customer facing, there's this higher level of scrutiny and confidence needed that probably prevents people from implementing some things.

You know, we've certainly got some things that fall into that bucket. I suspect that for AI to work well, not for everything, again, it's ubiquitous and across all sorts of different stuff, you also need functional data, maybe as a baseline, you know, for certain things to [00:21:00] work.

And a lot of companies are still struggling with getting their data into some sort of usable format for whatever it is they're trying to do. And so, when that came out, there was a lot of shock value on X and places like that. But I think there are also just some foundational issues that are going to take some time to work through, you know, like maybe you don't have the data formatted in the right way.

You need to pump the brakes a bit. Maybe you don't quite have the confidence level that you need yet in what's going in front of an actual customer, or in putting yourself in front of a customer using that AI as a tool you're totally confident with. And so, I think there are other things like that that relate to that 95% of implementations.

I would guess if you wordsmith that a bit, it's probably not correct. There probably weren't companies actually implementing a hundred percent of this stuff where 95% of it got shut down. It's probably more like POCs and [00:22:00] stuff like that.

Josh Butler: Yeah. Were these $20 million projects, or are these like 5k projects, you know, done in someone's spare time?

Cameron MacArthur: Yeah, I can answer this question. I read the whole MIT report because everyone's freaking out. 

Garrett Koehn: Well, someone actually read the report.  

Cameron MacArthur: I read the report. Yep. And it's so hard to find, too. You look it up and it's like, “here's a Forbes article that cites the interview the guy had.” I was like, “no.”

So, I found the report. There were a couple big things that stuck out to me. One was that the definition of success was whether it showed impact on revenue, actual ROI, within like six months.

Josh Butler: Okay. 

Cameron MacArthur: So, growing revenue. There are a lot of things that you can do to save yourself a significant amount of time in human cost without growing revenue, which is, I think, why we're seeing this gap.

Because in the same report, they said workers from more than 90% of the companies said that they were using personal AI tools every day for their job. [00:23:00] And so, everyone's like, “we're all using this all day, so we know this works.” So, it's one of those cases where you kind of don't care what the stats say, because we're all using it all day every day.

They said organizations that partnered were twice as successful, so twice as likely to drive revenue, which also makes sense to me, because it's kind of a big organizational implementation problem. So that resonated, because it's sort of like, “hey, if you're trying to make a core systems change, we've done it a hundred times and you've done it zero times, so we're just going to have more reps there.”

And then the other big one was that most of the money was being invested in sales and marketing, while most of the cost savings came from back-office augmentation and automation. So yeah, front-office gains are visible and board friendly.

Josh Butler: And then the real ROI is [00:24:00] all the people in the back office using ChatGPT, and none of that was taken into account in that stat, you're saying?

Because they were only focused on the revenue. 

Cameron MacArthur: The headline remains the same. And I think it's all kind of related, which is: hey, if you're implementing these things, a big part of the failures is because people are like, “well, what looks good to my board? Let me do this advertising thing.”

And so, the study was like, “yeah, a lot of these are failing.” But the impression that I got from the study was that the failures are obviously not because this AI thing isn't working. Otherwise, we wouldn't all be using it every day. It's because organizations are not implementing it well. That's the actual conclusion in the article.

Josh Butler: I love it. That's great. 

Cameron MacArthur: That's at least my conclusion from reading it. I do think that there's a case of some AI vendors [00:25:00] not being sophisticated enough. There was an example with a legal team where they were using it to do drafting, and the paralegals were like, “this new vendor tool is fine, but it's not as good as just using ChatGPT.”

And so, they were like, “yeah, we're just going back to this other thing.” And the vendor ostensibly is using the same thing under the hood, but it's not as good. So I think that's at least a factor: the models are getting good so fast, and their context windows are so large, and they're so much more flexible, that the bar for success is pretty high.

Josh Butler: Well, I mean, there really is art here, and it has to be learned. And one of the key challenges here is that the capabilities of these general-purpose [00:26:00] LLMs are constantly changing. And if you're trying to architect a product, the first thing you need to understand is, “what is sort of an MVP, or minimum lovable product, experience for the user?”

You don't yet know what the models are actually capable of until you actually try something. There's real art here, and I think it's absolutely worth the investment: just keep on trying and throwing stuff at the wall. There's a really great quote by Demis Hassabis, the head of Google DeepMind, about how they've actually never had a failed experiment.

And the way he justifies it, he says, “well, the way we architect our experiments is we look to cut the hypothesis space in half with our experiments. So, if it works, you learn something. If it fails, you'll learn [00:27:00] something. You learn something no matter what along the way.”

So, I think that all these supposed failed projects are actually organizational learning, or at least they should be along the way. 

Cameron MacArthur: Josh, have you done any AI implementations? 

Josh Butler: It's kind of all of our core products. I'd say we have 5x more products today than we did a year ago, just because of these LLMs and vision language models, mixed with the traditional CNNs, the convolutional neural networks that you used to have to use for computer vision. When you put them together and create these compound systems, you can just start building products so fast.

I mean, I guess maybe it's a little hackneyed or trite to hear about a CEO that's vibe-coding on the weekends these days. But [00:28:00] yeah, I've vibe-coded plenty of prototypes of new products that are starting to take off. I'm not going to pretend they were real products when I was done, but they were interactive prototypes with real-looking data, and it was actually better passing that along to engineering than an extensive list of requirements. Because they got what it is and why we're building it and how it should feel. And, you know, you didn't lose all of that in translation. We've got tons of projects right now that are leveraging these models.

Cameron MacArthur: I've done the same thing, and what happened I thought was really interesting. We had this big new feature that we wanted to build. And I coded the whole thing out with Claude Code and then made the pull request. And our head of engineering was like, “this is great.” I was like, “oh, you're going to [00:29:00] merge this?”

And he was like, “oh, absolutely not. All of your code is completely unusable. I would never put it into our code base.” And I was like, “oh man.” And he's like, “but I can now see everything that you actually want. And with this, I can set this up in a day. I'll do it from scratch in a day. Without this, it would've taken us a week or two.” It's sort of like the best version of product requirements, because they can just see exactly what it should look like, how it should function and interact.

Josh Butler: My team's actually using one of my prototypes as the stubbed-out front end. So, they're basically just replacing it component by component.

And the front end is largely being retained, actually. Basically, what I did is I dropped a whole bunch of user stories straight into it and then went back and forth a little bit. And it actually has a well-written, highly responsive UI that works really [00:30:00] well and addresses a lot of the common challenges with UX, basically just out of the box.

Cameron MacArthur: Incredible. Garrett, have you done any failed AI implementations, anything in that 95% bucket?

Garrett Koehn: I mean, I just don't like the terminology, I suppose. We're trying AI stuff all the time. And some things we've implemented, more on the internal side.

Some things we're still looking at that are going to be customer facing. For us, I think the “it's either working or you're learning” framing is correct. I don't feel like a lot of it has necessarily been failures. You're figuring out the weaknesses that need to keep improving, and yeah, things will get there.

Cameron MacArthur: Great. All right. Garrett, you had alluded to this earlier. [00:31:00] Build versus buy. How do you choose to buy an AI solution versus build one? You'd mentioned customer facing as a component. Does hearing that solutions that are bought instead of built have twice the efficacy, or twice the success metrics, of the homegrown ones affect your opinion on that?

Garrett Koehn: Well, I mean, there's a lot that goes into build versus buy, and for me a lot of it goes back to the same things you weigh even with regular software development, before you put on the added gloss of AI. For companies looking to build internally, I think right now there's a bit of “oh, we can do this ourselves. We have access to a language model, so therefore we can do this thing. We don't necessarily need a company.”

But I think a lot's being forgotten in that process that's similar to other software analysis. Your time to market is typically faster if you're using a startup, or even a legacy company; a lot [00:32:00] of legacy companies trying to get into the space can also get you to market faster. The startup companies might have better teams of people, with better qualifications to work on the implementation, than we might be able to hire as an insurance broker, you know?

So, the time to market might be better. The talent and expertise might be better. Customization or scalability could be issues. If it's something that we're going to build, we're going to have the build costs, and then we're going to have the maintenance cost. And with AI, well, I'll keep with my simple example, policy form reviews.

I can throw any insurance policy into ChatGPT right now and say, “review it.” And it will, but there's no good user interface for me to do that at scale and connect it to policy implementations or something like that. There's no deep vertical knowledge there, so if I'm analyzing a Work Comp policy versus an E&O policy, versus a cyber [00:33:00] contract or GL or something, those could require a lot of deep knowledge that's a lot different.

That requires a lot of training, and obviously companies that are focused on those specific niches are going to build presumably really great user interfaces. They're going to be able to scale implementation across a lot of different companies. They're going to have deep product knowledge. They'll be working on customization so that we can get our needs met.

And so, you know, there's a lot that goes into that build versus buy, and we've had some internal discussions around it. A simple example I love using is APIs: when we were building APIs originally, yeah, we could build APIs ourselves, but we can't monetize them.

That's another factor to think about beyond our own use case.

We're going to have to maintain them and make sure that they're all up to speed. We're probably not going to be that good at building stuff around them. We're probably going to have a big backlog. You know, that is an area that becomes more logically [00:34:00] suitable for a startup that's building tons of APIs and a huge network.

So, ultimately in that case we went with Herald APIs, a vendor. But I think AI is going to have a lot of that as well. And like you were saying, you know, you're both AI companies technically, but you're both working with other AI companies.

So, there are going to be different areas where companies are really good, with deep vertical knowledge of what they're doing. And in a lot of cases, it's going to make more sense, and probably be cheaper and faster and better ultimately, to, you know, be partnering with companies that have those rails.

Cameron MacArthur: Yeah, I mean, I agree strongly from an access perspective, right? So, we talked about how the core models are so good that you have to really do quite a bit to even hit their efficacy. When we were setting up our submission [00:35:00] parsing product, for example, one of my through lines was: the team would work on it, and then I would just test it myself against standard ChatGPT.

And for a while, for months, I was like, “what's going on? ChatGPT's still better. Like, how?” And they'd say, “oh, give us a month with one engineer.” And then at the end of the month, ChatGPT was still better. And I was like, “what's going on? Make it better.” And it wasn't until we kept running up against that wall and hired a researcher out of MIT, someone specifically talented at AI research, you know, on the bleeding edge of this, and she came in and led that team, that it got significantly better, a lot better. And to me, the build versus buy question there is, “okay, is your [00:36:00] company going to be able to compel an AI researcher out of MIT to come work for them?”

And often candidly, these folks are, I think, not that interested in working at an insurance company. But they are interested in working at a tech company. 

Garrett Koehn: Yeah. Completely agree. Yeah. They want to work with a tech company that's going to have a big exit in a short period of time.

Hopefully. It also relates to another point that I forgot to bring up, which is: if you're building it, you're typically working out of your IT budget. You might be working out of a profit center budget if it's something specific to a profit center, but, you know, one of those. Meanwhile, you've got VCs and incubators like Y Combinator and others, you know, throwing major dollars.

Whatever it is that you're trying to build internally, if it's a monetizable space, which often for us it's not, and for lots of guys in the insurance [00:37:00] industry it's not, there's going to be tons of money thrown at creating solutions around that space. And so, you're going to be working out of your profit center budget or your IT budget, competing with your MIT guy who's also armed with millions of dollars, focused every hour on that topic.

And so, it's another issue to think of. Because if it's a scalable, monetizable space and people are investing in it, it's going to attract those types of workers. And if you're an insurance company or insurance broker and you're like, “oh yeah, we can do this ourselves,” you'll ultimately lose, and you will have wasted a lot of money and time instead of moving yourself forward.

Josh Butler: Absolutely. I mean, on that note: okay, so we are an AI tech company at our core. So, I think the calculus is probably a little bit different for us than for, you know, a carrier or something like that.

But the two things I really look at [00:38:00] when I'm thinking about build versus buy are: one, “is this absolutely core to what we are as a company or not?” And generally, like they say, the best engineers are the laziest. Don't do anything you don't absolutely have to do that isn't at your core.

And then two, “is there a unique data set?” For the first one, a great example is document ingestion, like submission ingestion. You know, there are a number of different platforms out there that are solving submission ingestion and document ingestion, like OCR.

Even though we do vision, and OCR is kind of the classic computer vision solution, we figured that there would just be, you know, enough vendors out there that it's going to get relatively commoditized. So, focusing on what's going to be strategically important for us and drive enterprise value, it was, number one, driving down injury rates.

That's the [00:39:00] number one thing we can do, followed maybe by improvements in underwriting. So that was the first thing. And the second one is, “is there a unique data source?” Because AI is kind of just a tool, right?

And frankly, most AI is just replicated, and maybe tweaked for a slightly different use case, over and over and over. The models, the neural nets behind most of these things, look very, very similar. Large language models, sure; there are systems that are a bit more complex now.

But generally, the performance of different models and systems converges as you get incrementally more and more data. And in fact, the marginal returns on data are generally against you creating an AI solution. So, if you don't have significantly better data, like an order of magnitude better or more data than your competition, then you're probably [00:40:00] not going to be able to have a breakaway success with your AI product. It's much better to just rely on someone else that actually has a better data set.

Or license that data set if you really want to do it in-house. So that's, I mean, I guess that's the thing. You can't fabricate results with an AI model if you don't have the data in the first place. So, I think that's how I look at it.

Those two different elements.

Cameron MacArthur: Last question. Do you want me to read it? I can read it, yeah. How do you foresee AI changing the competitive landscape for traditional insurers versus emerging InsureTechs? 

Garrett Koehn: I think it depends a lot on the space. I'll start with like wholesale as an example. You know, there's some emerging startups that are working on being wholesalers.

And it's tough. There's a big legacy moat around our [00:41:00] space. There's a lot of chicken and egg issues on both sides of the marketplace where you need to have access to all of the carriers, but to have access to all of the carriers, you need to have a lot of premium. But to have a lot of premium, you need to have a lot of brokers.

To have brokers, you need to have access to all the retailers; to have access to all the retailers, you need the brokers. You know, it's this never-ending chicken and egg on both sides of the marketplace. And so, in our area, it's pretty difficult to come in as a startup.

And I'm friendly with a bunch of the guys that are starting them. And they're using all the AI tools they should be, and they're probably ahead of us in a lot of the technology. But they're behind us on data, to Josh's earlier point, and they'll never be able to catch us on that front.

And so, I think in a space like that, it's questionable. Conversely, you could look at MGAs, which, you know, Josh [00:42:00] can speak to. If you've got a good underwriting model and you've got capacity and you can throw on a technology that's better than somebody else's...

You probably can do really well. If there's a new and better product out there, brokers will find it and want to use it. And then, tools-wise, I think it's easier, to the surprise of nobody generally speaking, if you're a startup company to implement these new tools and be able to move quickly and, you know, scale things, because generally you're smaller. And you're making decisions at kind of a CEO level that's fairly close to what's happening with the company as it's going. And so, between some of the InsureTechs and the legacy carriers, it's how it always has been: some are going to grow really well using AI tools.

They'll get acquired. Some will [00:43:00] realize they're butting up against legacy businesses that have too many moats around them. And then they'll pivot, or they'll figure out what they need to do, you know? I think there are certain technologies that we will be seeing in time.

You know, I can see the day now where, on the weekend, somebody can send in an application, you know, maybe via email. And my AI answers it, figures out what's missing while I'm doing whatever I'm doing on the weekend, and collects that additional data that's missing.

And, you know, by the time I'm in on Monday, the submission's complete and it's gone out to market. Maybe it has quotes already, because there are API integrations, you know? And that will start to have impacts on being able to better scale big brokers, being able to better scale your best underwriters.

Being able to add a degree of efficiency that, you know, just isn't there [00:44:00] now. Right now, that process might take, you know, a week or two sometimes, just to get the submission in properly, get the correct data, get it out to market, you know? So, I think for tools, it'll take maybe a little bit longer for some of the legacy companies to get there.

But they'll eventually get there, either through acquisition or through great startup companies that they partner with, or whatever. And for the companies themselves, I think, you know, some that are well positioned are going to continue to do great. I think vendors are going to do well.

The ones that are attacking the spaces I'm suggesting we shouldn't be building in ourselves. The ones we can't monetize, where we're competing against MIT and millions of dollars. So, the hammers and shovels, or picks and shovels, I guess, right?

Those companies, I think, are going to do really well in insurance over time. There are an awful lot of them right now that are “AI companies.” I use quotation marks, since this is a [00:45:00] podcast. But there are an awful lot of them right now. And it's maybe a bit saturated even, but we'll see. What do you guys think?

Josh Butler: You mind if I go, Cameron? 

Cameron MacArthur: Go for it. 

Josh Butler: I totally agree with you on all that, Garrett. I mean, I think that in the MGA space, the opportunity here is like a once-in-a-generation kind of opportunity to gobble up niches as quickly as possible. And that's really our strategy. Every vertical is very, very different, especially in our space, but I think it's true across many lines. In Workers' Comp, a bakery looks pretty different than even a quick service restaurant. Both are making food, trying to get it to customers, but their risk profiles look very different.

So, what works in one looks pretty different from what works in another. Our opportunity here is really first-mover advantage in a series [00:46:00] of a few hundred verticals, probably.

And being able to basically bring a new data source, a new set of risk intelligence, for better decision making to those verticals before it becomes sort of standard practice across the industry. And I think we have a few years to do that, probably, right? So, it really is kind of an arms race for us.

And, you know, if someone else comes along and tries to build something like what we've built and go after this market, then it'll be between us two. But I haven't seen anything like this across any of the traditional carriers today. And I think we'll have a 5, 10, 15 point advantage for the foreseeable future, until someone does come along and build a sort of replica of what we've got.

So, you know, does that mean the whole industry gets transformed overnight? Absolutely not. It's a lot of work, a lot of blocking and tackling, to actually win these niches one by one, [00:47:00] find the right brokers with those books, who know how to come in and, you know, press on those pain points that some customers don't even realize that they have.

And say, “you know what, we can make you sustainably more competitive than your competition out there, with a 15, 20, 30% reduction in total cost of risk. Here's why it's worth the pain to switch over.” And if we can do that, then we'll have locked in those niches.

And honestly, I think you're going to see a general trend in specialty commercial lines where you're going to have winners in specific niches. It's going to be replicated across the board: winners in specific niches, and the generalist carrier becoming less and less of a thing. You'll just have very pointed solutions that work for specific niches, and a bunch of programs, if you will, [00:48:00] gobbling up as much of that premium as possible.

Cameron MacArthur: Obviously Josh and I have this, you know, agree on that market thesis that we're going to see a ton of specialty programs in specific niches come up because our business is built on that same model. 

And yeah, I don't think it's going to end up being traditional versus emerging. It's just going to be companies that use technology and companies that don't. And we've seen it before. You know, I think Microsoft is the quintessential example: they were an old, on-their-way-out kind of company, and then they switched it on, aggressively invested in technology, and completely shifted their whole business to be a major player.

And so, I think it's clear where the market is going and people are going to invest in technology and do that or not. Cool. All right. I think that's all we've got, guys. Great. Well thanks to PLUS for having us and I'll talk to you guys later. 

Josh Butler: Thanks guys.

PLUS Staff: Thank you for listening to this PLUS podcast. If [00:49:00] you have ideas for a future PLUS podcast, please complete the Content Idea form on the PLUS website.