
Coverage Counsel Is In
A weekly podcast for insurance professionals on interesting coverage issues.
Episode 31. AI and Insurance
AI is clearly on the rise. But before we all bow down to our new robot overlords, it’s important to question how they work in different industries.
This week, Bob dives into the use of AI in insurance coverage analyses, and where it can go wrong. From increasing cost to increasing biases, he discusses why AI is probably not replacing coverage counsel any time soon.
Have a topic you'd like Bob to cover? Submit it to questions@gpsllp.com, or connect with Bob directly on LinkedIn.
And if you'd like to know more about GPSL, check out our website.
You can also find us on LinkedIn, X, and Facebook.
I must get 15 to 20 emails a week, maybe more, extolling the virtues of using
artificial intelligence in law practice. This piqued my curiosity about insurers using
AI instead of coverage counsel for performing coverage analyses.
So I did some research. The National Association of Insurance Commissioners reports
that the insurance industry is using AI across all stages of the insurance life
cycle, from product development, marketing, sales, and underwriting to policy servicing,
claim management, and fraud detection. For this podcast, I asked ChatGPT how
insurance companies use artificial intelligence to analyze whether a claim is covered.
In summary, it answered, and this is a quote, "In essence, insurance companies use
AI to streamline and enhance the claims process by improving accuracy,
reducing human error, detecting fraud, and providing faster,
more consistent decisions. AI helps insurers quickly determine whether a claim is
covered by automating policy and claim analysis, improving customer experience,
and providing real-time assessments that help both customers and insurance providers
navigate the complexities of the claims process efficiently. End quote.
That sounds like an aspirational marketing response rather than a summary of what
actually happens. So I asked ChatGPT if anyone has sued an insurer for using AI to
make a claims decision. Here's the response.
"Yes, there have been instances where individuals or groups have sued insurers over
decisions made by AI systems in the claims process, although the lawsuits specifically
targeting AI-driven decisions are still relatively rare compared to those related to
traditional claims processing. As AI continues to play a larger role in the insurance
industry, however, legal and regulatory challenges are emerging and lawsuits are
expected to increase over time. The legal basis for such lawsuits can stem from
various concerns related to AI decision-making and insurance claims,
such as discrimination, fairness, transparency, and accountability." So let's look at
these problems for a minute. AI systems can unintentionally perpetuate or amplify
biases. This can lead to discriminatory decisions. For example,
if an AI model is trained on biased historical data, it can make decisions that
unfairly disadvantage certain groups based on race, gender, age, or other protected
characteristics. In the context of insurance, this could manifest in denials of claims
or reduced payouts for certain demographic groups, unequal treatment based on zip
codes, which may correlate with race or socioeconomic status, and discriminatory risk
assessments based on factors like credit score, occupation, or marital status.
Also, AI models, especially complex ones like deep learning networks,
are often described as black boxes because their decision-making process is not
easily understandable to humans. This lack of transparency can lead to lawsuits if,
for example, a claimant is denied coverage or a claim is underpaid without anyone
understanding why or with the insurer being unable to provide a clear and justifiable
reason for its decision.
It's also characteristic of AI platforms that AI-based decision-making systems rely
heavily on vast amounts of personal and sensitive data. If insurers use data
inappropriately or without the proper consent, such as using data for purposes beyond
what was originally disclosed, say for example in the insurance application,
it could lead to legal action.
For example, if an AI system uses personal data for claims decisions without the
claimant's knowledge or consent, it could violate privacy laws.
Some AI systems may use algorithms that lead to the unfair denial of claims,
especially in complex or marginal cases. For example, AI might flag a claim as
fraudulent based on patterns in the data that do not actually represent fraud,
but rather a misunderstanding or anomaly, because these AI systems don't ask questions
and don't point out that there could be alternative explanations. They just pick the
one that they think is most consistent with the language patterns their large
language models have been trained on. The National Association of Insurance Commissioners says
that AI may facilitate the insurance industry in several ways.
For example,
and here I'm quoting, "AI may facilitate the development of innovative products,
improve consumer interface and service, simplify and automate processes,
and promote efficiency and accuracy." And I want to highlight in that quote that
something that's being touted here is that use of AI may promote accuracy.
But then the NAIC goes on to warn, and again, I'm quoting,
"AI, including AI systems, can present unique risks to consumers,
including the potential for inaccuracy, unfair discrimination,
data vulnerability, and lack of transparency and explainability." End quote.
So, while the AI may promote accuracy, it also has the potential for inaccuracy.
The virtues of AI most often touted are speed and efficiency.
But what about accuracy? Does any AI company list accuracy as a virtue of AI,
especially in nuanced decision-making where experience and judgment play an important
role, such as in coverage analysis? ChatGPT said at the bottom of its answer,
quote, "ChatGPT can make mistakes. Check important info," end quote.
The disclaimer is in a smaller, lighter font than the answer and could easily be
overlooked. But in the body of its answer, ChatGPT also said,
quote, "Machine learning algorithms allow AI systems to continually improve.
Over time, AI can learn from past claim decisions, both correct and incorrect,
and refine its decision-making process. This reduces errors and improves accuracy in
assessing claims." That sounds like a garbage-in, garbage-out problem to me.
Is the insurer putting in all of its claims decisions, right or wrong, and all the
details that went into those claims decisions? And what about the claims decisions of
other insurers in similar situations? Note that these AI platforms do not do legal
research. There are some, such as Lexis, that will help people find cases
and look at decisions and the reasoning of decisions. But they don't really analyze
it the way that a coverage counsel or claims person would analyze, with
experience and judgment, how the claim should be resolved or how the claim should be
further processed.
On December 2, 2023, the NAIC issued a draft Bulletin entitled "Use of Artificial
Intelligence Systems by Insurers." As of January 3,
2025, it appears that 21 jurisdictions had adopted the Bulletin and four other
jurisdictions had issued insurance-specific regulations or guidance on the use of AI
by insurers. These bulletins contain warnings for insurers,
and here I quote from the model bulletin of the NAIC, "Decisions subject to
regulatory oversight that are made by insurers using AI systems must comply with the
legal and regulatory standards that apply to those decisions, including unfair trade
practice laws. These standards require, at a minimum, that decisions made by insurers
are not inaccurate, arbitrary, capricious, or unfairly discriminatory.
Compliance with these standards is required regardless of the tools and methods
insurers use to make such decisions. However, because, in the absence of proper
controls, AI has the potential to increase the risk of inaccurate, arbitrary, capricious,
or unfairly discriminatory outcomes for consumers, it is important that insurers adopt
and implement controls specifically related to their use of AI that are designed to
mitigate the risk of adverse consumer outcomes." I'm going to stop quoting for a
minute and go back. This is an important recognition. In the absence of proper
controls, AI has the potential to increase the risk of inaccurate outcomes.
In other words, if somebody's not looking over the shoulder of the AI program,
the AI program can make a wrong coverage decision. So even with the use of AI,
it appears there needs to be some human who looks over the decision and evaluates
it. So this may signal a change from coverage counsel initially drafting coverage
determinations to insurers submitting AI-generated coverage determinations to a
coverage specialist for verification.
Now I'm going back to quoting from the bulletin,
consistent therewith, in other words, consistent with the requirement that controls be
implemented that are designed to mitigate the risk of adverse consumer outcomes,
quote, "All insurers authorized to do business in this state are expected to develop,
implement, and maintain a written program (an AIS program)
for the responsible use of AI systems that make or support decisions related to
regulated insurance practices. The AIS program should be designed to mitigate the risk
of adverse consumer outcomes, including, at a minimum, the statutory provisions set
forth in Section 1 of this bulletin." That's the Unfair Trade Practices Act and
Unfair Claims Settlement Practices Act. That's the end of the quote. So what this is
saying now is if the insurer wants to use AI to make coverage decisions,
it has to have a written program or protocol, which will certainly be discoverable,
designed to mitigate the risk of an adverse consumer outcome. That will shift
litigation over AI-generated coverage decisions to the quality of the AI platform
and the data that's included in it, and it creates a whole host of problems concerning
discoverability, analysis, and scrutiny of the AI system that the insurer is
utilizing, and, in fact, the quality and availability of whatever the insurer does
to assess the accuracy of the AI system.
That seems to me to defeat the speed and efficiency
aspects of utilizing AI and to put this back in the hands of a coverage specialist,
but with the added expense of coming up with programs, data banks,
all sorts of limitations on discovery, and what the AI system is looking at to make
a decision. I can just see a policyholder lawyer and their AI expert having a field
day tearing apart the AI system, showing its vulnerabilities, and showing how the
insurer's AIS program is inadequate and is not designed to mitigate the risk of
adverse consumer outcomes because it doesn't take into consideration all available
data. I think the insurer may be better off just using its coverage specialist,
a coverage counsel, to make the decision in the first place, because at least with
coverage counsel, you have the advice-of-counsel defense that you most likely would
not have using AI.
So with all of this as background, ChatGPT predicts that,
quote, "As insurers increasingly rely on AI, particularly in claims decisions,
there will likely be more legal challenges related to discrimination, bias, lack of
transparency, and accountability. Insurers will need to ensure that their AI systems
comply with existing laws, offer transparency in decision-making,
and provide recourse for consumers if they feel a claim was unfairly denied or
mishandled." Thus, in this brave new world of AI and basically automating routine
tasks through the use of artificial intelligence,
it seems like the risk to insurers, including the expense of justifying their AI
system and their protective protocols,
followed by probable litigation, is going to outweigh the utility of generating
coverage opinions by the use of AI without also the involvement of Coverage Counsel
or at least a Coverage Specialist. So that means that as Coverage Counsel,
I'm not particularly worried now about being displaced by AI,
but I am worried that some people looking for a quick, efficient decision might get
into some pretty big trouble and may be exposed to some fairly large damages if
they forgo the traditional coverage analysis experience.
Well, I hope you've enjoyed this first Coverage Counsel Is In episode of 2025, and I look
forward to 51 more over the course of the next year. As always,
please feel free to email us with any suggested topics or if you would like us to
go into greater depth on any topic in a particular episode.
So for now, this is Bob Salander signing off. Bye.