CRITICAL POINT EPISODE 50

Artificial intelligence and insurance, part 1: AI’s impact on the insurance value chain

25 September 2023

Related Content
Podcast: Artificial intelligence and insurance, part 2: Rise of the machine-learning models

Artificial intelligence has been the buzz term of 2023, evolving at a pace unimaginable when Milliman launched this podcast five years ago. For this 50th episode of Critical Point, we gathered a group of our AI experts to discuss how the technology is poised to reshape the insurance value chain, from hiring practices and actuarial modeling to customer service and communication.

Transcript

Announcer: This podcast is intended solely for educational purposes and presents information of a general nature. It is not intended to guide or determine any specific individual situation, and persons should consult qualified professionals before taking specific action. The views expressed in this podcast are those of the speakers and not those of Milliman.

Robert Eaton: Hello, and welcome to Critical Point, brought to you by Milliman. I’m Robert Eaton. I’m a principal and consulting actuary in the life and long-term care space, and I’ll be your host today. In this episode of Critical Point, we’re going to talk about what everybody can’t stop talking about: artificial intelligence. Specifically, we’re going to kick off a series of Critical Point episodes on AI and insurance, and today we’ll look at how AI is transforming the insurance value chain.

With me are three of Milliman’s top experts on this topic. We have Hans Leida, who’s a consulting actuary focused on health insurance, and he also works on one of our risk-adjustment software products. Welcome, Hans.

Hans Leida: Thanks for having me, Robert.

Robert Eaton: We also have Tom Peplow. Tom is a principal and director of product development for Milliman’s Life Technology Solutions group. Tom led the Integrate product group for five years and is responsible for the technology strategy for Integrate at Milliman. Hi, Tom.

Tom Peplow: Hi, Robert. Thanks for having me.

Robert Eaton: And last but not least, Corey Grigg is an actuary and technical lead overseeing implementation of those Life Technology Solutions products. Corey, thanks for joining us.

Corey Grigg: Pleasure to be here.

Robert Eaton: So I want to start this series on AI and insurance and talk about just generally how technology has shaped our work in the past. We actuaries are no stranger to having improved technologies influence the daily work that we do, and that’s bled through into our work in insurance. So one question that I think we often ask ourselves is, “How do new technologies help us in our professional lives, and specifically how do they help us make decisions?”

Can artificial intelligence enhance human decision-making?

Robert Eaton: I want to flip this over to Hans Leida with a first question of how do we optimally help artificial intelligence and human decision-making work together? We’ve got new technologies, and I’m curious how we might do this work in our daily lives in a way that sort of improves decision-making. Hans, you had some thoughts on that.

Hans Leida: Yeah, thanks, Robert. You know, we have been developing complicated AI models for many years to help our clients understand healthcare risks and make decisions. There are a lot of AI models used in underwriting and other applications, and, like you said, the challenge is how you make those decisions and know how to use both humans and these AI tools optimally together. That's true for actuaries as well as other professionals. We have to make a lot of choices in our modeling, or often we say we apply actuarial judgment, and we're getting more sophisticated tools involving AI that can support us in making those decisions, just as other tools might help other professionals. I think it's an interesting challenge that hasn't been studied as deeply as you might like.

You know, I think often we’re really excited to build these complicated models because that’s, at least for me as an actuary, one of the things that I really enjoy in my work. But figuring out how to take the advantages of those models and realize them in the real world is complicated. One thing we can do, I think, is look at other professions that have faced this challenge and see how it’s been going.

One place in healthcare where AI has been adopted and implemented more rapidly than in many other professions is radiology, and I saw a recent paper come out that actually studies this problem. You might have a model that, in aggregate, does a better job at detecting potential cancers, say, in radiology images than the radiologists do, but when you give that information to radiologists it doesn't always result in optimal decision-making, where optimal would mean that where the radiologists are right and the model is wrong they make the right decision, and where they're wrong and the model is right, they also make the right decision, right? It's sort of, how do you have people be appropriately critical of the model and rely on their own expertise at the right times and not at the wrong times?

Robert Eaton: That’s almost a new skill altogether, if you think about it. I think about how artificial intelligence sort of transformed the chess world back in the 1990s when Deep Blue beat Garry Kasparov. And since then, the chess-human kind of hybrid play has really picked up in chess and has been kind of something that people watch for and have watched for a while, but that’s almost a new strategy, a new behavior altogether. Maybe in the case of the radiologists, it’s going to take a new brand of radiologist to work better with artificial intelligence in those models, and perhaps that metaphor can be extended to actuaries also.

Hans Leida: Yeah. The other thing they were studying is, you know, what information, besides just the model prediction, can you give to the professional to help support that decision-making, right? Like, you know, we hear a lot of concern around black box models. There’s actually a lot you can do to unpack most models and give information about why a prediction came out the way it did. The question then is how can you communicate that to people who are using the model in a way that actually helps them make the right decision? The paper I saw didn’t suggest that they’d figured that out yet. It just was trying to measure the impact of giving that information or not in various formats to the radiologist. So I think this is still an area where more research is needed.

Robert Eaton: I think that’s fascinating and it’s certainly something that I’ll be watching also. You know, as it applies to radiology, I’m sure it’ll apply to many other fields both in medicine and in practice there and then also other professions.

How will artificial intelligence influence actuarial modeling?

Robert Eaton: Something that’s kind of near and dear to our heart is actuarial modeling. You know, Hans, you mentioned that just now. I want to go to Corey, who works a lot with implementing actuarial models and processes. Corey, tell me how you see artificial intelligence really impacting actuarial modeling, and give me some of your reflections there.

Corey Grigg: Sure. Yeah, so where I see a lot of use for artificial intelligence, particularly generative AI, in our actuarial modeling work is around modernization of actuarial systems, which is a big area of my focus. What we have floating around out there in the industry, in many cases, are a lot of legacy systems, legacy code, and legacy processes: a bunch of Excel workbooks strung together to do some post-processing of an actuarial valuation or input preparation. Where AI really can help here is in accelerating the pace at which actuaries can convert that legacy code into something more modern.

And to give an example, I've been conducting an experiment on my own with some of our interns here. Previously, working with interns, they're smart, smart kids, fresh out of college or still in it, but you've kind of got to spoon-feed them the specifications for what you want done when they're writing actuarial code for the first time. I would spend a lot of time explaining, "Well, what is this concept?" You know, we're doing some type of extrapolation, yield curve development, and junior staff haven't necessarily been exposed to that in their education at this point. Contrast that with something like ChatGPT, where the context of technical terms is apparently easily understood. You don't have to explain what Smith-Wilson smoothing is to ChatGPT. It will understand that concept.

So what I've been doing on occasion is having the generative AI take my specifications and write some sample code, then handing that off to a junior person to read through, understand, clean up, and test. We still have that same level of review, and we know the code being implemented is correct because we've tested it. But the speed at which we can build and write it all is much, much faster than a human doing it. So that's really an opportunity to lower the cost barrier for recoding things, or creating new code in more modern languages, compared with what we experienced in the past, where someone really needed to understand the original source language, the new language, and the actuarial piece, the math behind everything, as well. I think that work can now be divided up among more people, or AI can fill the gap in an actuary's knowledge of a particular coding language.
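Smith-Wilson smoothing, which Corey mentions, is a published yield-curve interpolation and extrapolation method (it underlies, for example, the Solvency II risk-free curves). As an illustration of the kind of spec-to-code handoff he describes, here is a minimal sketch of the price-space version of the method; the function names and parameter choices are ours, not from the episode, and a production implementation would add input validation and rate conversions.

```python
import numpy as np

def wilson(t, u, ufr, alpha):
    """Wilson kernel W(t, u): the symmetric basis function that Smith-Wilson
    uses to blend observed prices toward the ultimate forward rate (UFR)."""
    tmin = np.minimum(t, u)
    tmax = np.maximum(t, u)
    return np.exp(-ufr * (t + u)) * (
        alpha * tmin - np.exp(-alpha * tmax) * np.sinh(alpha * tmin)
    )

def smith_wilson_prices(maturities, prices, ufr, alpha, t_out):
    """Fit the zeta weights so the curve reproduces the observed zero-coupon
    prices exactly, then return fitted prices at the requested maturities.

    ufr   -- continuously compounded ultimate forward rate
    alpha -- speed of convergence toward the UFR
    """
    u = np.asarray(maturities, dtype=float)
    m = np.asarray(prices, dtype=float)
    # Kernel matrix between the observed maturities; solving the linear
    # system makes the fitted curve pass through every market price.
    W = wilson(u[:, None], u[None, :], ufr, alpha)
    zeta = np.linalg.solve(W, m - np.exp(-ufr * u))
    t = np.asarray(t_out, dtype=float)
    # Fitted price = UFR discount factor plus the kernel adjustment.
    return np.exp(-ufr * t) + wilson(t[:, None], u[None, :], ufr, alpha) @ zeta
```

By construction the fitted curve reproduces the input prices at the observed maturities, and the forward rate converges to the UFR as maturity grows, which is exactly the kind of property a junior reviewer can verify with a quick test, per Corey's workflow.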

Robert Eaton: Yeah, it seems like there's a whole lot of low-hanging fruit in a lot of those areas you mentioned, Corey. This reminds me-- I heard the venture capitalist Marc Andreessen discussing how knowledge can be divided into two camps: fluid knowledge, which is the ability to, as he said, assimilate and synthesize things, and you mentioned smart interns. They have a lot of fluid knowledge, right? They're towards the end of their college careers, probably, and really, really sharp. And then there's crystallized knowledge, which goes back to the Smith-Wilson smoothing. (I don't know what that is either, by the way.) So those are going to be tasks where something like a GPT-4 or, in the future, a GPT-7 or 8, is going to have just tons of that. They're going to have the crystallized knowledge in spades; they can draw from trillions of past writings. I really am optimistic, like you are, I think, about a lot of this assistance on the technical side to help us with these models.

Within insurance companies, how will AI affect IT?

Robert Eaton: I’m wondering if I can turn to Tom Peplow real quick. Can you talk to me a little bit about how generative AI or AI in general, within insurance, how does this technology stand to impact the IT teams within these companies? You know, these are client companies of ours or other insurance companies globally.

Tom Peplow: Yeah, it's going to be interesting times ahead, I think. Like Corey explained, we're already starting to pick it up in some places internally here, so part of the challenge I have for the strategy of our products is thinking about how we position them so Corey's able to do that much more smoothly than he can today. But like Hans said, AI's been used for a long while in certain applications, but those applications are specific, and the expertise that goes into building them is not widespread in the business. So you're moving from having some really bright data scientists and software engineers building some really complex models to solve some really interesting problems, to it being available to everybody in the business to use. And that's a very quick transition from where we are today to where we're going to be in the future.

You can see the upside potential of this with Microsoft's announcement that they were going to charge for a tailored ChatGPT just for your business, without all the worry of data leaking out on the internet, of what questions you're asking, and of intellectual property becoming available in a generalized model that everyone on the internet can use. So the vendors are moving faster than the IT teams, or even, you know, we're able to move.

So I think step one is to think about the strategy of the big players in the AI market: Google, Facebook, Microsoft, Databricks, those types of people. They're looking to generalize and solve these problems at scale and sell them to everybody in the world, and that's just going to make it very, very easy to do some of the things that Corey said. But to do that at scale, you've got to make sure you've got a data foundation in place that includes all of the specific insight your business has, so those interns are drawing not just on the crystallized knowledge of the world in general, but also on the crystallized knowledge of an organization, in Milliman's case, 75 years of doing business. And that's a huge competitive advantage to any business. If they can kind of weaponize all of that work they've done in the past and put it in the hands of an intern who can just get going, that's massive, right? You can't afford to be slow on that adoption, because if you are, other people will just be considerably better than you. That's it. And that's a risk to your ability to compete in the marketplace when everybody else is ahead of you.

What I’m worried about is how do you not get in the way of this revolution that’s coming? I started my career in the year 2000, and Google was just a thing that was starting. No one in my business stopped me using Google, but there were absolutely people around our company who were picking up books and showing me how to find an algorithm from the index page, and I was like, “I don’t want to do that. I’ll just type it into Google, thanks.” But there was no barrier then, right? Google wasn’t a thing that was blocked in the dot-com startup I worked at. But people are approaching generative AI as something that they maybe should lock down as opposed to embrace, and that gives me some concern. But I think the big vendors are doing what they can to make it safe for people to use, and then it comes down to every company in the world trying to figure out how they make their data available to this in a safe way. And if you crack that nut, I think it’s going to have some really big upsides for businesses.

Artificial intelligence and insurers’ institutional knowledge

Robert Eaton: So it seems like it worked out for you in the early Googling. I love this point, by the way, that you made about how all of the knowledge your business has gained over the past decades can be accrued and maybe agglomerated into a company-specific model, a wealth of information that everyone has available to them. I think about this book, one of these business books that was recommended to me, "The First 90 Days," which talks about a lot of critical things that someone who's starting new at a company can do within their first 90 days. One of the things the book recommends is to find somebody who's been around for a while, someone with a lot of legacy information, and essentially try to soak up as much as you can. Try to understand how the state of things in the company today came to be, so that you can better help the company in the future. And imagine now, you know, everybody having insurance company ABC's GPT, which has in it all of the past valuation reports, all of the past earnings calls, and some idea of how to answer questions from all of them. It's really a powerful, powerful opportunity.

Hans Leida: This reminds me, Robert, of one of my partners, who’s since retired, Leigh Wachenheim. One of her amazing talents I guess that I always admired was to remember details from projects that we’d done many years previously. So we’d get a new client in the door, and they’d have a question and she’d say, “Oh, I think we did this before for this client.” And you’d go back--and back then it was, you know, you pulled a box out of storage to get the paper files--and sure enough, you know, here’s the same problem we solved before. Having even like a personal, you know, AI copilot or assistant that could help me with— you know, I don’t have quite the memory that Leigh does. I rely heavily on being able to search my email. If I could easily and in an even more intelligent fashion search the work that I’ve done over the course of my career, it would make me incredibly more effective in every part of my job.

Corey Grigg: I think that kind of ties together what Tom was saying about Google in 2000, right? A skill you needed to learn was how to separate the garbage from the gold out there on the internet, and all that stuff in the box that we saved, now that’s gold, and, you know, giving that to the AI, you’re just increasing the chance you’re going to come up with gold when you ask ChatGPT for the answer. And going back to the radiology paper, too, it’s a similar thing, right? How do you differentiate the garbage from the gold when you have an AI model telling you the answer and you don’t really know where all that input or output came from? So yeah, I think that’s something people will develop, right? Just like you get better at being a search investigator and digging through Google and finding the right answer. People will be able to discern whether their generative AI is giving them useful output or not or be able to ask the right questions to help guide it in the right direction to make sure they get useful output.

How AI could change how insurance companies hire

Robert Eaton: This makes me kind of feel that, for those of us in insurance companies who are able to use some of the higher-level skills of discernment for model output or process output, now that those processes are perhaps more automated or are being done more in part by machines, does this actually in some ways increase the demand for some of those higher-level skills? I know some of the actuarial organizations refer to it as the EQ and the AQ, the adaptability quotient. I tend to think it does. I tend to think that we should hone and develop those skills that allow us to ask the right questions of these models, and in ways we've been doing this for the last three or four decades, right, as technologies have produced better and better software. But how does that change our hiring and recruiting process? You know, Corey, will your interns tomorrow look the same as they did yesterday? I'm just curious to hear how you all think this will flow through to our need for some of these higher-level skills.

Hans Leida: Yeah, I can start there. I think, you know, right now, when I hire people, I look for the ability to learn things rapidly more than anything else. Like a demonstrated ability to learn, to think about things, to ask good questions. I love it if people come in the door with some skills already under their belt, but the work that we do in my shop as consultants and in the products that I help manage is all about keeping up with a changing world. So the technology that anybody knows when they come in the door is not the one they’re going to be using in a few years in our work with clients. So that, I don’t think, is going to change, but I do think it will change the way that people move through their career path. I think that those that can learn to use these tools to their advantage will be able to be more productive more quickly.

I think the other needed ingredient, though, is really somebody who has a healthy dose of skepticism when it comes to using artificial intelligence tools to do their work. You know, even right now, I'm looking for somebody who accepts feedback on their work, who checks it over once they do their first draft, and looks to make sure it makes sense. Once, when I was in grad school, I had an assignment from a professor on some research we were working on, and I went and used a computer algebra package to do it. He would do this longhand, all these complicated calculations, when he was on the plane going to conferences, and I came back proudly because I had calculated it really quickly compared to how he would have, and he took one glance at it and said, "Well, you messed up, because these should be whole numbers, not fractions," and he was right. I didn't understand the problem well enough, and if I had, I would've known immediately that my slick computer answer was wrong. So I think we still need a human element like that, and we need to develop those skills in the next generation: make use of the tools to produce more work quickly, but make sure it's quality work, and make sure we still have those sanity checks built in.

Tom Peplow: Yeah, I can build on that a little bit, Hans. I’ve always looked similarly for people who have learned to learn, and I do remember, I can’t remember who said this, but it was someone quite smart, talking about the education system’s primary job is not to create professors but to create people who can learn. And the pace of change in technology is just accelerating. It’s already a hockey stick and it’s just going to get steeper with AI coming out. So the ability to learn is even more important because, you know, past performance is not a predictor of future success, and who knows what’s coming down the pike. So if you’ve got an adaptable workforce who are enthusiastic about change, who can embrace it and figure out how to leverage it to their advantage, then it’s only good for the business.

One of the interesting things I think is going to start to happen, though, is, as I think we all acknowledge, that learning how to be a good prompt engineer is probably going to be quite an important skill, and that's why I think it's critical that education systems don't ban the use of generative AI. Because if you're not learning to use it, you're not learning to learn as effectively as you could. One of the things it absolutely is, as Corey mentioned and as I found out too, is a really good learning aid if you have a critical mind. One of the things we've established at Milliman is that the peer-review process helps you find the gaps in your work, like you said, Hans, the should've-been-whole-numbers thing. As long as you've got someone who's also your set of guardrails, encouraging people to adopt this technology to accelerate their learning is a good thing, and I see university professors' job as being the guardrails around the students, to help them leverage the technology correctly. We absolutely don't want people who are just copying and pasting answers from the internet, and there are probably some people who have passed because of their ability to do that while this technology is still being understood by everybody, but we want them to be able to use it to get their answers as good as possible and start to dream up new things.

Because if you think about what we want, it is people at the top who can evaluate and come up with new ideas and create new things, and these algorithms can’t do that. They need us to do that. They need people to do that. And to Robert’s point, we also need people who can communicate those ideas clearly with others so that they can see the value and help them become adopted. So I think we’re going to be doing less of the lower cognitive work and more of the higher cognitive work, but we can’t do the higher stuff without knowing the lower stuff, and I think a great way to learn the lower stuff faster and therefore increase your capacity to do the harder stuff is to use tools like this.

How AI could transform an insurer’s customer service

Robert Eaton: I want to close by asking you all kind of an open-ended question on the general insurance value chain from inception, when a customer is first introduced to the concept of the insurance company, all the way through the binding, the delivering of the policy, through policy administration, and finally to claim. I’d like you all to maybe provide some comments on where else you think some of the AI that we’re seeing coming down the pike is going to influence the rest of these components of the value chain.

One of the things that comes right to mind for me is in customer service, where I think that customer service learning from past conversations with insurance customers and helping use all of that past crystallized knowledge to better inform the next customer service representative who’s helping somebody with one of their insurance policy questions, whether that be benefit eligibility, whether that be how to go about making that claim, or anything else related to that. I think customer service, you know, both in insurance and probably in so many other sectors really stands to be transformed here in a really positive way for consumers. But I’m curious if you all have any other thoughts around kind of the rest of the insurance value chain and where else outside of these technical realms that we focus on most of the time, where else do you see the ecosystem kind of being moved with the latest round of AI?

Tom Peplow: I'll go first as the person who knows least about insurance. <laughter> And I think that's what's so fundamentally cool about this tech: I don't understand insurance. Buying insurance is difficult. So if something can help me understand what I'm signing, you've got this huge policy document with all these things, and I can better understand it, that's great. And then later on, probably several, hopefully several, years after signing this, I need to use it because something bad has happened to me, and being able to understand how I can use that thing I signed many years ago to help me out, when I'm really stressed in a bad situation, is huge. So I think for consumers it's understandability, and maybe we'll be able to move away from this concept that insurance is sold but not bought, because the newer generation of youngsters like to find things out for themselves, and they don't really trust salespeople. So if we can move to a point where insurance is something that Tom can understand without having it explained to him, it will really help people get more coverage on risks that they need covered.

Robert Eaton: That’s fascinating. So you mean something like if I have a personal assistant at home, right? Like Amazon now has one of those little robots that’ll follow you around. I’m not in for this just yet, but you could imagine that thing sort of having a personal financial and product history of you or us, and if you go to the hospital, say, for a critical illness, you know, if you’re diagnosed with cancer, they might remind you that you have this insurance policy and how to go about claiming those benefits. I think that’s fascinating, Tom. I like that.

Tom Peplow: Mm-hm.

Hans Leida: Yeah, and I think helping to meet people where they're at, as well. A lot of people face barriers to understanding their insurance coverage that might have to do with something as simple as translating it into different languages, or helping people who haven't got a mathematical background or the background to understand the financials. Some health insurance policies get extremely complicated, right, in understanding what copay or coinsurance or deductible applies to what service, what's covered. So I think providing something that bridges that gap for different types of insured people could really move the needle on people making use of their insurance and accessing the benefits that they're eligible to use.

Robert Eaton: That's such a great point, Hans. I think about how complex-- You mentioned health insurance policies. In my work, often life, annuity, and long-term care policies are tens of pages long, and they use terms that, as a customer, I look at once and am never going to use again. But imagine that policy coming with a little explainer bot you could just ask questions, one that tells you about the policy and, like you said, in many languages. Yeah, there's a lot of great, great opportunities here. Corey, did you have anything you'd like to round us off with on insurance and the value chain?

Artificial intelligence could enhance communication within insurers

Corey Grigg: I think the communication to policyholders is one end, but there’s also communication up, right? Actuaries need to explain the results to senior management, to other divisions within an insurance company. We’ve seen out in the wild that doctors are using generative AI to help communicate with their patients more compassionately. And I think there’s probably an untapped opportunity there for actuaries to hone their communication and explanation of really technical results up the chain as well, to investors and to the board, and other stakeholders that actuaries need to communicate with.

Robert Eaton: Yeah, that's a terrific point. I think it's often overstated, you know, people tend to think actuaries may not have great communication skills, but I think we actually do have quite good communication skills. Corey, to your point, though, even with good communication skills, having an extra hand in talking through some of the implications of our work for senior management is going to go a long way toward adding clarity to the businesses we serve, so yeah, I really like that.

Hans Leida: Yeah, actuaries are often uniquely situated within insurance companies and other organizations because the nature of our work requires us to get information from and communicate with a lot of different areas within a company. I saw a colleague of mine once present at an industry meeting, and he’d analyzed email within an organization and tried to figure out what were the connections between people based on who emailed each other. And he demonstrated actuaries actually were among the most connected individuals within a lot of organizations that he studied. And I think that gives us a unique perspective and maybe a unique place in helping insurance organizations make the best use of these tools.

Also, you know, we have to, as you said, do the math, understand the quantitative side of things. We often are in the weeds with the code, and then we've got to go and explain it to people who don't speak that language. So that's maybe another way we can play a part here: we hopefully will understand something about what these tools are doing and their strengths and weaknesses, and maybe we can help colleagues in other parts of the organization understand what is and isn't possible with them when they try to use them in their work.

Robert Eaton: This has been a fascinating conversation. Thank you, Corey, Tom, and Hans for joining me. For those of you listening, to learn more about Milliman’s AI and insurance expertise, visit milliman.com.
