In our second Critical Point episode about artificial intelligence (AI) applications in insurance, we drill down into the topic of machine learning and particularly its evolving uses in healthcare. Milliman Principal and Consulting Actuary Robert Eaton leads a conversation with fellow data science leaders about the models they use, the challenges of data accessibility and quality, and working with regulators to ensure fairness. They also pick sides in the great debate of Team Stochastic Parrot versus Team Sparks AGI.
Transcript
Announcer: This podcast is intended solely for educational purposes and presents information of a general nature. It is not intended to guide or determine any specific individual situation, and persons should consult qualified professionals before taking specific action. The views expressed in this podcast are those of the speakers and not those of Milliman.
Robert Eaton: Hello and welcome to Critical Point brought to you by Milliman. I'm Robert Eaton. I'm a principal and consulting actuary working in the life and long-term care space, and I’ll be your host today. In this episode of Critical Point, we're going to continue our discussion on artificial intelligence and insurance, and today we're getting into the weeds. We're looking at all the forms of machine learning used in the insurance industry. So with me today are three of Milliman's top experts on this subject, all focused on the healthcare space.
I'd like to introduce you first to Mike Niemerg. Mike is the director of data science for our IntelliScript Practice. Mike, welcome.
Michael Niemerg: Hello.
Robert Eaton: Next we have Joe Long. Joe Long is a consulting actuary and data scientist. Hi, Joe.
Joe Long: Hi. Great to be here.
Robert Eaton: Last but not least, we have Anne Lin. Anne is a pharmacy data science director. Hi, Anne.
Anne Lin: Hi. Glad to be here.
Unique challenges with machine learning in insurance
Robert Eaton: Thanks to all three of you for joining me today. So let's jump right in. I want to hear from you all on what's kind of unique to the insurance industry and what special challenges you see when doing machine learning, kind of specific to our business. Mike, I'd like to kind of toss it over to you for some thoughts.
Michael Niemerg: Yes. Thanks, Robert. So in the insurance industry—and more specifically in the area where I focus, which is primarily underwriting and pricing—I think one of the challenges that we see a lot of, that you don’t see in other areas, is the distribution of data. So when you think about healthcare data, and it’s particularly egregious in the stop-loss space, you’re going to have very, very large claims, but they’re also going to be very rare. The vast majority of people simply aren’t going to have claims in a given year.
So when you're going through the modeling process, you have to deal with the fact that you might see particular conditions or diagnoses that only impact one out of every 10,000, 100,000, 1 million people, et cetera, so they can end up being very, very rare. But when you do see an observation it's more or less all-important; it kind of masks everything else that's going on. So getting it right for that one-in-10-million, one-in-1-million event is going to be very important, and it's going to be crucial to actually coming up with the correct pricing model. To truly understand that risk, then, I need to understand both the frequency of events and their likely size.
And then I need to understand the impact that features such as rare diagnoses are going to have on my outcome and adjust for them in a sensible way. An example of this could be amyotrophic lateral sclerosis (ALS, known as Lou Gehrig’s disease), where I’m only going to see a small number of codes in a given year of people that are going to be diagnosed with this. I might not actually be able to develop a robust estimate of the cost just using statistics or machine learning, so I’m actually going to need to bring in clinical knowledge or actuarial knowledge to help supplement that within my model.
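To make that frequency-severity framing concrete, here is a minimal illustrative sketch in Python; the claim probability and lognormal severity parameters are hypothetical, chosen only to show how a rare, very large claim dominates the average cost and why a purely empirical estimate can be unstable:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy frequency-severity view of a skewed claims distribution:
# most members have no large claim, a rare few have very large ones.
n_members = 100_000
p_large_claim = 1 / 10_000                       # hypothetical annual probability of a catastrophic claim
severity_mean_log, severity_sd_log = 13.0, 1.0   # illustrative lognormal parameters

has_claim = rng.random(n_members) < p_large_claim
severity = rng.lognormal(severity_mean_log, severity_sd_log, size=n_members)
annual_cost = np.where(has_claim, severity, 0.0)

print(f"Members with a large claim:   {has_claim.sum()}")
print(f"Mean annual cost per member:  {annual_cost.mean():,.0f}")
print(f"99.99th percentile cost:      {np.quantile(annual_cost, 0.9999):,.0f}")
# Expected cost = frequency x mean severity; with so few observed events,
# the empirical mean is unstable, which is why clinical or actuarial judgment
# often supplements a purely statistical estimate for rare diagnoses.
```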
Joe Long: Yeah, and I have another thing to add there, specifically for the models that are used for long-term duration projections, like in life insurance or long-term care (LTC) insurance. A lot of times when we build models for reserving, we use some very simple, high-level variables based on age, gender, policy characteristics, and whatnot, and the models aren't really focused on getting the best predictive accuracy from all the different variables we could get in the universe. The main goal is to be able to project assumptions out 40 years. So each use case has different types of modeling techniques we'll use, or different variables. Now if you're switching toward more near-term things, like predicting healthcare costs in the next year, then you can start using additional variables that are better suited to near-term predictions rather than long-term ones.
Digitizing healthcare data: The impact on accessibility and quality
Robert Eaton: You all make some really great points. I want to kind of flip over to Anne. Anne, you and I spoke the other day about some of the challenges that you see in the insurance space, mainly on kind of the digitization that we've seen over the recent history and what challenges that brings to accessibility and to quality. Do you want to talk a little bit more about that data?
Anne Lin: Absolutely. Having spent close to a decade in the retail healthcare space, I can say this influx of data is coming from accelerated digitization, which really refers to the explosion of data from connected devices. So think of new technologies that are coming out, that are being deployed in your local businesses, your homes, your vehicles, as well as the smartwatches on our person. This all results in an avalanche of data that creates the potential for insurance carriers to understand their members more deeply, but it also highlights the need for the technology infrastructure to evolve just as quickly, so that it can become the standard approach for processing the large and complex data streams generated by all these real-time insurance products that are ultimately tied to an individual's behavior and activities.
Another point that you brought up was data accessibility. Currently it's not quite the norm, but there is often insufficient data organization and infrastructure to cope with getting all this data onto one interoperable platform. Usually we find legacy systems that are quite disparate, connecting data streams that come from different sources in different formats, and what this results in is data silos, which is another complication when trying to perform any efficient training and ensure usability of all these models and algorithms that we are trying to build on top of these datasets.
One thing I'd like to call out: When a model is trained with large amounts of data, there's a greater propensity for it to include noise and sparsity, which, if left unchecked, could lead to poor model performance, classification accuracy, predictions, et cetera. One way to address this would be to leverage ensemble-based techniques to reduce the spread or dispersion of the predictions. Alternatively, polishing techniques can also come in handy for improving classification accuracy.
And last but not least, absolutely be strict on conducting your robust checks during your data pre-processing to detect any anomalies and outliers before modeling.
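As one illustration of two of those points, the sketch below shows a simple interquartile-range outlier check before modeling and an ensemble of trees whose averaged prediction has less spread than any single tree; the dataset and column names are invented for demonstration:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor  # bagging of trees as one ensemble option

rng = np.random.default_rng(0)

# Hypothetical member-level dataset; all columns are made up.
df = pd.DataFrame({
    "age": rng.integers(20, 80, 500),
    "rx_count": rng.poisson(3, 500),
    "annual_cost": rng.lognormal(8, 1, 500),
})

# 1) Pre-processing check: flag extreme values with an IQR rule before modeling.
q1, q3 = df["annual_cost"].quantile([0.25, 0.75])
iqr = q3 - q1
outliers = df[(df["annual_cost"] < q1 - 3 * iqr) | (df["annual_cost"] > q3 + 3 * iqr)]
print(f"Flagged {len(outliers)} records for manual review")

# 2) Ensemble of trees: averaging many trees damps the spread of predictions
#    relative to any single tree.
X, y = df[["age", "rx_count"]].to_numpy(), df["annual_cost"].to_numpy()
forest = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
single_tree_preds = [tree.predict(X[:1])[0] for tree in forest.estimators_]
print(f"Spread across individual trees: {np.std(single_tree_preds):,.0f}")
print(f"Ensemble (averaged) prediction: {forest.predict(X[:1])[0]:,.0f}")
```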
Robert Eaton: It's a really good point. When you mention data accessibility, Anne, something that comes to my mind is that a lot of our clients at Milliman, within the insurance industry, maybe didn't start out as tech companies where collecting data in order to create better predictions was their number one responsibility. And what we're talking about here with machine learning is helping to create models and, probably, predictions somewhere. And so as a result of patching a lot of technology and data updates onto legacy systems, and even legacy businesses, for many years, we end up, like you said, with data in silos here and there, so the accessibility really suffers. Maybe if you compare that to a company that started in the last five years, you probably have a different situation. They may be kind of data first or data forward. Does any of that ring true for you?
Anne Lin: Oh, absolutely. Especially your former point, which is organizations that have legacy systems and then kind of have tried to make the migration over to newer, more up-to-date systems, it doesn't always necessarily mean that all the data and the way the data is being captured gets migrated at the same time. So what you're kind of left with is this hybrid situation where you're dealing with data from the legacy systems that probably has slightly different definitions and categorization than data that's been captured on, you know, the newer systems as well as the processing capabilities. So definitely, definitely a current problem today.
The influence of regulators on machine learning in insurance
Robert Eaton: Yeah. I'm sure that many folks in insurance who may be listening to this podcast, whether in health or life or annuities or pension, are probably recognizing some of these exact same issues.
Something else that I think is probably somewhat unique to our situation, at least in one regard, would be the presence of regulation and how regulation bears on the sort of work that we can do. I know, Joe, I think of our work in the long-term care space, how regulators are super keen to understand a lot of the inputs and the uses of the data we have in our predictive models, in our machine learning uses. And I wonder if anybody would like to chat about how regulation kind of influences the work that you do, in particular in machine learning?
Anne Lin: Yeah, I could go first. Regulators usually, for review, require a transparent method to be able to review these models, perform quality assurance, and ensure that these models are reproducible, that they are valid and can be deployed in a safe manner, and that those results are consistent every time. So one example is, you know, determining the traceability of a score, which is very similar to the rating factor derivations used today with your regression-based coefficients. Regulators also need to be able to verify that, you know, the data usage is appropriate and data is being captured appropriately. There's so much protected health information (PHI) out there today, so being able to assess that the data being input into these models, or any combinations of it, stays within the approved bounds really matters.
Bias, health equity, and machine learning
Michael Niemerg: So, yeah, I've been thinking a lot about regulation in the last few years, and my work spans the health and life insurance industries, and definitely the issue of fairness has been coming up quite consistently over the last few years. There's been some regulatory movement there but not tons of clarity. So for our part, a lot of the analysis and work that has been done is really to figure out for the models that we deploy, how should we measure and track fairness measures; how do we determine if our model is biased against any particular demographic subset; and then sharing that sort of testing with our clients and supporting them in their analysis.
Joe Long: And then I'd like to add some additional color to this. The insurance industry to me is very interesting in that you can develop a model for one specific use case and then, once people hear about it, they might want to try to apply it to another use case. And that's where things can get risky when it comes to health equity issues and being fair to different demographics. In a lot of the work that we do, we'll build a model for a specific purpose. Let's think about a care management outreach program: we'll build that model to be used for identifying someone who might be at risk of having a claim in the future. Before we release that model, we'll then go through and test it to make sure it's fair to different subgroups of the population, to make sure no one group is more disadvantaged than another by the output of the model. And that's one thing that's really unique about the insurance industry. All the different models we develop can be used for a lot of different use cases. They could be used for pricing. They could be used for wellness outreach. They could potentially be used for marketing other insurance products. And each time you build the model, you have to have that in consideration and have caveats about its intended use and what the model shouldn't be used for. So that's very different from, say, just trying to sell products on an Amazon website, where it really doesn't impact the individual in the same way if they don't have an equal chance of getting that care or support.
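For a flavor of what such a fairness check might look like in code, here is a small hypothetical sketch; the scores, groups, and threshold are invented, and comparing selection rates and scores across subgroups is only one of many possible fairness diagnostics:

```python
import pandas as pd

# Hypothetical scored population: model_score is a care-management risk score,
# and group is a demographic attribute used only for fairness testing afterward.
scored = pd.DataFrame({
    "group":       ["A", "A", "A", "A", "B", "B", "B", "B"],
    "model_score": [0.82, 0.10, 0.35, 0.55, 0.78, 0.12, 0.40, 0.09],
    "had_claim":   [1, 0, 0, 1, 1, 0, 1, 0],   # observed outcome
})

threshold = 0.5  # members above this score receive outreach
scored["selected"] = scored["model_score"] >= threshold

# Compare selection rates (a demographic-parity style check) and how well the
# score lines up with observed claims within each group.
summary = scored.groupby("group").agg(
    selection_rate=("selected", "mean"),
    mean_score=("model_score", "mean"),
    claim_rate=("had_claim", "mean"),
)
print(summary)
# Large gaps between groups on measures like these would prompt deeper review
# before the model is released.
```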
Robert Eaton: It's a great point, right? If I'm listening to the Beatles on Spotify and hit "play me songs like this for the next hour," and they kind of get it wrong and play Talking Heads instead of the Rolling Stones, I'm not going to lose my mind. But, you know, if you're making a recommendation on a disease management program to somebody, we have a kind of moral or ethical sense that that might have a stronger implication, right, for our fellow person. So I think it is a sort of responsibility that we have within our industry to make sure we have a lot of these quality risk management measures in place.
Types of machine learning used in healthcare
Robert Eaton: So let's take the topic of machine learning, which is our specific application of what people have been calling artificial intelligence, a great kind of suitcase word. Within machine learning even, let me double-click on this for just a minute and ask you all about some of your particular algorithms and methods. So, what comes up in your specific lines of work? Can you describe just briefly for our audience who may want to know just a little bit about what type of machine learning you're using? You know, how you use that in your work? Maybe I'll turn it over to Mike and then to Anne.
Michael Niemerg: Yeah, so I think, for the most part, I use a lot of methods that most people would be pretty familiar with from other industries as well. Some of the more interesting stuff we get into, since we do some work in life insurance, is survival analysis, which is also particularly relevant in clinical trials and the like. For the healthcare space, an issue that we've been diving into more recently is distributional analysis, in the sense that, instead of just trying to predict what someone's average cost is going to be, can we predict an entire distribution of costs for an individual or group of individuals? And then another thing that sets some of the work I do apart from maybe more traditional machine learning is some of the model performance metrics we look at, due to the fact that the models I build are often applied on top of either life insurance or health insurance underwriting or actuarial models. So you're kind of putting a model on top of another model, and that has some particular considerations. So we also measure our results slightly differently, with some metrics that maybe aren't quite as common, like Gini values and Lorenz curves. But we also do a lot of work to take our model's predictions, and the predictive power associated with them, and try to actually convert that into financial returns on investment (ROIs) that actuaries and underwriters can understand.
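As a rough illustration of those ranking-based metrics, this sketch computes a Lorenz-style concentration curve and a Gini value from hypothetical actual costs and model predictions; the numbers are invented, and real implementations vary in their exact conventions:

```python
import numpy as np

def lorenz_gini(actual_cost, predicted_risk):
    """Sort members from lowest to highest predicted risk and track the cumulative
    share of actual cost captured; the Gini value summarizes how strongly the
    model concentrates cost among its highest-risk predictions."""
    order = np.argsort(predicted_risk)
    cum_cost = np.cumsum(np.asarray(actual_cost, dtype=float)[order])
    cum_share = np.insert(cum_cost / cum_cost[-1], 0, 0.0)
    cum_pop = np.linspace(0.0, 1.0, len(cum_share))
    area_under_curve = np.sum((cum_share[1:] + cum_share[:-1]) / 2 * np.diff(cum_pop))
    gini = 1.0 - 2.0 * area_under_curve  # near 0 = no lift; closer to 1 = strong ranking
    return cum_pop, cum_share, gini

actual    = np.array([100, 0, 50, 8000, 20, 300])      # hypothetical annual costs
predicted = np.array([0.2, 0.1, 0.3, 0.9, 0.15, 0.5])  # hypothetical model scores
_, _, gini = lorenz_gini(actual, predicted)
print(f"Gini: {gini:.2f}")
```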
Anne Lin: I would echo Mike. I find Gini curves in particular are really great in terms of interpretability and explainability for those folks who aren't really well versed in machine learning language and interpreting coefficients, so to speak. Coming from more of the healthcare space, I've seen lots of probabilistic risk stratification models that are really used to assess a member's risk, more on the clinical side. On the other hand, I'm also seeing random forests come up, especially tailored for insurance in areas like insolvency prediction and assessing the financial viability of contracts.
Machine learning models used in long-term care
Robert Eaton: Thank you both for that. I'm hearing that some of the uses are kind of varied across life and health, and Joe, maybe can you talk a little bit specifically about some of what we do and some of the work that you've led in our long-term care advanced risk analytics practice and maybe some of the models that we use most there?
Joe Long: Yeah. So, for the models that we typically use for long-term care modeling, Michael mentioned some things like survival models. Those are used a lot because we're dealing with the probability that people will go off claim or the probability that individuals will die. We also look into building models for predicting whether someone's going to have a claim in the next year or not. Well, there are a lot of different models you can dive into. The ones that we've found to work the best are tree-based models, or gradient-boosting machines, which are an ensemble of decision trees that can automatically capture interactions between the different variables in the model and capture different nonlinearities to really help get accurate predictions. We've investigated using neural nets on these types of problems, and also on problems in the healthcare space, and typically, with the type of tabular data that we're feeding into the models, they usually don't have better performance than a gradient-boosting machine (GBM); sometimes GBMs even have better performance than the neural network models. So that's one thing you tend to see in the insurance industry: when you're using tabular data, gradient-boosting machines or generalized linear models (GLMs) are used a lot, whereas in other areas that have more unstructured data, that's when you'll see neural net or deep learning models used a lot more.
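As a small, self-contained illustration of fitting a gradient-boosting machine on tabular data, here is a sketch using scikit-learn on an entirely synthetic dataset; the feature names and the relationship to the target are made up for demonstration and do not represent any actual production model:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical tabular policyholder data; the columns are illustrative only.
df = pd.DataFrame({
    "attained_age": rng.integers(60, 95, n),
    "policy_duration": rng.integers(1, 30, n),
    "benefit_period_yrs": rng.choice([2, 3, 4, 5], n),
    "female": rng.integers(0, 2, n),
})
# Synthetic target: claim probability rises with age (purely for demonstration).
p = 1 / (1 + np.exp(-(df["attained_age"] - 85) / 5))
df["claim_next_year"] = rng.random(n) < p

X, y = df.drop(columns="claim_next_year"), df["claim_next_year"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# GBM: an ensemble of shallow trees that picks up interactions and
# nonlinearities automatically, with no manual feature crosses.
gbm = GradientBoostingClassifier(n_estimators=200, max_depth=3, learning_rate=0.05)
gbm.fit(X_tr, y_tr)
print("Holdout AUC:", round(roc_auc_score(y_te, gbm.predict_proba(X_te)[:, 1]), 3))
```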
Data quality and other hurdles in developing insurance models
Robert Eaton: Yeah, I think it's interesting to talk about some of the details here to give some of our audience a sense for what's under the hood. I'd like to maybe talk a little more high-level. What sort of hurdles do we face when we use any of these models, presumably somebody coming out of a college training with Python and a list of packages couldn't just whip up and prepare these, and it's taken kind of a lot of work over many years to get our models to where they are. Joe, what are some of the hurdles that you currently face today in our long-term care and other kind of health work?
Joe Long: Yeah. So, the biggest hurdle in developing models is data quality and making sure you're using the data correctly. Anyone freshly out of school can easily run some automated machine learning (ML) on a dataset and get out some results. But the big part is making sure the data that you're using to train the model is really well cleaned, staged, and understood. A lot of times people come in and don't totally understand how the data was collected or the limitations of the data. So it's really important to understand any issues that could have happened in data collection. That's one of the major hurdles: really understanding how your data was collected and populated, what values are actually errors in the data, different things like that.
One of the other big challenges is, for a lot of these models that we build, we take industry data from a lot of different insurance carriers and then aggregate them together to train one model on all these different data sources. And across carriers things might be coded differently. So you have to go through and, for each different carrier, you might transform a field a different way to make it normalized across all the different carriers before actually combining the data and training the models. So that's one of the major hurdles, is just making sure that you understand the differences between all the different data sources you're collecting, and figuring out a way to make them homogeneous before you train a model.
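Here is a tiny pandas sketch of what that per-carrier harmonization can look like; the carriers, field names, and code values are all hypothetical:

```python
import pandas as pd

# Two hypothetical carriers coding the same fields differently.
carrier_a = pd.DataFrame({"policy_id": [1, 2], "sex": ["M", "F"], "iss_age": [62, 70]})
carrier_b = pd.DataFrame({"policy_id": [3, 4], "gender": [1, 2], "issue_age": [55, 81]})

# Per-carrier transformations that map each source onto one common schema.
a_clean = carrier_a.rename(columns={"sex": "gender", "iss_age": "issue_age"})
a_clean["gender"] = a_clean["gender"].map({"M": "male", "F": "female"})

b_clean = carrier_b.copy()
b_clean["gender"] = b_clean["gender"].map({1: "male", 2: "female"})

# Only after the fields are homogeneous do we combine the data and train.
combined = pd.concat([a_clean.assign(carrier="A"), b_clean.assign(carrier="B")],
                     ignore_index=True)
print(combined)
```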
Robert Eaton: It's a terrific point. Oftentimes when a new dataset shows up in front of me (and, Joe, sometimes it's a dataset that you've helped cultivate, so it looks really nice), I ask myself, "What do I think this dataset represents? How do I think all of this came to be and appear before me? What are the different sources that produced each element of data?" Because there are often a variety of sources, and by the time it got to me, you know, 12 people have touched it in various ways; either the data itself is a segment of a larger dataset, or certain fields are missing, or certain fields were entered manually. So, you know, to your point, that's so much more than half the battle, getting that clean data. It's also a point that Anne and Mike have reiterated on this podcast already. Mike, Anne, any other comments on, you know, what kind of hurdles you face in your work today that we haven't already discussed?
Anne Lin: Yeah, I mean, echoing Joe's points, I think having that subject matter expertise, in terms of being able to interpret the data that you're using, really goes a long way. One of the first flags should come up when you derive some metrics from the data you're using and you already have, from experience, an understanding of what those metrics should turn out to be. That really is one of the first checks that anyone should make before starting any analyses or developing any models.
One other hurdle I've found is that, considering how large and complex models can grow today, there's real variability in how different data definitions, features, and nuances get interpreted, and that shows up in the data and the results. Most of the time, just because there's a difference, it doesn't mean those models are wrong. It really just highlights the need for standardization of these definitions, features, and nuances across the board, to ensure that these models are valid and that they can be deployed in a reproducible manner outside of the environment they were originally developed in.
What should employees expect when they begin using machine learning and predictive modeling?
Robert Eaton: So, you know, Mike, kind of digging into the idea we discussed a little earlier, you know, maybe for people just getting into this field, what sort of challenges might they face in machine learning and predictive modeling, maybe that you've seen in your work at IntelliScript?
Michael Niemerg: Yeah. I think with respect to working in the insurance industry, like all industries, there's a lot of jargon, but especially working in the actuarial and underwriting space, there is a lot of pure business knowledge. So even just things like calculating actuarial per member per month (PMPM) metrics and unit costs, and thinking in the terms an actuary would, in terms of frequency and severity, don't always necessarily come naturally to data scientists. Or thinking in terms of things like, oh, you know, this claim is not complete, or this block of claims is not complete, so I need to make sure I add incurred but not reported (IBNR) reserves so that the results make sense. Those are the sorts of things that actuaries, you know, all intuitively understand, but to data scientists they're really foreign concepts.
And even thinking about model evaluation and how you evaluate models. A lot of grad school students will come out of a data science program and they're kind of used to like minimizing the mean absolute error (MAE). Well, if you're working on a stop-loss model and your goal is to minimize MAE, your model is probably going to be junk, just to be honest, because that's not a very good metric to be using. So you need to be thinking more holistically and thinking about a whole suite of metrics that you should be looking at to validate your models in terms of risk stratification and ranking.
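For data scientists new to those actuarial concepts, here is a toy sketch of grossing up incomplete claims with completion factors (the IBNR piece) and expressing the result on a per-member-per-month basis; every figure below is invented for illustration:

```python
import pandas as pd

# Hypothetical incurred claims by month, as of an early valuation date, before all
# claims have been reported (amounts and completion factors are made up).
claims = pd.DataFrame({
    "incurral_month": ["2023-10", "2023-11", "2023-12"],
    "paid_to_date":   [480_000, 450_000, 300_000],
    "member_months":  [10_000, 10_050, 10_100],
    # Completion factor: estimated share of ultimate claims reported so far.
    "completion_factor": [0.98, 0.90, 0.60],
})

# Gross up paid claims to an ultimate (incurred) basis, i.e., add IBNR,
# then express cost on a per-member-per-month (PMPM) basis.
claims["ultimate"] = claims["paid_to_date"] / claims["completion_factor"]
claims["ibnr"] = claims["ultimate"] - claims["paid_to_date"]
claims["pmpm"] = claims["ultimate"] / claims["member_months"]
print(claims[["incurral_month", "ibnr", "pmpm"]].round(2))
```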
Robert Eaton: It's a great point that the mindset we've developed as kind of seasoned modelers, or I'll say the mindset you all have developed as seasoned modelers, is not one that is easy to grasp on day one. It's maybe an actuarial mindset, or kind of an insurance-based mindset. That's a terrific point, Mike.
How will machine learning change insurance careers in the next five years?
Robert Eaton: Maybe let's move on to the last topic I'd like to cover, which is, you know, your jobs. All three of you, your jobs have changed over the last five or 10 years, not just as you've become a more mature contributor to your company and your clients, but also because data size and compute and data availability have all kind of accelerated and evolved; Anne, to your point on the kind of acceleration of the digitization. But talk to me about the next, like, three to five years. Where do you think your job description might lead? What are you going to be doing in the near future? I mean, I ask this in the wake of the generative AI language model revolution that we've seen through OpenAI and all of the kind of hubbub that's come across with the big AI companies. I'm going to start with Mike Niemerg. Mike?
Michael Niemerg: Yeah. So, some of the main themes that we're grappling with recently, and have actually probably been grappling with for the last few years: one is interpretability. People that use models just want as much information and context as you can give them. Why did the model come to this conclusion? What facts was it looking at? What facts did it not have available to it? You know, should I go out and seek additional information, or should I just trust the model as is?
Fairness has been a huge topic. I think we touched upon that a little bit earlier, but that's something that I don't think is going away anytime soon. I think all consultants, we're going to have to be advising our clients on topics of fairness. What are the sorts of things that they should be looking at? How should they be thinking about this issue?
And then lastly, large language models (LLMs). Yeah, our industry, we're going to be affected just like everyone else. LLMs are going to be in all sorts of places in the coming years. So, in my particular case, we're really interested in looking at natural language processing beyond just LLMs, but how do we apply that sort of technology to looking at unstructured data like electronic health records (EHRs) and interpreting that in terms of, how do we get all the relevant underwriting context out of the unstructured data? How do we get all the relevant fields out of it to either aid underwriters or as inputs into further predictive models?
Robert Eaton: That's terrific. Anne, tell me about your future work.
Anne Lin: Yeah, absolutely. Just tagging on to what Mike said, I think coupled with OpenAI's onslaught of ChatGPT, you're definitely going to be seeing a lot more of your neural language models. I think that's just the first step towards better understanding the meaning of products, the contracts and customers, and just getting that much better at offering more personalized as well as diversified customer offerings across the board from an insurance standpoint.
I would also like to touch on this from a processing perspective: we will definitely see a huge advantage in the claims management process, with all this increased use of machine learning toward automating the processing of large volumes of structured and unstructured data, as well as bridging them together. I want to say easily up to 30% to 50% of a knowledge worker's day is really spent searching for information required to complete a job. And I know, with ChatGPT, folks are using it to do their homework, to read contracts, et cetera. But I think that's going to start becoming more the norm across organizations, especially with classifying and extracting key insights from contracts; it really just increases an organization's efficiency across the board.
And, you know, touching on more innovative advances: part of what makes your neural language model so powerful is that it really isn't limited to analyzing digital data. When you combine it with optical character recognition (OCR), your natural language processing (NLP) and neural language models can be leveraged to really understand and digitize handwritten text. And so this really starts opening the door to ingesting anything from your medical notes to on-site accident reports, to be very easily incorporated into underwriting's calculation of risk, as well as its reliability. So, ultimately, things are about to get faster, more connected, and more personalized.
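As one concrete (and deliberately minimal) example of that OCR step, the sketch below uses the open-source pytesseract wrapper to pull text from a scanned page so it can feed a downstream NLP or language model pipeline; the file name is hypothetical and the Tesseract engine must be installed separately:

```python
# Minimal OCR-to-text sketch; requires the Tesseract engine and the
# pytesseract and Pillow packages. The file name is hypothetical.
from PIL import Image
import pytesseract

scanned_note = Image.open("accident_report_page1.png")
raw_text = pytesseract.image_to_string(scanned_note)

# The extracted text can then be passed to a downstream NLP or language model
# step to pull out structured fields for underwriting.
print(raw_text[:500])
```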
Robert Eaton: Yeah, I think that's a great point. I tend to think that the companies that can take the most advantage of the sorts of technology we have, to your point, Anne, like, how much of our historical kind of company documents can we harvest and use to search? I love that stat about the amount of time the knowledge workers spend kind of collecting data. It makes total sense, right? I mean, if you all asked about the amount of time you spend kind of collecting and cleaning data, it's a large part of the time to produce a model, and it's the same way with our roles in the knowledge economy. I really love that summary.
Let's toss it over to Joe and then I do have kind of a closing question for you all. But Joe, tell me what you're going to be doing in a few years that's maybe a little different than now.
Joe Long: Yeah, I definitely agree with Michael and Anne that large language models are going to help us iterate and move faster on ideas, help us do less coding to get our ideas to fruition faster. One thing I think there's going to be a lot of focus on, which kind of takes a backseat right now to large language models, is the use of generative AI techniques to develop synthetic data models that can generate synthetic data, like medical claim records or LTC experience files. Those models are becoming very good at being trained on larger datasets and creating synthetic data records that look very similar to the real data that the models were trained on. These can be used to help share data across different organizations. You can train a synthetic data model on your internal company data that has PHI in it and then generate a completely new synthetic sample that still has the same underlying distributions as the real data but isn't actually collected from any real individual. So then you can easily share that without having to worry about sharing PHI and whatnot.
One other thing. I talked about how one of the biggest challenges we have is just cleaning up the data and making sure the different data coming in doesn't have errors, missing fields, or other issues. That takes a lot of human time, going through and looking at the data, doing different summaries and aggregations, to see if there's anything weird going on, if any data fields should be excluded, and whatnot. If we're able to train synthetic data models on really good quality data that we've already cleaned up, we should be able to take those models and use them as a way to identify, when new data comes in, whether the records might have errors in them or might have been created by a mistake in the system that produced them. And I think synthetic data-generation techniques can be used to help explore and do a pre-cleaning of data, to exclude records that might be bad or made in error.
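To ground the idea, here is a deliberately naive synthetic-data sketch: it resamples each column's empirical distribution independently, so the marginals match the real data but no record is copied from a real individual. The dataset and fields are invented, and production approaches (GANs, copula models, and the like) also preserve the correlations between fields, which this sketch does not:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Hypothetical "real" LTC experience records containing sensitive detail.
real = pd.DataFrame({
    "issue_age": rng.integers(55, 80, 1_000),
    "monthly_benefit": rng.choice([3_000, 4_500, 6_000], 1_000),
    "claim_incidence": (rng.random(1_000) < 0.03).astype(int),
})

# Naive generator: independently bootstrap each column's empirical distribution.
synthetic = pd.DataFrame({
    col: rng.choice(real[col].to_numpy(), size=len(real), replace=True)
    for col in real.columns
})

# Marginal summaries line up, even though no synthetic row is a real member.
print(real.describe().loc[["mean"]])
print(synthetic.describe().loc[["mean"]])
```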
Robert Eaton: I love that thought. I was just getting introduced to that concept of synthetic data when someone described to me, and this was an introduction for me, how they used a generative adversarial network, where two networks kind of fight with each other to create data that's representative, as you said, but maybe doesn't communicate PHI in the same way that the true data source would. I think there's really a whole lot of opportunity there, Joe, so thank you for bringing that up.
Team Stochastic Parrot or Team Sparks AGI?
Robert Eaton: OK, so I want to close out this really great discussion. I can't thank you all enough. One of the questions that has been arising on the various chat boards, about ChatGPT and all the language models, like the Large Language Model Meta Artificial Intelligence (LLaMA), whose version 2 was released publicly for open use, is whether you're on "Team Stochastic Parrot" or "Team Sparks of Artificial General Intelligence (AGI)": there tends to be a dichotomy over whether you think these things are going to lead to real artificial general intelligence or whether you think they're just kind of like dumb stochastic parrots. And I'm going to go around the horn and see where you land on this. Mike Niemerg, what do you think? What's your fear-to-excitement ratio about this stuff?
Michael Niemerg: I think my fear ratio is pretty low. My excitement ratio is pretty high. I've gotten to the point where I don't make predictions about artificial intelligence stuff anymore because no matter what you say you're going to be wrong. [laughs]
Robert Eaton: [laughs] I accept. So maybe I could be a tiebreaker if Joe and Anne disagree. Joe, what do you think?
Joe Long: Yeah. And I guess maybe it doesn't get totally to general artificial intelligence. But one thing that's really interesting is that there have been some researchers who trained large language models on people's brain waves while they read a book or listened to a podcast, and then they go and have those people read a book or listen to a podcast, and the model can transcribe what they're thinking. So I think that's going to be a very interesting advancement, and I'm curious to see how it goes; it might get toward that general artificial intelligence.
Robert Eaton: Well, that's bananas. Anne, what about you? Where do you land on this?
Anne Lin: I think I might tip the scales more to Mike's side. You know, fear, low; excitement, high. This is coming from the sense that studies have shown these models have gotten incredibly comprehensive, incredibly smart, but they still quite lack common sense or logic, for lack of a better word. So I think there's definitely much more learning to do. And until these models can start capturing the kind of information and data that human beings can, in terms of visual representation and forming those ultimately higher derivations, I think we still have a ways to go.
Robert Eaton: I'm also very low fear, very high excitement, so maybe that puts me more on Team Stochastic Parrot. But I can't get over the fact that these general language models we play with today will, with some further prompting, answer questions that I really would have put way outside any kind of general capabilities. So, I'm very high on excitement.
So that's going to wrap up our podcast today. I really want to give a great big thanks to Anne, to Joe, to Mike—that's Anne Lin, Joe Long, and Mike Niemerg—for joining me today. You can learn more about Milliman's insurtech expertise, machine learning, and predictive modeling and some of the topics we discussed today at Milliman.com.