Actuaries in the Age of AI: A Discussion with Josh Myers
Episode 1 • 13th February 2025 • Almost Nowhere • The CAS Institute
Duration: 00:45:02


Shownotes

In this inaugural episode of the Almost Nowhere podcast, hosts Alicia Burke and Max Martinelli discuss the origins and goals of the AI Fast Track initiative, aimed at bridging the gap between AI advancements and the actuarial community. They are joined by Josh Myers, a key contributor to the program, who shares insights on course design, the importance of discussion boards for community engagement, and the evolving role of data in actuarial science. The conversation also explores the potential for personalization in insurance, international perspectives on rate making, and the future of predictive modeling. The episode concludes with advice for data professionals looking to enter the insurance industry and book recommendations from Josh.

Transcripts

Alicia Burke:

Hey everyone, welcome to Almost Nowhere. This is a podcast dedicated to everyone in the P&C insurance industry and a bit beyond. I'm your host, Alicia Burke. I am the director of the CAS Institute, and I'm joined by my co-host Max Martinelli, actuarial data scientist at Akur8. So Max, I guess we'll get started with just a little bit of history of how we started working together on the AI Fast Track, which is one of the things we'll be...

focusing on today and also how this podcast came about and this concept of almost nowhere.

Max:

Yeah, I mean, I think we met shortly after, maybe it was the spring meeting, where I first gave a presentation on artificial intelligence. And the goal was kind of to tell actuaries that you're actually using AI already, and despite all the hype around it, there's actually a lot of substance here and you know more about it than you realize. People really enjoyed that presentation, but they kind of walked away with a bit of anxiety.

And it was interesting because our friends over at the CAS linked me up with you, and you were kind of working on AI initiatives for the members. And your research was kind of showing people feeling that same way, like they were falling a bit behind. So that was what spawned the AI Fast Track. And that was a great showing, and we're going to be doing more of those now just due to the popularity. But it was interesting, just based on all the discussion, this feeling of falling behind even though we see all this opportunity.

And people wanted the conversation to keep on going. So it felt very natural to find a new mode. And I think the podcast is a bit experimental, but we hope that we can kind of keep that conversation going. And the name for Almost Nowhere, I mean, for the nerdy math folks out there, you might remember this from Real Analysis where, you know, something has like a measure of zero. But I just love the title because it kind of taps into that idea of like, there's this road to infinity. There's so much stuff we can do, but it feels like, you know, we're almost nowhere in terms of progress.

So we're hoping that we don't make folks too anxious. We resolve that anxiety, but we tap into that excitement around this.

Alicia Burke:

Yeah, definitely. I'm also just really excited about this opportunity to maybe share some things that are going on in the insurance industry that could be used outside by others and things going on outside that could be used by actuaries or data scientists who work in insurance. So I think it's like a really great opportunity maybe to have some conversations and build some bridges that are just helpful for both sides.

We have a very special guest joining us today. A very special first.

Max:

I know, a very special first guest. So yeah, I think

Josh Myers, who is my colleague at Akur8 and was actually one of the course designers and the lead authors, felt like a very natural person to bring onto this, because he's going to be part of the subsequent sessions. But also a lot of people really enjoyed his content. I think they really appreciated the tying of the use cases to Josh's content. He's just got such a brilliant mind,

really creative guy. Josh is a fellow of the Casualty Actuarial Society. He's been in property and casualty for a number of years. He has a master's in statistics. He was at Hartford, now he's at Akur8. So pleasure to have him as the first guest.

Alicia Burke:

Thank you for coming, Josh.

Josh:

Yeah, thanks for having me.

Alicia Burke:

I do want to say, when I was looking at the feedback from the AI Fast Track, there was positive feedback all around, but definitely a lot of comments calling out Josh specifically. People really loved the segments you did, commented on how easy they were to understand, and said there were a lot of concepts that they hadn't really thought of before.

Josh:

I think it was a fun program. I think one of the best parts, and we'll get into this a little bit later, was that discussion board, being able to see all that feedback and really build kind of a community through the Fast Track.

Max:

Well, yeah, Josh, maybe as a course designer, and I do want to get to the discussion board, but what were your thoughts going into it? Because I think we were kind of shooting blind, right? We didn't know exactly what was going to come of this, or whether we were on the right course in terms of how technical it was. What was going on when you were planning this, and how did it meet your expectations?

Josh:

I think one of the things that was hard about the course design was that it was a fast track, but AI is such a big field. So you covered search, I covered reinforcement learning and rules-based AI, and we had a course on deep learning and large language models. So we covered a wide range of topics, and there were quite a few things we were thinking about behind the scenes putting that together. First, I think, was what's the actual agenda, or maybe the syllabus, going to be for the fast track.

Are we going to focus simply on the hype of large language models and talk about that for five sessions? And where we landed was we're going to maybe take more of a textbook approach and give a wide overview of AI, talk about different types of AI, talk about the history of AI, while seeing how it can be applicable to the work we're doing. So I think that was the first challenge was, okay, what is the syllabus going to be? And I think the second challenge was probably figuring out

Okay, how can we talk about AI in five hours? Behind me, I actually have one of my deep learning textbooks from college, and we had a one-hour course covering what was a two-semester class. So you can see, I think that was one of the hard parts. We're actuaries. I love diving into the details. I love knowing how the math works. And sometimes I see the application only after I see the math. So there's the balance of how much do we get into the details versus how much do we focus on what is actually applicable today.

Max:

You know, Alicia kind of alluded to your use cases, where we kind of had a novel connection between reinforcement learning and rules-based AI, but we gave people these real use cases that they could walk away with. And it opened up combinations with other analyses, like model monitoring. That really excited people. Do you have any thoughts on that? And I know that we're adding a session to the subsequent runs where we kind of go more into use cases.

Josh:

Yeah, so just to dive a little bit more: one of the sessions was on rules-based AI and reinforcement learning. Rules-based AI is thinking about how there are experts in a lot of different fields, and those experts can, as we call it, engineer their knowledge into intelligent agents. So these can be systems that help doctors make decisions, systems that help people approve loans. We also talked about reinforcement learning, where instead of that expert knowledge engineering, we have agents that learn on their own. One of the big discussions we had is, especially with that rules-based AI, when we think about us as actuaries, we really are experts in our field. A lot of us have passed our exams, with thousands of hours of study going into actuarial science; we have years of experience, and we're uniquely positioned because we really understand the math and the business.

Getting into some of these use cases, think about how we can program our own rules-based AI, where we can start monitoring our results. So if we have models, we can have these if-then statements: if our models start to deviate when we refit them on new data, let's have an AI system trigger a notice that we should maybe focus on our models. Or if our incurred losses are coming in higher than expected, we can have a model

tell us, hey, we should start to examine our trend assumptions. So there's a bunch of different use cases where we as actuaries can build these systems to really help monitor our results.
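The if-then monitoring Josh describes can be sketched in a few lines of Python. This is a minimal illustration rather than anything from the course; the metric names, thresholds, and numbers are all invented assumptions:

```python
# Illustrative rules-based monitoring: compare refit model coefficients
# and actual-vs-expected incurred losses against simple tolerance rules.

def check_model_drift(old_coefs, new_coefs, tol=0.15):
    """Flag any coefficient whose relative change on refit exceeds tol."""
    alerts = []
    for name, old in old_coefs.items():
        new = new_coefs.get(name, 0.0)
        if old != 0 and abs(new - old) / abs(old) > tol:
            alerts.append(f"coefficient '{name}' moved {old:.2f} -> {new:.2f}")
    return alerts

def check_incurred_losses(actual, expected, tol=0.10):
    """Flag when incurred losses run more than tol above expectation."""
    if actual > expected * (1 + tol):
        return [f"incurred {actual:,.0f} vs expected {expected:,.0f}: "
                "review trend assumptions"]
    return []

# Hypothetical quarterly run: both rules fire on these invented numbers.
alerts = (check_model_drift({"driver_age": -0.42, "territory_3": 0.18},
                            {"driver_age": -0.55, "territory_3": 0.19})
          + check_incurred_losses(actual=1_250_000, expected=1_100_000))
for alert in alerts:
    print("ALERT:", alert)
```

The same two checks could be scheduled after each refit, which gives the "trigger a notice" behavior instead of someone rereading exhibits by hand.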

Max:

Well, I want to double tap into the discussion board that Josh alluded to earlier. And Alicia, that was your idea originally, the discussion board. You know, I've taken a whole bunch of online courses, and they often have a discussion board, and it's usually a bit of a chore. It's a way to, you know, get participation in there. But we offered people the ability to just say, if you have nothing to say this week, don't say anything, but really to start that communication and propagate ideas. And I think, Josh, you and I were both really excited at the results of this. So what were some of your takeaways from the discussion board?

Josh:

Yeah, the first thing was I was really impressed with the level of engagement. I've been in similar classes where you have to do your three responses, you have to respond to four other people. It didn't really feel like that. I think one of the things that helped was that this was an optional course, and everyone who joined the Fast Track was there because they wanted to learn and they wanted to collaborate. So that was the first thing: I felt that sense of collaboration. And the other thing I really enjoyed was that during our presentations, we were actively asking for feedback. Max, a couple of times you said, I want people to disagree with me in the discussion board. And people did. They offered us feedback. They disagreed with what we said. And I think that's the type of stuff that really gives you the authentic conversations, when we're able to have conversations saying, that was a good point, but did you think about this? Did you think about that? I think a lot of that came through in the discussion board.

Alicia Burke:

I would add that it did turn out to be, I think, one of the most valued parts of the program, because there was a lot of feedback saying that networking component was probably the best thing. And we're keeping that discussion board open for five years. Because this is a topic that's really evolving, it gives people a place to share those ideas and brainstorm together. So I think it was really embraced, just because of the topic, and it lends itself to an ongoing discussion. This isn't like a class that you just go to and walk away from and don't need to revisit.

Max:

Yeah, for sure. And I mean, this industry is small but very compartmentalized. So giving people that discussion board, where ideas can kind of propagate across these traditional barriers of function or line of business, right? This is kind of a new concept that you typically would only see at the conferences, and this is now happening live, in real time. So really cool. You did bring up the disagreement part, so I want to double click into that a little bit.

The one part where we asked people to really disagree was with that bold claim that I made, you know, that actuaries will be managing hundreds of models in production. And this stirred up quite a bit of debate. I'd say a small group of folks agreed with me, but most people disagreed with me. And so, selfishly, this is why I wanted to add that presentation that you had on unlocking the full potential of models, because it really starts to open people's eyes to all the use cases that they can pursue here. So,

do you mind commenting on some of those exotic use cases? We'll have to let people join and see what they are, but any thoughts on the relevance of this and some of those really curious opportunities?

Josh:

So maybe taking a step back. Max is talking about a presentation I've given that we call unlocking the modeling mindset. The basis behind this presentation is that oftentimes, when we think about models as actuaries, we're thinking about frequency models, severity models, loss cost models, models that are being used to price policies. Don't get me wrong, I think those are really important models, because that's how we earn our living as actuaries, helping protect the top and bottom lines of insurers' books. But the premise of this presentation was that we have these great modeling skills, and more and more software and technologies are coming out that allow us to model very quickly. And so with these skills, with these technologies, what other problems can we be solving with models? I think a lot of times models can help us answer questions we're already answering, better and faster

than what we're doing currently. So, a couple of use cases, and maybe I might pull something up, but we talked about mix of business. Say, okay, classic actuarial analysis: you're looking at your base rate indication. Your base rate indication's higher than expected, and it's due to a negative written premium trend. How might you use a model to dig into that? In that presentation, I say, hey, you can use a model, and in an afternoon, you can really figure out the top three, four, or five drivers of your mix shift over time to explain that written premium trend. We're also thinking about use cases like premium leakage, whether that be through underwriters giving too many discounts, agents giving too many discounts, or maybe premium leakage through audits or misclassification of exposures for workers comp.

There's a lot of these questions that we can be answering with models that give us a multivariate view.
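One way to sketch that mix-shift analysis is to model a "recent period" indicator on policy attributes and rank the standardized coefficients: whichever attributes best separate recent from older policies are the drivers of the shift. Everything below, the simulated data, the column names, and the planted shift toward younger drivers, is an invented assumption, not content from the presentation:

```python
# Rank drivers of mix shift by fitting a logistic regression of
# "is this policy from the recent period?" on standardized attributes.
import numpy as np

rng = np.random.default_rng(0)
n = 4000
X = np.column_stack([
    rng.integers(0, 20, n),    # vehicle_age
    rng.integers(18, 80, n),   # driver_age
    rng.integers(1, 6, n),     # territory
]).astype(float)
names = ["vehicle_age", "driver_age", "territory"]

# Plant a real shift: recent policies skew toward younger drivers.
y = (rng.random(n) < 1 / (1 + np.exp(0.08 * (X[:, 1] - 45)))).astype(float)

Xs = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize for fair ranking
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):                        # plain gradient descent
    p = 1 / (1 + np.exp(-(Xs @ w + b)))
    w -= 0.5 * Xs.T @ (p - y) / n
    b -= 0.5 * (p - y).mean()

ranked = sorted(zip(names, np.abs(w)), key=lambda t: -t[1])
for name, score in ranked:                  # driver_age should rank first
    print(f"{name:12s} {score:.3f}")
```

The multivariate point is that one fit ranks all the candidate drivers at once, the "afternoon" version of chasing the written premium trend through cuts.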

Max:

Do you happen to have the snow tire discount one handy? Because I think that is maybe the most interesting one, and potentially the most useful one, where someone can say, I'm solving a business problem with this.

Josh:

Yeah, during the presentation, one of the things I talk about is that you can use a model to understand where discounts are given for snow tires. So this is my home state, Utah. I live in the Salt Lake City area. One of the things that's fun about Salt Lake City is I live within an hour of at least four different ski resorts, so we're getting a lot of snow up here. But the thing that's fun about our state, if anyone listening to this has been to Zion National Park, that's down in the southern part of the state. A lot of people are retiring down in the southwestern part because it's really temperate there; the winters don't get nearly as bad as they do up here. So kind of the premise behind this was, let's say you get to work one day and someone's saying, hey, I think someone's gaming the system, maybe they're using the snow tire discount wrong. Can you dig into this? And that's something that actually works really well within a modeling framework.

We're going to use a geospatial component to see how discounts are applied across the state. But the thing that's actually really cool about the models is that we can start controlling for other variables as well. So different types of vehicles, vehicle type or vehicle weight, maybe different agencies, right? A different mix of snow tires. And you can start controlling for those effects when you're doing this analysis. It's kind of all rolled up in one as you're building the model.
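A toy version of that "controlling for other variables" point: a raw comparison of discount rates by region can be confounded by, say, vehicle mix, and a small logistic model with both effects recovers each one separately. The regions, the truck share, and the effect sizes here are all simulated assumptions, not real results:

```python
# Raw vs. adjusted view of where a discount is applied: trucks cluster
# in the north AND get the discount more often, so raw regional rates
# overstate the pure region effect; the model separates the two.
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
north = rng.random(n) < 0.5                        # region indicator
truck = rng.random(n) < np.where(north, 0.6, 0.2)  # trucks cluster north
# True process: discount log-odds = -2.0 + 1.0*north + 1.5*truck
logit = -2.0 + 1.0 * north + 1.5 * truck
discount = rng.random(n) < 1 / (1 + np.exp(-logit))

print(f"raw rate north {discount[north].mean():.3f} "
      f"vs south {discount[~north].mean():.3f}")

# Logistic regression (intercept, north, truck) via gradient descent.
X = np.column_stack([np.ones(n), north, truck]).astype(float)
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.5 * X.T @ (p - discount) / n
print(f"adjusted log-odds: north {w[1]:.2f}, truck {w[2]:.2f}")
```

The fitted north and truck effects land near the planted 1.0 and 1.5, so the model attributes the right share of the raw regional gap to vehicle mix rather than to geography alone.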

Max:

Yeah, I mean, I love this. And I think of other variables like mileage, because you can envision someone living on a different side of the state, but they drive off into an area that has more snow, so snow tires are warranted, and we'd want to account for that when we're getting this multivariate view. Something like number of times renewed, right? Because if the discount starts trickling in later in someone's policy, maybe it's being used as kind of a way to save the policy from cancellation. So this is really cool. I think what is so great about it is that once you run this analysis once, if you kind of pair it with that rules-based AI, it can be running, you know, once a quarter, maybe that's too responsive, but that analysis can just notify you if something has changed materially.

You've given this presentation before, but I do have a question: why do you think this one was so popular? Because you and I have given a number of presentations, and we often look at how popular they are after the fact. We've had some very popular presentations, but this one was kind of a grand slam. I think it even outdid the demystifying artificial intelligence presentation that spawned this whole thing. Why do you think that is?

Josh:

I think that's a good question. I think we answer lots of questions as actuaries, especially when we think about maybe some of the smaller insurers. The actuary can be wearing their actuary hat, their data science hat, their data engineer hat. And oftentimes, even at large carriers, we get these questions on our desks. The things we talked about, like prior carrier analysis: what attributes are more likely to be coming from a prior carrier? Are you being adversely selected against? The snow tire discount: why is there premium leakage? Mix of business. There are a lot of these questions, and I want your thoughts as well, but I think one reason it resonated is because models are powerful. Maybe I'll pivot to pivot tables: we spend a lot of time in them. We have our data and we're saying, okay, if we're trying to answer

prior carrier analysis, okay, what type of attributes are we more likely to get from a prior carrier? We could spend hours in a pivot table saying, okay, what about this cut, what about this cut? Okay, this cut's cool, but it doesn't have much exposure. We're saying, oh, this cut's cool, but it's also correlated with this and this cut. And the model automates all of that. If you have the data for a pivot table, you have the data to build a model.

And the model is going to not only surface the variables that best explain whatever question you're trying to answer, it's also going to give you a rank-ordered list, through variable importance, of what's driving it. So I think maybe one reason it was impactful is because people saw: I can use models to answer these questions faster and more accurately.
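That "rank-ordered list through variable importance" can be illustrated with permutation importance: score a fitted model once, then shuffle one column at a time and measure how much accuracy drops. The tiny model, the simulated data, and the attribute names below are invented for the sketch:

```python
# Permutation importance: the accuracy drop when a column is shuffled
# is that column's importance. Only two of the three simulated
# attributes actually drive the target here.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 3))
names = ["prior_carrier_tenure", "vehicle_class", "credit_band"]
# Target depends strongly on column 0, weakly on column 1, not on 2.
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.normal(size=n) > 0).astype(int)

# Fit a logistic model with a few gradient steps (pure NumPy on purpose).
w = np.zeros(3)
for _ in range(300):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.5 * X.T @ (p - y) / n

def accuracy(features):
    return float(((features @ w > 0).astype(int) == y).mean())

base = accuracy(X)
importance = {}
for j, name in enumerate(names):
    shuffled = X.copy()
    shuffled[:, j] = rng.permutation(shuffled[:, j])  # break the signal
    importance[name] = base - accuracy(shuffled)      # accuracy drop

for name, drop in sorted(importance.items(), key=lambda t: -t[1]):
    print(f"{name:22s} {drop:+.3f}")
```

This is the model doing in one pass what the pivot-table loop does by hand: it surfaces which cuts matter, already adjusted for the other columns.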

Max:

I'm curious too, based on your research from talking with members, the discussion board, is that kind of what you think it is that people are starting to fully realize the multivariate nature of this? Any insights that you have from the members? Not to put you on the spot.

Alicia Burke:

Yeah.

No, I do think, especially for those who had been in the fast track and were in the discussion board, there was that realization that, yeah, there are a lot of things I already do that fall under this broader definition of AI. I think a lot of people are thinking of the buzz side of AI, the generative and the new emerging stuff, and forgetting the real scope of it and how much they've already been pioneers in using it. So yeah, that lack of confidence, I think I saw it shifting in the people who went through the program, like, I didn't realize I'm already using this. A real change in tone. But I was just thinking as you were...

talking about the pivot tables, like of course, if someone is feeling that lack of confidence, we have more AI fast track cohorts, it's a great place to be, but are there any other things that you might recommend, just like short-term things to help them feel a little bit more confident? What are some quick things they could do to feel like they're more aligned with all the buzz that they're hearing?

Josh:

I'll let you go first on that one Max.

Max:

Yeah, I mean, I don't want people to think that they have to start reading a ton of textbooks. And this is maybe one of the things that we observed: I would often show textbooks in the slides, and people started creating these reading lists, and that's great for enriching your knowledge. But I don't want people to think that's a requirement, that to start using this stuff you have to become an expert in the underlying fundamentals. Because an electrician doesn't need to know how electrons work to do their job very well and deliver a lot of value for their customers. So I can more easily tell folks, you don't have to go start reading a bunch of textbooks. I do think the sooner you can dive into the multivariate stuff and get your hands dirty, the better, because it's so easy to talk yourself into, I need to read more books before I can actually do this, or I need to learn how to code in pandas or data.table before I do that. It would be better to just start jumping in,

because it's the reps of actually using it that kind of gives you that knowledge where you can start tying it to those use cases.

Alicia Burke:

Josh, did you have anything to add?

Josh:

Yeah, maybe

just to jump on and add a little bit there: I think it's just like Max said, the reps, it's just putting time toward it. Oftentimes we're really busy, but you say, for the last two hours on Friday, I want to think about what questions I'm facing, and all the stuff I'm hearing about, and just start trying to approach it. I think giving yourself that dedicated time to either start tackling the problems, or talk to other people who are tackling problems the way you want to,

can really jumpstart you doing this.

Alicia Burke:

just a follow-up question. Thinking from the organizational perspective, so not the individual who might be able to jump in, but from the organizations who maybe are still on the outside, what type of conversation should they be having to be able to jump in a little more easily?

Max:

Yeah, I mean, I think one of the things we made pretty clear to the folks in the fast track is that we want them to be at the table when they're discussing strategy. I think actuaries, you know, sometimes, if you've seen the Dunning-Kruger curve, the more you start to know, the more you start realizing what you don't know. And so this can kind of work against the actuarial folks, where they start having doubts. And our goal was to really tell people, you should be at the table when discussing strategy. And so

making this kind of an internal priority, expressing that, you know, we might not have built production-grade predictive models before, but we're going to start experimenting with this, because that domain knowledge is key, right? It's not the crunching of numbers; it's the domain knowledge, and actually getting your hands dirty can really be part of the process. I do have a question for Josh, though, because in this vein,

one of the big things that people brought up in the discussion board was the lack of data in the format they needed; they found that's the biggest limitation. And this is, you know, because actuaries are not database managers; this is typically outside the core competency. So I know you've kind of been an advocate for actuaries owning more of what comes downstream. Do you have a recommendation on the solution for what comes upstream? Should they actually start owning some of that, or is that just part of the strategy conversation?

Josh:

So my initial thought there is, if you look at the companies who are having the most success, who are repeatedly having the lowest combined ratios, the lowest loss ratios, they're the data-driven organizations, the people who are making the most out of their data. So I think the first thing is, data is important. As far as the strategy there, to your point, that's where actuaries need to be at the table. And I think each organization is going to do it differently. I don't know whether you hire a dedicated data engineer or work with a consulting group that has a lot of experience getting data into the right format. But data is a very big currency in insurance, and those conversations definitely need to be had about how you can get the data you need to realize your competitive advantage.

Max:

Yeah, it's tricky. It really felt like, out of all the objections, even when people bought into some of the stuff we're talking about with modeling and could see the use cases, this was the biggest limiting factor. And we kind of make that case throughout the presentation: with machine learning, one of the differentiations of this form of AI from others is that the input is data, and it's kind of a necessary component.

So definitely a tricky challenge. I do see the value of data engineers who are maybe reporting to the actuarial folks, but that would be a little bit of a different setup than I think most places have seen before.

I do have one question to ask you that's a bit more exotic. And we kind of use this word already to describe some of those use cases that you had. But most of those use cases that you provided were things that were very within reach of actuaries, right? They can easily connect that to their business problems. I wonder if you envision things that are maybe five, 10 plus years down the road that could be use cases. And in particular, the one that I was mentioning to you before was about personalization.

You know, the internet, like Etsy, everything is just about personalization, individualization. And insurance still tends to feel like, maybe we've made it so you can pick your own deductible or something, or get an accident forgiveness discount. Do you envision personalization being the next frontier, or is it maybe a fool's errand to go down that road?

Josh:

Yeah, I definitely want to hear your thoughts as well, because I know you spend a lot of time thinking about this, so maybe piggyback off what I say. But I think for sure. When we think about some of the insurtech startups, when we think about maybe just e-commerce in general, there's a lot of personalization. And thinking specifically about personal lines, there's personalization around limits and deductibles, personalization around what optional coverages you get.

And if you think about the standard model, you have the agent that works with the policyholder. And the agent is really doing that hyper personalization for them. So they're helping them figure out, okay, what are your coverages that you need? What are the limits you need? What are the deductibles you need?

But I'm seeing a generational shift, and very few people from my generation, or maybe younger, are going to agents for their insurance. And that might change as we get higher net worths and need a little bit more guidance on how to protect the wealth we've accumulated. But I do think there's a generational shift where fewer and fewer people are going to be working with humans who have been personalizing these coverages. And so what that gets into is, I think...

We can oftentimes use models, machine learning, a lot of this stuff to start to do that personalization from the carrier side. Based off of how they're answering questions, based off of data that's coming in, can we help them choose deductibles and limits that are better for them? Can we help them choose the coverages that best meet their needs?

Max:

So CAS Monograph 5 kind of talks about the challenge of modeling things with customer selection. Just thinking out loud, but do you think it kind of breaks some of the math, or has some implications, if we start making selection a bigger part of the risk characterization?

Josh:

I mean, thinking through it, there's going to be that selection bias. There are going to be all those things that you need to think about, but people are thinking about these things. Like we talked about large language models earlier in our conversation: we were saying that's really cool, there's not a ton of immediate applications today for actuarial science, but this generative AI, in a couple of years, is going to be revolutionizing a large part of the insurance process. I think it's similar with this personalization: it's coming, and the people who figure out how to do it are going to be the early adopters who have the success. So yes, there's definitely a lot we need to consider about how this impacts the assumptions we've been making, but we can think through that, and the people who do think through it are going to find success once we hit that point.

Max:

Yeah, it seems like a big bet for sure. You can envision someone putting a whole lot of work into it, and either it works out really well and they have a head start, or it's a lot of sunk cost in time. I am curious about it, because I've been hearing about this idea of personalization in insurance for a while, and it hasn't quite come to pass, but it does feel like the elements are starting to form where it can finally be done.

Alicia, any thoughts on personalization? I'm curious too, because you kind of get to work with members all across the board, different lines of business, and it feels like a very personal lines focused question.

Alicia Burke:

Mm-hmm.

Is there ever a risk of over-personalization? Is that going to be overwhelming to the market, or do you feel it would be so personalized that it would be seamless, especially with that agent

being removed from the scenario for some of the younger generation.

Max:

I mean, I'll tell you, I go to Subway and I get overwhelmed by all the combinations. And we're at the point where, I'm not joking, my wife is literally ordering my food for me because I get so overwhelmed. Insurance is such a complex product; that's kind of the argument for agents, that they tend to have that knowledge where they can help you pick what's right for you based on your needs and your tolerances. So I definitely think that's part of it, you know, when you're talking about the behavioral psychology

of buying insurance, that's something that probably has to be factored in. Josh, is your wife ordering your food for you?

Josh:

But I often like her dishes more than my dishes, so I should let her order for me, because she knows my taste better than I do. But definitely, I'm thinking about things like decision fatigue. Sometimes I don't think more choices are necessarily better. I think sometimes it could actually start diminishing

Alicia Burke:

She's a food agent.

Josh:

maybe fulfillment and happiness, and there's a whole other podcast where we could talk about this stuff. But when we get to the personalization, human psychology is a big piece of it. And when we think about insurance in general, really the purpose it serves in the market is that we're helping secure people's lives. Getting in an accident can be devastating, and hopefully insurance is that product that helps those devastating events be less devastating. I think

that basic mission of insurance really needs to be taken into account when we're thinking about this personalization: how can we really help people have the protection and the financial security that they need? And again, going back to the product piece, I think that's something the product side is actually better at than actuaries: thinking about the actual individuals and that psychology piece of it.

Max:

You know, to that point, in your use cases from unlocking that modeling mindset, there's that one with quote friction, where it's the number of questions we ask people before we present them with a price. And, you know, if you ask someone 20 questions, they probably don't complete the quote. And if you ask them three, you might leave some information on the table that's relevant to segmenting out that risk. That feels very related, but I also wonder...

You know, is this outside the core competencies of actuaries, or is there some net value in combining that behavioral component with the pricing component?

Josh:

Yeah, I'll give the actuary answer: it probably depends. If you think about a really large insurer, that's probably going to be outside the scope of an actuary, because there's going to be a whole data science team and whole marketing teams really focused on that. I do think the actuary should be at the table thinking about, how does that impact our rate adequacy? How does that impact our top and bottom lines? But as you move to those smaller insurers, I definitely think that's a place where the actuaries

play a bigger and bigger role. And to your example there, maybe some people do need 20 questions, but from a user experience standpoint, you might find that based on their answers to the first five, some people only need to answer five questions, and that makes for a better user experience.

Max:

Josh, speaking of trends, I have to ask you this question, because you and I have both really been intrigued by this. We joined Akur8, which is originally a European company, and we started getting international exposure to how rate making is done in Japan, France,

Alicia Burke:

definitely.

Max:

all these places around the world. I think one of the things that you and I were both very surprised about is that they don't have this concept of the base rate indication. They use the predictive models to do their indications. And this felt very natural for you. You picked up on it quite quickly, and I've always struggled with it a bit. Any thoughts on this concept? Is this something that we could expect in the US eventually, or are we too entrenched in how we do it? Are there pros and cons?

Josh:

I definitely agree. It was really odd to me talking to our colleagues in Europe and across the world and hitting a really big communication gap around the base rate indication. For us, it seems so natural: okay, if you're going to raise rates, you're going to have your base rate indication, and you might do some segmentation on top of that. But for them, that was a really hard concept to grasp, because why are you doing it in two separate steps? Why do you have this entire process? And don't get me wrong,

I think they're doing a lot of the same things. They're on-leveling premiums, they're doing development, they're incorporating historical and prospective trend. But instead of having two separate processes, they have a single process with their models. Their models are able to say, okay, this is how your segmentation should change, and this is what your average rate should be. They're really doing it all in one.

One of the reasons we don't do that here is that it's harder for a regulator, especially 20 years ago, to say, okay, what's happening here? What are you charging and how much? I think we have some of that inertia or momentum that comes from how we've always done things in terms of regulation. Whereas it's interesting that in Europe and some of these markets, where they're less regulated, as soon as they came up with this better methodology, they were able to adopt it. And that's gonna save...

It's going to save headcount. It's going to allow people to dive more into the results that actually drive profit. So getting to your actual question: yeah, absolutely, I think this is going to come to the US. I think companies are already doing it. People have their internal best models, saying, we have our external models that we're going to be using for filings and all that, but we're going to be running our business off of these internal models. So I think it's already here.

Max:

I was just going to say, it begs the question: what were they doing before they had the machine learning methods? It feels like something that's tied to the machine learning methods, but maybe they were just doing it with very similar methods to what we have. But I am curious, because while the way the base rate indication is done feels very natural to me, the machine learning methods start to make me realize that we might have been overfitting before, or at least entirely fitting to the data, because

this whole predictive concept of cross-validation or holdout data: we were doing a predictive task before, but we were essentially using all the data. So beyond the benefit of being able to do it in one process, it also seems like you might end up with a more robust process.
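
The holdout idea Max describes can be sketched in a few lines of Python. Everything here is illustrative: the segments, the frequencies, and the "model" (per-segment observed claim rates) are made up, not a real rating plan. The point is only the mechanic of fitting on one slice of the data and judging the model on a slice it never saw.

```python
import random
import statistics

random.seed(42)

# Toy portfolio: policies with a true claim frequency that varies by segment.
segments = ["A", "B", "C"]
true_freq = {"A": 0.05, "B": 0.10, "C": 0.20}
policies = [(s, 1 if random.random() < true_freq[s] else 0)
            for s in segments for _ in range(300)]

random.shuffle(policies)
split = int(0.7 * len(policies))
train, holdout = policies[:split], policies[split:]

def fit_segment_means(data):
    """'Fit' a frequency model: observed claim rate per segment."""
    model = {}
    for seg in segments:
        claims = [y for s, y in data if s == seg]
        model[seg] = sum(claims) / len(claims) if claims else 0.0
    return model

def mse(model, data):
    """Mean squared error of the model's predictions on a data set."""
    return statistics.fmean((y - model[s]) ** 2 for s, y in data)

model_all = fit_segment_means(policies)   # fit on everything (the old habit)
model_train = fit_segment_means(train)    # fit on the training slice only

print("all-data fit, scored on all data:", round(mse(model_all, policies), 4))
print("train fit, scored on holdout:    ", round(mse(model_train, holdout), 4))
```

The second number is the honest one: it measures the model on experience it never saw, which is what the indication will actually face.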

Josh:

Yeah, I think one of the things that sets us apart as actuaries is that we maintain a large number of assumptions. No matter what, we're going to have a prospective trend; no matter what, we're going to do development. And being able to put it all in one place means, and I'm thinking towards the future here, maybe people are already doing this, you can start running simulations on your assumptions.

Okay, I have my selected prospective trend; let me run through a distribution of future interest rate scenarios. All these different things we can run simulations on. I think that's really where we're going to be able to advance the science. As we move more and more into machine learning, we get a lot of these benefits. So much of what we focus on as actuaries is the point estimate. Most of the time, the base rate indication is that point estimate.

But as we advance the science there, we're going to be able to get ranges around this. We're going to be able to do simulations when senior management asks, what if the market starts to harden, or what if it starts to soften? We're going to be able to put that into our indications and into our segmentation. And I hope we talk about this later, but a lot of our colleagues in Europe are also doing demand models as well, and when you tie your loss models and your demand models together, you get a really cool picture

that they can use for financial forecasting, to say: what is the business actually going to look like a year from now?
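
The simulation idea Josh describes, swapping a single selected trend for a distribution of scenarios so the indication comes with a range rather than just a point estimate, might look something like the following sketch. All of the numbers (loss ratio, permissible loss ratio, trend distribution) are made-up placeholders, and the indication formula is deliberately simplified.

```python
import random

random.seed(7)

# Illustrative assumptions, not a real indication.
experience_loss_ratio = 0.68   # trended-and-developed losses / on-level premium
permissible_loss_ratio = 0.70  # 1 - expense ratio - profit load

def indicated_change(annual_trend, years=2.0):
    """Simple loss-ratio indication with a prospective trend applied."""
    trended = experience_loss_ratio * (1 + annual_trend) ** years
    return trended / permissible_loss_ratio - 1

# Instead of one point estimate for trend, draw it from a distribution.
draws = sorted(indicated_change(random.gauss(0.04, 0.015))
               for _ in range(10_000))

point = indicated_change(0.04)
low, high = draws[len(draws) // 20], draws[-(len(draws) // 20)]  # ~5th / 95th pct

print(f"point estimate: {point:+.1%}")
print(f"90% range:      {low:+.1%} to {high:+.1%}")
```

The same loop could draw any of the other assumptions Josh mentions (development, interest rates) to see which one actually drives the width of the range.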

Max:

Yeah, I do want to go into that a bit, because I think what's interesting about some of the international exposure is that we've seen how they do price sensitivity, demand modeling, and financial forecasting. But we've also had the pleasure of working with people who aren't actuaries, like product managers, and they view the problem so differently: they're looking at the competitive landscape and the distance to market on certain price segments.

Do you have a preference on which you think is better: the mathematical, machine learning approach based on prior renewal data and quote data, or the prospective view of what's going on in terms of your placement in the market? Any thoughts on this?

Josh:

You have to look at all of it, in the sense that your math can say something, but if your math tells you you need a 20% increase while you're already converting five times less in that segment, what's that telling you about your placement in the market? Something's missing there. So I think you really need to look at the whole picture. That's where the product team brings a lot of value: oftentimes, as actuaries, we're really focused just on the numbers and the math.

But they're really hyper-focused on execution, tracking the P&L, seeing, okay, how are we actually performing? And I really think to run a business you need both perspectives. You need that data-driven piece, but data doesn't run the business by itself; you need to have your judgment on top of it. You need to ask, okay, is my data matching what I'm seeing in the market?

Max:

Yeah, Alicia, I think if we were to do subsequent boot camps that weren't focused on artificial intelligence, things like price sensitivity, competitive positioning, and the behavioral element of why customers buy would probably be huge. That could be very valuable for the community.

Alicia Burke:

Yes.

Yeah.

Okay.

Yeah.

Max:

I don't want to push you with too many questions, but it's kind of nice to have you cornered here where I can just tap into that mind of yours. There is one other thing I want to touch on before we wrap up, and it's the prevalence of lines of business that are jumping into predictive modeling that we historically haven't seen. This used to be mostly personal auto, and then maybe some personal homeowners. More recently, we're seeing tons of lines of business adopt these methodologies, and I think they made up a pretty sizable portion

of the cohort in the AI Fast Track. The one that popped out to me was workers comp, and you actually have some experience in that line of business. I'm just curious if you have any thoughts on the utility there, and those use cases you mentioned, like modeling premium leakage for audits, because the audits are so expensive. It seems so useful. Is this something you think will continue, or is it too early in the game?

Josh:

No, I go back to what I said earlier: look at who's winning in the market. Look at who's having the best combined ratios, the best loss ratios over time. And it's going to be those insurers who are making the most out of their data. So to answer your question: absolutely, I think it's happening today. Resources are finite. So

audits are a big thing. How can you prioritize which policies should be audited in person and which ones you can just send a letter to? When it comes to pricing as well, I think the NCCI does a really great job with their bureau rates, but I also think there's a lot of segmentation you can add on top of that, thinking about things such as territory, prior frequency, and average wage. Some of these are somewhat considered in the NCCI rating algorithm,

but there's a lot of additional segmentation that you can get by looking deeper into the data, which will give you a little bit better prices on your policies or a lot better prices on your policies.
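
As a toy illustration of the audit-prioritization idea, not a real leakage model, one could rank policies by a crude expected-leakage score and spend the in-person audit budget on the top of the list. Every field name, weight, and number below is hypothetical.

```python
# Hypothetical workers comp policies with a few audit-relevant attributes.
policies = [
    {"id": 1, "reported_payroll": 500_000, "class_claim_rate": 0.02, "years_since_audit": 4},
    {"id": 2, "reported_payroll": 80_000,  "class_claim_rate": 0.01, "years_since_audit": 1},
    {"id": 3, "reported_payroll": 900_000, "class_claim_rate": 0.05, "years_since_audit": 6},
    {"id": 4, "reported_payroll": 150_000, "class_claim_rate": 0.03, "years_since_audit": 2},
]

def leakage_score(p):
    """Bigger payroll, riskier class, staler audit -> higher priority."""
    return (p["reported_payroll"] * p["class_claim_rate"]
            * (1 + 0.25 * p["years_since_audit"]))

ranked = sorted(policies, key=leakage_score, reverse=True)
in_person = [p["id"] for p in ranked[:2]]  # in-person audit budget: top 2
letters = [p["id"] for p in ranked[2:]]    # the rest get a letter

print("in-person audits:", in_person)
print("letter audits:   ", letters)
```

In practice the score would come from a fitted model on past audit outcomes rather than hand-set weights, but the triage step at the end looks the same.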

Max:

Yeah, really interesting. And now I've got a few workers comp clients, so I often have to lean on you, because I haven't worked that line before. But it's such an interesting line of business with seemingly a ton of opportunity.

Alicia Burke:

And I had a question, just on the theme of building bridges with other industries. If someone is a data professional outside of the insurance industry, but they want to jump in and bring their skills to this market, do you have any general advice for those career changers?

Max:

Yeah, my thought is: insurance is hard. We mentioned this in the Fast Track. I had just taken a graduate course on deep learning and did a project where I persuaded my team members to work on a property and casualty data set. When I told them that GLMs would be the benchmark, they thought that was funny. But then the deep learning models got crushed, and their reaction was: insurance is hard.

This is harder than anything they'd modeled before. So my takeaway is that there is a lot of opportunity to grow into this industry. It's very domain knowledge heavy, and building that domain knowledge is maybe the best way to overcome how hard it is. So yeah, participating with the Casualty Actuarial Society, it feels like a plug, but it's actually very relevant to getting in, because that domain knowledge is critical for success here.
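
One small illustration of why GLMs make such a stubborn baseline: for a Poisson frequency model with a log link and a single binary rating factor, the maximum-likelihood fit reduces to the observed claim rate in each group, which is cheap, stable, and interpretable. The data below are simulated with made-up parameters; real benchmarks use many factors and exposure offsets.

```python
import math
import random
import statistics

random.seed(1)

def rpois(lam):
    """Knuth's Poisson sampler; fine for the small frequencies used here."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

# Simulate a toy book: binary factor x, base frequency 0.08, relativity e^0.7.
base, log_rel = 0.08, 0.7
data = [(x, rpois(base * math.exp(log_rel * x)))
        for x in (random.choice((0, 1)) for _ in range(20_000))]

# With one binary factor, the Poisson GLM MLE is just the group mean frequencies.
mean0 = statistics.fmean(y for x, y in data if x == 0)
mean1 = statistics.fmean(y for x, y in data if x == 1)
beta0 = math.log(mean0)          # intercept: log base frequency
beta1 = math.log(mean1 / mean0)  # log relativity for the factor

print(f"fitted base frequency: {math.exp(beta0):.3f} (true {base})")
print(f"fitted relativity:     {math.exp(beta1):.2f} (true {math.exp(log_rel):.2f})")
```

A deep net has to rediscover this structure from sparse, noisy counts, which is part of why it struggles to beat the GLM on tabular insurance data.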

Josh:

Yeah, and a couple thoughts there. One is, the advice I would give is: don't leave your prior training at the door when you come into the field. I love talking to people who have training in economics, because they think about problems in such a different way. They're thinking about causal inference, they're thinking about the behavioral economics piece of it. We have a lot of people from different mathematical backgrounds who just

approach problems differently. And to Max's point, there's a lot of background that we need to solve the insurance problem. But if we're thinking about things from a causal inference point of view, from a behavioral economics point of view, that can allow us to come to a better answer. So my advice is: yes, bring your prior training and help us come up with better answers.

Max:

Josh is very inspiring. And Josh, one of the ways you inspire me is that you're a prolific reader. Every time I talk to you, you have like 20 book recommendations. I'm curious what you're reading right now, or anything you've read recently that you'd recommend to the audience.

Josh:

So I'm actually going down two tracks right now. One is going down memory lane, rereading some of the books I've really liked over the past five years. I went down a biography rabbit hole. It started with Alexander Hamilton when the play was really big. I definitely jumped on that bandwagon. I absolutely loved the book, and I'm rereading it now, mostly because I like hearing their stories.

You get to hear the hero's journey through a lot of these stories, and you start to see how these people approached problems in their lives. And I'm also kind of a sucker for self-help books, so I've been reading Tiny Habits by BJ Fogg. Love that book. He takes just about any behavior and maps it onto a single framework, and it's kind of fun applying that framework to the rest of my life. So the Alexander Hamilton biography, and then

Tiny Habits by BJ Fogg. Those are on my list right now.

Max:

Can we expect a self-help monograph through the CAS from you?

Josh:

Yeah, if we did, it would probably be Surviving Exam Season: A Manager's and a Student's Point of View.

Max:

Well, I think it was a great interview, Josh. I mean, I hope we can have you back, but you are going to be participating in those subsequent runs of the AI Fast Track, and I think we've actually given you more content to share during those. So I'm excited for people to get more Josh time. Do you have any parting thoughts?

Josh:

No, thanks for having me. Thanks, Alicia, for kicking this off. I think it's really fun. I love hearing from other people, and one of my favorite parts of my current job is that I get to go to every CAS conference, and one of my favorite parts there is talking to people: hey, what are you doing? How are you approaching this problem? So thanks for doing this. I really appreciate it.

Alicia Burke:

Thank you for being here. I think we have a few final announcements to wrap things up, all related to the AI Fast Track. We do have other cohorts coming up in February as well as May, so be on the lookout for those. Also, live at the RPM event on March 9th at 5 p.m., there's an AI Fast Track happy hour where we're going to celebrate that inaugural cohort.

That's sponsored by Akur8 and RSM. And we also have a new livestream session, so if you're not going to be there in person, there is an AI Fast Track update. Max, did you want to mention anything else about that?

Max:

Yeah, I hope people come to RPM, because these conferences are great, and RPM in particular is my favorite. If you're very interested in the AI stuff, predictive modeling, machine learning, this is the conference to go to. I know getting a budget can be challenging, but this is the one to at least try for. And for those that can't make it, or those that do attend, we'll have a CAS AI Fast Track recap with the whole gang.

So Sergey, who will be joining us for a subsequent episode, Josh, Alicia, and myself. That's gonna be at 10 a.m., and we really hope people join us. If you're curious about what goes on in the Fast Track, you'll get a glimpse of that, but we're also gonna be talking about some lessons that we learned, things that go beyond what we discussed in this podcast. So it'd be great if people could come, join us, and ask questions. There's lots to learn from that.

Alicia Burke:

Excellent. And also as part of RPM, there is a full-day workshop hosted by iCAS, the Data Science and Analytics Forum. So a lot of great content there as well, including some Akur8 presenters. Hopefully we will see you live, or at least livestreamed, for parts of that.

Max:

And as Josh pointed out, we love critical feedback. We want to make fun, engaging, but educational content, so anything you can share with us, we'd greatly appreciate it. You can connect with us on LinkedIn, or you can even email us at info at the CAS Institute dot org.

Alicia Burke:

And with that, I think this is a wrap of our very first episode of our podcast and we hope you will be joining us again soon.
