What Are the Ethical Implications of AI in the Property Industry?
Episode 92 • 29th May 2024 • Core Conversations • CoreLogic


Shownotes

A full 67% of IT senior leaders are prioritizing generative AI for their businesses, according to Salesforce data. This statistic underscores the growing importance of AI in today's business landscape and highlights the urgency of understanding its implications.

Although AI is not new technology, over the past couple of years, it has reshaped industries. But with its rise comes a myriad of questions and concerns, ranging from technical complexities to ethical implications.

From forecasting floods to streamlining insurance claims, AI is revolutionizing how we interact with property data and make decisions. But as we navigate this technological landscape, we must also address the ethical dimensions of AI, ensuring fairness, transparency, and accountability.

In this episode, host Maiclaire Bolton Smith and Amy Gromowski, CoreLogic vice president, head of Data Science, delve into these questions surrounding AI, exploring its potential, challenges, and ethical considerations.

In This Episode:

2:35 – Explain AI like I’m a five-year-old.

5:31 – AI is not new technology. How long has CoreLogic been using it?

7:05 – Why is data security and integrity so crucial for AI models?

10:03 – Erika Stanley goes over the numbers in the housing market with The Sip.

11:12 – What can we do to limit implicit bias and explicit bias in AI models?

15:52 – What does it mean to responsibly use AI?

18:48 – Erika Stanley talks about what is happening in the world of natural disasters.

20:19 – What will widespread adoption of AI look like for the property industry? Will this ever transpire?

Links:

Up Next: Some Insurers Banned AI — Will Insurtech Bring It Back?

Find full episodes with all our guests in our podcast archive here: https://clgx.co/3HFslXD4

Copyright 2024 CoreLogic

Transcripts

Amy Gromowski:

There's all sorts of efficiency plays-

Maiclaire Bolton Smith:

Okay. Yeah.

AG:

... with generative AI that individuals can choose to adopt to help them do their jobs better. But certainly, I don't see a world, Maiclaire, in which AI replaces the human.

MBS:

Okay.

AG:

I think AI will be an assistance tool, but humans will always be thinking about and understanding the dynamics.

MBS:

Welcome back to Core Conversations: A CoreLogic Podcast, where we tour the property market to investigate how economics, climate change, governmental policy, and technology affect everyday life. I am your host Maiclaire Bolton Smith, and I'm just as curious as you are about everything that happens in our industry.

Over the last couple of years, AI has upended the property industry. This technology provides a viable way to accelerate human-centric processes and transform the way we uncover relationships and data. From pre-filling forms to forecasting floods, to processing insurance claims, there is much to get excited about, both with AI and generative AI. In fact, research from Salesforce found that 67% of IT senior leaders are prioritizing generative AI for their business. And that's only one segment of the U.S. economy. However you look at it, the changes that AI stand to bring are huge. However, along with the advances that AI brings, there are also concerns surrounding its use. These concerns range from the technical to the ethical. In fact, some companies have even stopped using the technology for fear of customer backlash or legal consequences.

So to talk about the concerns surrounding the technology, we have Amy Gromowski, CoreLogic's VP, head of Data Science, back on the podcast with us today. Amy, welcome back to Core Conversations.

AG:

Hi, Maiclaire, so happy to be here. And of course, always excited to be talking about anything AI.

MBS:

This is so exciting, and it's been, my goodness, like three years I think since we've had you on the podcast. It's been a while, and this is probably one of the hottest topics around. So really excited to talk about this with you today.

Erika Stanley:

Before we get too far into this episode, I wanted to remind our listeners that we want to help you keep pace with the property market. To make it easy, we curate the latest insight and analysis for you on our social media where you can find us using the handle @CoreLogic on Facebook and LinkedIn, or @CoreLogicInc on X, formerly known as Twitter, and Instagram. But now, let's get back to Maiclaire and Amy.

MBS:

AI, it's really been around for decades, and I think a question people have is, what is AI? I think in today's day and age, everybody thinks gen AI is AI, but there are so many other things that really are artificial intelligence that have been around for a really long time.

AG:

Yes, yes, very true. Maiclaire, you're speaking to my heart. So I recently was addressing this question, if we do this, is that AI? If we do that, is that AI? We're really all kind of hung up on like, are we doing AI? I think a lot of companies are. And with generative AI, it's just that next generation of technology.

MBS:

Sure.

AG:

But we at CoreLogic, at least, have been doing AI and what falls under the AI umbrella as technology has evolved. So I recently asked my daughter as I was trying to think about how do I articulate that, just what is AI? And I asked her, "What do you think of when you think of AI?" And she immediately went to generative AI. Now she's 16, so she's grown up with AI as part of her life, she's attached to her phone, and she immediately started talking about how it can help her do her homework and start a draft for her literacy homework.

MBS:

Life as a teenager is way different than it was for us. I can't even imagine.

AG:

I thought it was interesting that that was her immediate response to AI. But I said, "What about when you look at your phone and it recognizes your face and it automatically unlocks your phone?" She was like, "Oh yeah, that's AI." I was like, "What about a self-driving car, a car that no longer needs a driver?" And she's like, "Oh yeah, that's AI too." So when I think about the self-

MBS:

It's things we take for granted almost I think, because-

AG:

Oh yeah. So she's just grown up with this idea, right?

MBS:

Yeah, yeah.

AG:

Yeah, for sure. So when I told her, I was like, "The way I'm breaking down AI, you, please tell me, 16-year-old, does this make sense?" But I was like, "It's essentially data." So in the self-driving car, it is a camera collecting a lot of information all the time. What's the distance to the car next to me? I'm detecting lane here. There's a corner coming. I'm getting too close to the car in front of me, so I need to kind of slow down. So it's using cameras, the technology, it's using the data the cameras are collecting.

There's all sorts of machine learning algorithms behind that that are analyzing all that data that's coming in through the camera, analyzing how fast am I going? How close am I to the car in front of me, the car next to me? When do I need to turn? I need to slow down to make that turn. So it's just a bunch of decisions. So you really can break it down into the technology, the data, and the algorithm, and then it's taking action based on that. And we do that all day long. We've got the data, the technology, we're helping property professionals make decisions.

MBS:

So that then leads to the second part of the question I was thinking too in my head was, I know we often get asked throughout the industry, are we using AI at CoreLogic? And I mean, I think you addressed that. Yeah, we are. The other thing is, it's not new for us anymore. We have been doing things with artificial intelligence for a number of years. I think people just didn't call it out as being artificial intelligence for so long. And now because gen AI and AI have really taken over the world, people really want to know what are you doing with AI in your development.

AG:

Yeah. That's exactly right. And that is the point, right? When my 16-year-old just thinks about generative AI is the AI and not all the stuff that she's just taken for granted, that's exactly like CoreLogic, right? We've been doing machine learning on data for a long time and solving all sorts of problems doing that, providing insights, saying, "Hey, listen, customer or our clients, you don't have to analyze hundreds of thousands of data points and observations. We'll do that for you and distill it down into this one piece of information that you can use to make decisions." Right?

MBS:

Yeah.

AG:

But that's as far as the technology allowed us to go. And now with the advancement of generative AI, that's our next frontier for sure.

MBS:

I love that. It's the next frontier. It's like taken us this far, and now technology has evolved so much that it can help us get further as well.

AG:

That's right. That's right.

MBS:

Yeah.

AG:

And then you get to more complex like machine learning.

MBS:

Yeah. I guess the other part of this that I start thinking of is, it's all about data. And if we are querying and looking at all of this potentially highly sensitive information, sensitive data, especially we're looking at companies specifically within the property industry, but in any industry as well, there must be ways to handle information that's sensitive or that we have security issues with. And I guess compliance is a big thing too. So can you talk a little bit about that?

AG:

Yeah, for sure. So I can just speak to what we do at CoreLogic and what I would recommend for any organization who's in any part of AI. It's really a comprehensive approach. So you want to be thinking you need an AI policy really and a governance around AI that's looking at the data that it's using. Do you have the use rights for that data? You want to be looking at data science principles, how you design the analytics that you want to do. Is it going to be generalizable? Is it going to be something that I can create in my historical view or my vacuum of data? And when I go out into the real world with this AI, is it actually going to be meaningful? Is it going to deliver the purpose that I've set out for it to do? That takes some data science discipline. That's where the data scientist comes into the AI.

MBS:

Okay.

AG:

And then there's a third component which is about responsible AI, which is, just because I can build it, should I? Does it make sense for me to do that? Is there reputational risk involved in it? So when you start thinking about those things and what does it mean to have compliance around AI, it isn't just about privacy and security. That's very important. You want to be protecting, like for us at CoreLogic, we want to protect our data. We don't want it widely available. The data is the currency, right?

MBS:

Definitely.

AG:

And you also want to have some controls around responsible use of that. So the more data is generally available and for people to start to publish it or put it into their models without some sort of oversight, that can become troublesome for us. So it's the privacy and security. But just very generally, when you're in the world of AI, you have to be thinking about accountability, transparency, fairness and equity, diversity and inclusion, reliability and accuracy. That speaks to some of those data science principles that I was talking about. Is it reliable? Is it accurate? And how we protect intellectual property. Everybody wants to be thinking about those things.

MBS:

Definitely. Yeah.

ES:

[Transcript garbled: Erika Stanley goes over the numbers in the housing market with The Sip.]

MBS:

So there's a lot there that I do want us to kind of dive into a little bit. And I'm going to start, I want to get to the ethical part of this. I think it's a really important part of the conversation. But I think the accurate is another part of this, and I want to just touch on that first because I know there has been some talk about AI providing biased answers. I guess, what do you think about either sometimes it provides incorrect answers, sometimes they're biased answers. What do we think about that and what can be done about it?

AG:

Yeah. So your model is only as good as the data that you feed it, right?

MBS:

Right. Yeah.

AG:

So you want to have confidence in the data that's going into the model, knowing about the sources that it comes from, having a process around quality controls, and validating the data that's going in, even the feedback loop. So as you start producing output, having a user or set of eyes being able to say, "Hey, this is good. This is bad. This is right. This is wrong," that feedback loop, sometimes you want to put in place, you can put in place confidence scores around your data. I'm confident-

MBS:

Oh, interesting.

AG:

I have multiple sources that all agree with each other, or I know that the source that this came from has a level of reliability and trust in. So there's a lot of considerations around your data, and that's one key element around bias. There's some data science principles around having a representative sample. So the observations that you feed your model, you want to make sure it's representative of the world in which that model is going to live and inform, right?
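[Editor's note: the multiple-sources idea Amy describes can be sketched as a simple agreement-based confidence score. This is an illustrative sketch only, not CoreLogic's actual method; the field and values are hypothetical.]

```python
from collections import Counter

def confidence_score(source_values):
    """Score a data point by how strongly independent sources agree.

    source_values: values reported by different sources for the same
    field (e.g. a property's square footage).
    Returns (consensus_value, confidence in [0, 1]).
    """
    if not source_values:
        return None, 0.0
    counts = Counter(source_values)
    consensus, agreeing = counts.most_common(1)[0]
    # Confidence = fraction of sources agreeing on the consensus value.
    return consensus, agreeing / len(source_values)

# Three of four sources agree on 1,850 sq ft -> fairly high confidence.
value, conf = confidence_score([1850, 1850, 1850, 2100])
```

In practice a score like this might also weight each source by its historical reliability, as Amy notes next.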

MBS:

Okay. Yeah.

AG:

So I don't want to ... If I'm going to, as an example, have a model that predicts ... Away from generative AI for a minute, but let's just talk about looking at property values and I want to project property values, it wouldn't make sense for me to train a model only on the State of Texas if I'm going to then go apply it to Massachusetts. Right?

MBS:

Of course. Yeah. Right.

AG:

So ensuring that your data doesn't have inherent bias in it and that it's a representative sample is another way.
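[Editor's note: Amy's Texas-versus-Massachusetts point can be made concrete with a quick representativeness check, comparing where the training data comes from against where the model will be deployed. The state counts and the 1% threshold below are hypothetical, for illustration only.]

```python
def coverage_gaps(train_counts, deploy_states, min_share=0.01):
    """Flag deployment regions under-represented in training data.

    train_counts: dict mapping state -> number of training observations.
    deploy_states: states the model will score in production.
    Returns states whose share of the training data falls below min_share.
    """
    total = sum(train_counts.values())
    gaps = []
    for state in deploy_states:
        share = train_counts.get(state, 0) / total
        if share < min_share:  # model may not generalize here
            gaps.append(state)
    return gaps

# Trained almost entirely on Texas, but deployed in Massachusetts too.
train = {"TX": 98_000, "MA": 500}
flagged = coverage_gaps(train, ["TX", "MA"])
```

A check like this is a crude first pass; a fuller audit would compare distributions of property characteristics, not just observation counts.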

MBS:

Yeah, interesting.

AG:

As you train models, as you look at model output, the model's only as good as the data, but all models are wrong. There's a quote-

MBS:

Mm-hmm. All models are wrong, some of them are useful.

AG:

That still is true in the world of generative AI. So having the ability to evaluate the accuracy of your model, sometimes it does take a human to go and label and look at output and statistically then look and say, "Hey, on average, what kind of accuracy do I have? When I'm not accurate, what is the magnitude of the error?" Am I going to ... Potentially, let's say you wouldn't want a surgeon ordering a surgery on the right leg when it's the left that needs it, versus maybe I just put a band-aid on a cut that wasn't bleeding. I'm really bad at analogies. I'm trying to think of them off the top of my head, but it's that kind of like, what's the magnitude of what we're talking about, right?
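[Editor's note: the idea of tracking both average accuracy and the magnitude of the misses can be sketched with basic error metrics over human-labeled output. The prediction values below are hypothetical.]

```python
def error_profile(predicted, actual):
    """Summarize model error over a labeled evaluation set.

    Mean absolute error says how wrong the model is on average; the
    max error says how bad the consequences can be on a single miss.
    """
    errors = [abs(p - a) for p, a in zip(predicted, actual)]
    return {
        "mean_abs_error": sum(errors) / len(errors),
        "max_error": max(errors),
    }

# Hypothetical property-value predictions vs. human-reviewed values ($k).
profile = error_profile([410, 325, 560], [400, 330, 500])
```

Two models with the same average error can carry very different risk if one of them occasionally misses by a large margin, which is Amy's magnitude point.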

MBS:

Sure, yeah.

AG:

You have to consider, and you just really want to have an understanding of your model's performance and the kind of implications that it has when you're wrong.

MBS:

It's super interesting. It really is. I mean, you touched on a really good point, that it is garbage in and garbage out. It is still ultimately a model and you control what kind of answer you're going to get in that model depending on what kind of questions you're asking or what kind of data you're putting in. So I think that's a really important part of it.

AG:

And I would just add, we have a responsibility to understand that performance. We have a responsibility to understand-

MBS:

It's a really good point.

AG:

... the quality of data.

MBS:

It's a really good point.

AG:

Yeah.

MBS:

I love the idea of a confidence score on how reliable or how confident you think the data is because it does give that quality factor on how much you should be able to trust this. So that's important.

AG:

And the use case matters, right? Some use cases, they're not as impactful. Others, you're talking about getting people into their homes. Right?

MBS:

Sure. Yeah.

AG:

So you want to make sure that whatever model output that you're generating, and for us, it's our customer, our clients, and ultimately, the homeowner is that end consumer, that it's accurate and fair and reliable.

MBS:

Yeah. So that then leads to the next part of the ethical use of gen AI. I mean, there's a lot of concerns about it being used maliciously, or just unintentionally perhaps even. So can you talk a little bit about what does it mean to be or to have responsible use of generative AI?

AG:

It means a pretty strong governance program.

MBS:

Okay.

AG:

I touched on this a bit. I talked about fairness and accuracy and transparency. The way to ensure that is a strong governance program. You want oversight. So one, it comes down to understanding what your organization is doing, what are the use cases we're going after, and having an early compliance and legal conversation around those things. Some are no-brainers. Others might need a little bit more support from a legal perspective, right?

MBS:

Yeah, yeah.

AG:

So at CoreLogic, we have a pretty strong governance policy. It's published for our organization. These are the expectations around getting approval for new use cases, the technology that's approved, the expectation around the R&D, and the types of write-ups and feedback back to the governance committee that's required. So that's all covered in our policy. And then we have a committee that gets together and oversees and reviews all of the different use cases, advances in technology, just ensuring that we are evolving with and keeping up with the changes that are happening in regulatory environments and inside our own organization. And then we use outside legal counsel, so it's not only our internal legal counsel but also our outside legal counsel, and they're really looking to ensure that we're following all the regulatory requirements, like fair lending practices.

MBS:

Sure. Yeah.

AG:

One thing that's interesting is, we of course use no data that would be about any protected class, right?

MBS:

Mm-hmm.

AG:

But in AI, and especially as we get into these more sophisticated techniques as a technology advances, there can be a lot of unintended or unknown bias that can be introduced. Right?

MBS:

Sure. Yeah.

AG:

The model is not a human. It's going to recognize patterns and it's going to produce output based on those patterns. So ensuring that a human is looking at, do I have disparate impact in the outputs of my models, can I confidently say that this is following all fair lending practices? That is a really important step that we take, and that's part of a governance program.
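[Editor's note: the disparate-impact check Amy mentions is commonly operationalized with the "four-fifths rule": compare favorable-outcome rates across groups and investigate ratios below 0.8. This sketch assumes per-group outcome rates are available purely for auditing; the rates are hypothetical, and this is not a description of CoreLogic's process.]

```python
def disparate_impact_ratio(rate_group, rate_reference):
    """Ratio of a group's favorable-outcome rate to the reference group's.

    Under the common four-fifths rule of thumb, a ratio below 0.8 is a
    signal to investigate the model's outputs for disparate impact.
    """
    return rate_group / rate_reference

# Hypothetical favorable-outcome rates from an audit of model outputs.
ratio = disparate_impact_ratio(0.45, 0.60)
needs_review = ratio < 0.8
```

A ratio below the threshold does not prove unfairness on its own; it is the trigger for exactly the kind of human review Amy describes.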

ES:

Before we end this episode, let's take a break and talk about what's happening in the world of natural disasters. CoreLogic's Hazard HQ Command Central reports on natural catastrophes and extreme weather events across the world. A link to their coverage is in the show notes. April 3rd brought a magnitude 7.4 earthquake to the coast of Taiwan. CoreLogic estimated that the insurable losses from the earthquake will be between $0.5 billion and $1 billion. As Taiwan is the global leader in semiconductor chip manufacturing, major damage to plant facilities or shipping networks could have had a significant global impact. Fortunately, Taiwan had little to no damage or disruption to chip manufacturing plant operations.

[Transcript garbled: Erika Stanley continues with figures on a record-breaking year for insured losses.]

AG:

So I could go on and on, but it does come down to monitoring performance and having a strong model inventory and reviewing it annually. We attest annually that all of our models are in compliance.

MBS:

That's great to hear about us here at CoreLogic and what we're doing as leaders in the industry in many ways. But I think more broadly, there will always be trust issues. And I think having a strong governance policy and program in place definitely is something that's going to help. But do you think, across the board, are we ever going to get to a place where there's widespread adoption of gen AI? Or do you think there's just always going to be too much on, "You can't do this. You can't trust this. This isn't right"? What do you think is going to happen with the world in your crystal ball?

AG:

Well, I jumped into what I would say is our external use cases, right?

MBS:

Mm-hmm.

AG:

I jumped into generative AI as an offering, as a CoreLogic offering to our clients, and ultimately that changes the game for consumers, right?

MBS:

Okay. Yeah.

AG:

Getting the homeowner in their property faster. Generative AI, that could make the argument and tell you how we are going to help with that. But there's other use cases. So the user experience is kind of what I've been speaking to, how we can infuse AI into the property professionals' lives. But there's also things like customer experience, the chatbot, the ability to give 24/7 kind of customer service to answer general questions that someone might have around products or services. Generative AI, large language models, they're there. They've been there a long time. I mean, we've all been interacting with chatbots for some time.

MBS:

Yeah, for a long time.

AG:

There's also just generative AI to transform the way that we work. So if you think about assisted programming, helping to draft up marketing language, helping our legal to understand regulation and policy, like extracting from lines and lines and pages of regulation and code. So there's all sorts of efficiency plays with generative AI that individuals can choose to adopt to help them do their jobs better. But it's certainly, I don't see a world, Maiclaire, in which AI replaces the human.

MBS:

That is such a great place to end, Amy. I think AI, gen AI are here to stay. They're not going to replace humans. It's just going to continue to grow. And I think that two years ago, gen AI wasn't a thing that anyone was talking about, and now everyone is talking about it. So I can't wait to see where the world is going to go, where the industry is going to go. Amy, thank you so much for joining me today on Core Conversations: A CoreLogic Podcast.

AG:

Thank you for having me, Maiclaire. So much fun. I could talk about this for days.

MBS:

I love it. I love it. And thank you for listening. I hope you've enjoyed our latest episode. Please remember to leave us a review and let us know your thoughts, and subscribe wherever you get your podcasts to be notified when new episodes are released. Thanks to the team for helping bring this podcast to life: producer Jessi Devenyns; editor and sound engineer Romie Aromin; our facts guru, Erika Stanley; and social media duo, Sarah Buck and Makaila Brooks. Tune in next time for another core conversation.

ES:

You still there? Well, thanks for sticking around. Are you curious to know a little bit more about our guest today? Amy Gromowski is the head of Data Science at CoreLogic, leading teams of data scientists and machine learning scientists in developing artificial intelligence and machine learning solutions, including computer vision and generative AI for property related solutions in the real estate, mortgage, and insurance markets. Over the course of her career, Amy has held various AI related roles, including data scientist, client executive, analytics product manager, and most recently as a leader of AI machine learning business development. With 25 years of experience, Amy enjoys working with C-suite leaders on AI and machine learning strategy, technology leaders, product leaders, and clients to innovate the property ecosystem.
