Reconciling AI With Fair Lending and Consumer Protection
Episode 7 • 21st November 2024 • Fintech Focus • Skadden, Arps, Slate, Meagher & Flom LLP

Shownotes

When it comes to AI in the consumer financial services context, U.S. laws lag somewhat behind the technology. But as Darren Welch explains in this edition of “Fintech Focus,” regulators are indeed targeting AI. 

Skadden consumer financial services partner Darren Welch joins host Joseph Kamyar in this episode of “Fintech Focus” for a discussion on AI in the fintech sector, specifically fair lending and consumer protection. Darren breaks down how regulators enforce transparency in lending decisions and how lenders can mitigate risk. He notes: “We’ve worked with a lot of clients on helping them set up their fair lending testing protocols and responding to regulatory inquiries about fair lending testing.”

💡 Meet Your Host 💡

Name: Joseph Kamyar

Title: European Counsel, Corporate at Skadden

Specialty: “Fintech Focus” host and European counsel Joseph Kamyar advises on a wide variety of corporate transactions, including cross-border private mergers and acquisitions, fundraisings, joint ventures, corporate reorganizations and general corporate matters, with a particular focus on the financial services, technology and media sectors.

Connect: LinkedIn | Email

💡 Featured Guest 💡

Name: Darren Welch

What he does: A consumer financial services partner in Washington, D.C., Darren Welch advises a broad range of companies and individuals in regulatory investigations, enforcement proceedings and examinations, as well as in civil litigation on all types of consumer financial services issues.

Organization: Skadden

Words of wisdom: “In September of 2023, the bureau issued a circular focused specifically on credit denials by lenders using AI. And that circular reminds creditors that they must provide accurate and specific reasons to consumers indicating why their loan applications were denied, including in circumstances where the creditor uses one of these AI models.”

Connect: LinkedIn

Connect with Skadden

☑️ Follow us on X and LinkedIn.

☑️ Subscribe to Fintech Focus on Apple Podcasts, Spotify, or your favorite podcast app.

Fintech Focus is a podcast by Skadden, Arps, Slate, Meagher & Flom LLP, and Affiliates. This podcast is provided for educational and informational purposes only and is not intended and should not be construed as legal advice. This podcast is considered advertising under applicable state laws.

Transcripts

Voiceover:

Welcome to Fintech Focus, Skadden's podcast for fintech industry professionals. The global regulatory and legal updates you need, start now.

Joe Kamyar:

Hello and welcome back to another episode of Fintech Focus with me, Joe Kamyar. And today we have Darren Welch, a consumer financial services partner at Skadden based in DC.

Darren Welch:

Hi, Joe. Great to be on the podcast.

Joe Kamyar:

So we've previously discussed AI safety on this podcast, but today we're going to drill down into one particular part of that which is specifically relevant to the fintech sector, and that's fair lending and consumer protection. And obviously there's a whole host of use cases for AI in the lending space, whether that's use in algorithmic models, fraud prevention or even pricing loans. So, Darren, if you don't mind, I think we're going to try and pick your brain today on how regulators in the US are reacting to the increased use of AI in this context. So perhaps you could start by telling us: is there a federal law or set of regulations addressing the use of AI in the consumer financial services space in the US?

Darren Welch:

Not yet. There has been federal legislation proposed that would have imposed new requirements directed at anti-discrimination in AI, sometimes called algorithmic fairness laws. But those efforts didn't get very far, nor have we seen comprehensive regulations from the financial regulators governing AI. It's frequently the case that the law is several steps behind technology, but I'm a bit surprised that we haven't seen more AI-specific regulations from the regulators yet.

Instead, the regulators are applying existing credit protection laws, many of which were passed back in the '60s or '70s, to AI, and that doesn't always make for the best fit.

Joe Kamyar:

So on that point, as I understand it, the Consumer Financial Protection Bureau in the US, or the CFPB as many refer to it, has indicated that it is using adverse action notification requirements under the existing Equal Credit Opportunity Act essentially as a tool to increase lender transparency about AI. So perhaps you could start by walking us through those requirements and how they're being used to address concerns with AI.

Darren Welch:

Yeah, Joe, that's right. One of the key consumer financial protection issues relating to AI has been transparency as to how these models work and how AI affects consumers. This concept has been referred to broadly as explainability. The CFPB is using the adverse action notification requirements under the Equal Credit Opportunity Act, or ECOA as I call it, to enforce that explainability concept. And for some quick background, Regulation B, which implements ECOA, requires creditors to provide a written notification when they take adverse action against a consumer.

And that includes declining an application for credit, making an adverse change to the terms or conditions of an account, or denying a request to increase a credit limit. And that notification provided to consumers must include, and I quote, "a statement of specific reasons for the action taken." The bureau has issued model adverse action forms, which include some example adverse action reasons such as "unable to verify income" or "delinquent past or present credit obligations with others."

They're pretty short and to the point, but ultimately the creditor has to disclose the principal reasons for denying an application or taking some other kind of adverse action. And there's staff commentary from the CFPB that provides some guidance on how creditors can select those principal reasons when that adverse action is based on credit scoring.

But that commentary hasn't been updated in more than 20 years, so it doesn't really take into account the advances in and increased use of AI. So that's another good example of how the law usually lags advances in technology. Now, that said, in September of 2023, the bureau issued a circular focused specifically on credit denials by lenders using AI. And that circular reminds creditors that they must provide accurate and specific reasons to consumers indicating why their loan applications were denied, including in circumstances where the creditor uses one of these AI models.

And this follows on the bureau's prior guidance, where they had stated that creditors have to comply with the adverse action requirements even when complex algorithms, and I quote, "make it difficult, if not impossible, to accurately identify the specific reasons for denying credit or taking other adverse action." So the bottom line is that even when using these complex models, lenders have to comply with the law and explain to consumers the underlying substantive reasons why they take action against the consumers. And the CFPB and other agencies are very focused on this right now. And it's just another good example of how the agencies are using their existing authorities to regulate AI rather than issuing new regulations that address AI specifically.
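
To make that concrete, here is a minimal illustrative sketch, not taken from the episode, of one way a lender might derive principal reasons from a simple additive scorecard: rank each factor by how far the applicant fell below the maximum attainable points, then disclose the largest shortfalls. All factor names, point values and reason text below are hypothetical.

```python
# Hypothetical sketch: a "points below maximum" approach to selecting
# principal adverse action reasons from a simple additive scorecard.
# Factor names, point values and reason text are invented for
# illustration; real scorecards and Reg B analyses are more involved.

# Maximum points attainable for each scorecard factor
MAX_POINTS = {
    "payment_history": 40,
    "credit_utilization": 30,
    "length_of_history": 20,
    "recent_inquiries": 10,
}

# Disclosure text for each factor (what would appear on the notice)
REASON_TEXT = {
    "payment_history": "Delinquent past or present credit obligations",
    "credit_utilization": "Proportion of balances to credit limits is too high",
    "length_of_history": "Length of credit history",
    "recent_inquiries": "Number of recent credit inquiries",
}

def principal_reasons(applicant_points: dict[str, int], top_n: int = 4) -> list[str]:
    """Rank factors by how far the applicant fell below the maximum
    attainable points and return disclosure text for the top shortfalls."""
    shortfalls = {
        factor: MAX_POINTS[factor] - pts
        for factor, pts in applicant_points.items()
    }
    ranked = sorted(shortfalls, key=shortfalls.get, reverse=True)
    return [REASON_TEXT[f] for f in ranked[:top_n] if shortfalls[f] > 0]

# Example: a declined applicant's points on each factor
declined = {"payment_history": 15, "credit_utilization": 28,
            "length_of_history": 10, "recent_inquiries": 9}
print(principal_reasons(declined, top_n=2))
# -> ['Delinquent past or present credit obligations', 'Length of credit history']
```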

Joe Kamyar:

So this all seems to suggest the bureau has heightened expectations when it comes to both transparency and specificity where AI models are used for these purposes?

Darren Welch:

Yeah, that definitely seems to be the case. For example, that circular I was mentioning says that if a creditor lowers a consumer's credit limit or closes an account based on some behavioral data, like the type of establishments where the consumer shops or what they purchased, it wouldn't be enough for the creditor to simply state "purchasing history" or something like "disfavored business patronage" as the adverse action reason. Rather, the lender would have to disclose the type of business where the consumer made the purchase and what they bought that led to the adverse action. That's what the CFPB is suggesting. And that's way more specific than anything that's in their model forms. And it really seems to be, I think, a shift in expectations about how specific these notices have to be.

Joe Kamyar:

That's interesting, because it shines a very clear light on the level of customer profiling now taking place. I'm curious to know, are regulators saying much on that topic?

Darren Welch:

Well, if you read between the lines, I think the point is that the regulators don't want lenders using that type of data in their models. Essentially, the CFPB is using a heightened adverse action specificity standard for those non-traditional data elements to, I think, really go after those practices that they substantively don't like. And when I'm talking about non-traditional data, I mean anything that's not on a standard credit report or the consumer's application, and anything that the consumer might not intuitively think is related to credit risk.

So that could include digital footprint or the devices you used, educational attainment, how far you went in college, purchasing history like we talked about, geolocation, those kinds of factors. And so by increasing the level of specificity required in notifications, it's really going to shine a spotlight on the use of big data and model factors that go beyond the standard credit report and application data points. And as I mentioned, ultimately we expect the bureau is not only concerned about the clarity of the adverse action statements, but also the substantive issues like discriminatory or unfair practices associated with these models and those variables.

Joe Kamyar:

So what does that actually mean for lenders in practice? How can they mitigate the risks in this space?

Darren Welch:

Yeah, there are several things that lenders can do. First, they may want to make sure that they understand how their models work and what factors those models consider, and take a look at how those factors map to the specific reasons that they list on their adverse action notices. And if you're a lender, you can just ask yourself, "Does this really explain to consumers what the model took into account to reach a certain decision?" Because the bureau is laser-focused on that question.
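
A hedged sketch of that mapping exercise: the check below confirms that every model input is covered by a specific, consumer-facing reason code, flagging any gaps. The feature names and the mapping are invented for illustration.

```python
# Hypothetical audit sketch: confirm every model input maps to a
# specific, consumer-facing adverse action reason. Feature names and
# the mapping below are invented for illustration.

MODEL_FEATURES = [
    "debt_to_income",
    "credit_utilization",
    "months_since_delinquency",
    "device_type",  # non-traditional factor with no mapped reason yet
]

FEATURE_TO_REASON = {
    "debt_to_income": "Income insufficient for amount of credit requested",
    "credit_utilization": "Proportion of balances to credit limits is too high",
    "months_since_delinquency": "Delinquent past or present credit obligations",
}

# Any unmapped feature is a gap: if it drives a denial, the adverse
# action notice cannot state a specific and accurate reason for it.
unmapped = [f for f in MODEL_FEATURES if f not in FEATURE_TO_REASON]
if unmapped:
    print("Features lacking a disclosable reason:", unmapped)
```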

Another big focus is on fair lending testing of models, and this includes assessing whether the model contains any prohibited factors such as age, sex, race or ethnicity, or factors that aren't specifically prohibited but might be close proxies for those prohibited factors. And there are ways to test for this. And then also testing whether the models are creating a disparate impact on a prohibited basis. And part of that disparate impact framework is assessing whether there might be potential alternative specifications of the model that would serve the lender's business purpose, but with a less discriminatory impact.
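
For a flavor of what basic testing might look like, here is a hypothetical sketch of two simple checks: an approval-rate comparison across groups (an adverse impact ratio) and a crude proxy screen based on correlation. The data, metrics and any implied thresholds are illustrative assumptions; real fair lending testing is considerably more rigorous.

```python
# Hypothetical sketch of two basic fair lending checks. The synthetic
# data and simple metrics are for illustration only; real testing
# typically involves regression-based and model-specific analyses.
import numpy as np

def adverse_impact_ratio(approved: np.ndarray, group: np.ndarray) -> float:
    """Ratio of the protected group's approval rate to the control
    group's approval rate (group: 1 = protected, 0 = control)."""
    return approved[group == 1].mean() / approved[group == 0].mean()

def proxy_correlation(feature: np.ndarray, group: np.ndarray) -> float:
    """Correlation between a model input and group membership; a high
    magnitude may flag the feature as a potential proxy."""
    return float(np.corrcoef(feature, group)[0, 1])

# Synthetic example data
rng = np.random.default_rng(0)
group = rng.integers(0, 2, 1000)                    # hypothetical labels
approved = rng.random(1000) < (0.6 - 0.1 * group)   # synthetic decisions
feature = group + rng.normal(0.0, 1.0, 1000)        # a suspect input

print(f"Adverse impact ratio: {adverse_impact_ratio(approved, group):.2f}")
print(f"Proxy correlation:    {proxy_correlation(feature, group):.2f}")
```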

And this is called searching for less discriminatory alternatives, or LDAs for short. And the CFPB and other regulators are very focused on whether lenders have a process for searching for potential LDAs for the models that they use.
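
And as a rough, hypothetical sketch of what an LDA search loop might look like, the code below refits a model with each candidate feature dropped, keeps the variants whose predictive performance stays within a tolerance of the baseline (so the business purpose is still served), and ranks those variants by disparity. The model choice, tolerance and disparity metric are all illustrative assumptions, not regulatory standards.

```python
# Hypothetical LDA search sketch: drop one feature at a time, keep
# variants that roughly preserve predictive performance, and rank them
# by approval-rate disparity. X is assumed to be a pandas DataFrame;
# y and group are aligned arrays of outcomes and group labels.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def disparity(model, X, group):
    """Approval-rate gap between control (0) and protected (1) group."""
    approve = model.predict(X)
    return approve[group == 0].mean() - approve[group == 1].mean()

def search_ldas(X, y, group, features, tolerance=0.01):
    baseline = LogisticRegression(max_iter=1000).fit(X[features], y)
    base_auc = roc_auc_score(y, baseline.predict_proba(X[features])[:, 1])
    candidates = []
    for dropped in features:
        kept = [f for f in features if f != dropped]
        m = LogisticRegression(max_iter=1000).fit(X[kept], y)
        auc = roc_auc_score(y, m.predict_proba(X[kept])[:, 1])
        if base_auc - auc <= tolerance:  # performance roughly preserved
            candidates.append((dropped, auc, disparity(m, X[kept], group)))
    # Variants with the smallest disparity come first
    return sorted(candidates, key=lambda c: abs(c[2]))
```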

Joe Kamyar:

So on the fair lending testing, are there regulatory standards around how that should look?

Darren Welch:

Unfortunately, not really, but there's no question that regulators expect lenders to engage in fair lending testing of their models. And they see this as one of the biggest things that lenders can do to guard against algorithmic bias and consumer harm from AI. And we've worked with a lot of clients on helping them set up their fair lending testing protocols and responding to regulatory inquiries about fair lending testing. And there are a lot of issues to take into account when you're looking at that.

For example, how do you do this less discriminatory alternative testing? How do you assess a model that's built by a third party, where you don't have great visibility into it? And what's the standard for whether something actually qualifies as a true less discriminatory alternative? Those are all really open questions, I'd say.

Joe Kamyar:

Okay. So up to now we've been focused on algorithms, but as we mentioned at the start, there's obviously a whole load of use cases for AI across fintech verticals. And one of those, which lots of people will be familiar with, is the use of AI chatbots. And I think there are lots of examples now where those are actually being used to handle customer service inquiries. Have you seen anything on the regulatory front in terms of concerns raised around the use of AI in that context?

Darren Welch:

Yes, definitely. That's another big focus for the CFPB right now. And I mean, I think we've all had frustrating experiences with chatbots at times. But this isn't just a customer service issue: the CFPB has raised some specific legal compliance issues with chatbots. First, chatbots might provide inaccurate information about the product or its terms and conditions, or about loss mitigation and servicing options and so forth when customers have a hard time paying their loan. And that can raise concerns about potential deceptive practices.

Second, consumers might invoke their statutory rights when using these chatbots to dispute a transaction. And then third, there's a specific section of the Dodd-Frank Act, Section 1034(c), that requires large banks and credit unions to respond to consumer requests for information about their financial products. And the CFPB has taken the position that requiring consumers to interact with a chatbot that doesn't really understand or adequately respond to their requests could violate that requirement.

Joe Kamyar:

I guess a lot of these concerns seem to be based on an assumption that chatbots, and so AI, are less intelligent and not as accurate as a human being. But obviously humans aren't perfect either, so much like the use of a calculator, that feels to me like a mindset that could potentially shift over time as people become more comfortable with the technology.

Darren Welch:

I think that's right. No question chatbots can provide a lot of value, and it's not like we're seeing institutions just abandon them in large waves. For now, I think there's a clear emphasis on ensuring that these technologies are robustly tested on an ongoing basis, both to make sure that the information provided through the chatbots is accurate and to adequately manage, and try to rule out, any risk that these chatbots might produce hallucinations or just totally inaccurate information. And in some circumstances, our clients are also looking at making sure that customers have an option to actually talk to a live person.
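
As one hedged example of what that kind of ongoing testing might look like, the sketch below replays a bank of vetted customer questions against a chatbot and checks each answer against compliance-approved content. The ask_chatbot function is a stand-in for whatever interface an institution's bot actually exposes, and the test cases are invented.

```python
# Hypothetical chatbot accuracy-regression sketch. ask_chatbot is a
# placeholder for the institution's real chatbot interface; the test
# cases and expected phrases are invented for illustration.

TEST_CASES = [
    {
        "question": "What is the late fee on my credit card?",
        "must_contain": ["$25"],               # compliance-approved fact
        "must_not_contain": ["no late fee"],   # known bad answer pattern
    },
    {
        "question": "How do I dispute a transaction?",
        "must_contain": ["60 days", "billing error"],
        "must_not_contain": [],
    },
]

def ask_chatbot(question: str) -> str:
    raise NotImplementedError("wire this up to the institution's chatbot")

def run_suite() -> list[str]:
    """Return a description of every test case the chatbot failed."""
    failures = []
    for case in TEST_CASES:
        answer = ask_chatbot(case["question"]).lower()
        missing = [s for s in case["must_contain"] if s.lower() not in answer]
        banned = [s for s in case["must_not_contain"] if s.lower() in answer]
        if missing or banned:
            failures.append(
                f"{case['question']!r}: missing={missing}, banned={banned}"
            )
    return failures
```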

Joe Kamyar:

So you've explained there's no federal AI consumer financial protection law, but what's the picture at a state level? We've obviously seen in other regulatory areas, such as data privacy, that states take the lead and potentially bring in more extensive regulations. So how is that playing out in this context?

Darren Welch:

Yeah, it's a great question. We are starting to see some activity at the state level, although it's still in the early stages. One of the first laws is the Colorado Artificial Intelligence Act, which was passed in May of this year. And that act creates one set of obligations for the users, or deployers, of models, as the act calls them, and a different set of standards for the model developers. And some of those key requirements include a duty to avoid algorithmic discrimination and to implement appropriate controls to prevent it.

Also, conducting impact assessments on the models, notifying consumers that AI models are being used, providing consumer opt-outs in some situations, and making additional disclosures when some type of algorithmic bias is detected. And that law doesn't take effect until February 2026, but our clients are already thinking ahead as to how they're going to comply with those requirements.

Joe Kamyar:

Very good. Well, sadly, that's it for today, Darren. Thanks again for joining the podcast.

Darren Welch:

Thanks, Joe. It's an interesting topic to follow. There are a lot of moving pieces on this.

Joe Kamyar:

Definitely is. And thanks everyone for listening. See you next time.

Voiceover:

Thank you for joining us on Fintech Focus. If you enjoyed this conversation, be sure to subscribe in your favorite podcast app so you don't miss any future conversations. Additional information about Skadden can be found at skadden.com.
