What Does AI Safety Mean for Fintechs?
Episode 2 • 16th April 2024 • Fintech Focus • Skadden, Arps, Slate, Meagher & Flom LLP


Shownotes

There have been big headlines in the world of AI over the past few weeks. 

In this episode of Skadden’s “Fintech Focus” podcast, host Joseph Kamyar is joined by Nicola Kerr-Shaw to discuss AI safety within the fintech sector. They delve into recent developments, including the EU AI Act and the U.K.-U.S. landmark agreement on AI safety. The episode also covers key takeaways from the EU AI Act for financial services entities, practical compliance strategies for fintechs, contractual risks that are starting to emerge for businesses deploying client-facing AI tools, and privacy questions surrounding the safeguarding of personal data in the AI space.

💡 Meet Your Host 💡

Name: Joseph Kamyar

Title: European Counsel, Corporate at Skadden

Specialty: “Fintech Focus” host and European counsel Joseph Kamyar advises on a wide variety of corporate transactions, including cross-border private mergers and acquisitions, fundraisings, joint ventures, corporate reorganizations and general corporate matters, with a particular focus on the financial services, technology and media sectors.

Connect: LinkedIn | Email

💡 Featured Guest 💡

Name: Nicola Kerr-Shaw

What she does: Counsel Nicola Kerr-Shaw, a key member of our global Cybersecurity and Data Privacy Practice and authority on AI-related issues, represents financial institutions, technology companies and other businesses in matters pertaining to AI, cybersecurity, data and privacy, and emerging technologies. She works in tandem with companies to creatively and effectively help them achieve their commercial goals.

Organization: Skadden

Words of wisdom: “Financial institutions have been using a form of AI for years and for a wide range of purposes. Financial services is also heavily regulated already. And so AI had therefore been developed within a sensible and controlled environment in a way which ensures continued compliance with financial regulation.”

Connect: LinkedIn | Email

Connect with Skadden

☑️ Follow us on Twitter & LinkedIn.

☑️ Subscribe to Fintech Focus on Apple Podcasts, Spotify, or your favorite podcast app.

Fintech Focus is a podcast by Skadden, Arps, Slate, Meagher & Flom LLP, and Affiliates. This podcast is provided for educational and informational purposes only and is not intended and should not be construed as legal advice. This podcast is considered advertising under applicable state laws.

Transcripts

Voiceover:

Welcome to Fintech Focus, Skadden's podcast for Fintech industry professionals. The global regulatory and legal updates you need start now.

Joseph Kamyar:

Hello and welcome back to Fintech Focus. Today, you've got me again, Joe Kamyar. I'm an M&A lawyer in Skadden's Fintech practice, and with me is Nicola Kerr-Shaw, who's one of our lead AI lawyers here in London. Nicola, thanks for joining.

Nicola Kerr-Shaw:

Thanks, Joe. It's great to be here.

Joseph Kamyar:

So the plan for today: we're going to unpack the term "AI safety" in the context of the fintech sector, and we've obviously had some big headlines in the world of AI over the past few weeks. So first of all, we saw the European Parliament approve the EU AI Act in the middle of March, and then this month we had the UK and US governments announcing their landmark agreement on AI safety, which neatly ties into today's topic. And I guess, when you look at some of the historic AI collaborations between the UK and US governments, there's traditionally been a heavy focus on national security; the examples I'm thinking of are the partnerships between GCHQ and the NSA in the US. That said, when you look at the detail of the current partnership (and, Nicola, correct me if you disagree), there seems to be less of a focus specifically on national security, and the scope is actually much broader.

Nicola Kerr-Shaw:

Yeah, no, Joe, I entirely agree. There's a huge focus on security within this MOU, but security is indeed broader than just national security. It isn't defined and doesn't appear to be limited to any particular type of AI or sector. So we have loads of questions we can ask ourselves. Is the MOU seeking to capture cybersecurity and the risk of attacks fueled by AI, or is it looking at the security of individuals and the risk of harm caused to them by misuse of AI? Or are we looking at the security of financial and other regulated systems, and the potential threat to those systems and their stability caused by AI? And how is this collaboration going to work in practice? There are a lot of questions. We don't know a lot, but what we do know is that this is a groundbreaking agreement and will enable the UK's new AI Safety Institute to work collaboratively with its US counterpart.


Together, they're going to develop what they're calling an interoperable programme of work and approach to safety research, in order to achieve their shared objectives on AI safety. This is going to include developing strong methods to evaluate the safety of AI tools and their underlying systems, along with sharing information with each other. So I think the definition of safety, going back to your point, has been left intentionally broad and is seeking to encapsulate many of the potential security concerns within AI. Generative AI is evolving at immense speed, as we've seen recently, and we don't yet know many of the risks. In fact, in the UK government's recent response to its AI regulation white paper, it specifically stated that it was holding off on any potential regulation of AI until its understanding of the risks has matured. So the MOU provides a framework to keep addressing AI security as the technology evolves and while our understanding of those risks matures.

Joseph Kamyar:

I always find it interesting how the concept of AI safety can mean so many different things in so many different contexts. And I guess instinctively, I suspect lots of people, thanks to Hollywood, associate AI safety with the idea of robots taking over the world, but clearly in the context of financial services, that's hopefully less of a concern. Now your background, Nicola, is an interesting one as you've only recently joined Skadden, having spent a number of years in the tech arm of a global financial institution. So I'm curious to know from your previous life before Skadden, what were the sorts of areas and risks keeping you up at night from an AI safety perspective?

Nicola Kerr-Shaw:

So my role and my focus was advising on a broad range of technology and IP issues across a large financial institution and its group of companies. Technology did include AI, but at least in recent years that was nothing novel or new. Financial institutions have been using a form of AI for years and for a wide range of purposes. Financial services is also heavily regulated already. And so AI had therefore been developed within a sensible and controlled environment in a way which ensures continued compliance with financial regulation. So AI safety was, and I think still is, a bit less of a concern for many banks.


A key focus point was understanding the algorithms and being able to explain them to any interested regulator. It also involved having proper control points to ensure safe testing and development. Understanding how AI can enhance services (for example, the huge developments in detecting fraud and financial crime), but also understanding its limits, was absolutely key. It's also fair to note that a lot of the use of AI within financial institutions has been machine learning and bots, rather than the newer, more difficult generative AI that has grabbed so many of the headlines this last year.

Joseph Kamyar:

So I guess we're saying financial institutions have in some respects already been paving the way with examples of safely deploying AI, given the highly regulated environments they're operating in. And I think it's fair to say that we're not seeing a call to arms for fresh AI-focused legislation in the UK; my sense of the government's positioning is that we have a principles-based approach that's adaptable and also sector- and regulator-led. Now that's pretty distinct from the proactive and much broader statutory approach that we're seeing in Europe, for example, through the EU AI Act, which we'll come on to. And when you then look at what the regulators actually say, again, there seems to be little appetite amongst the likes of the Bank of England, PRA and FCA for a sector-specific statutory framework. And instead the focus seems to be on understanding how the patchwork of existing regulatory frameworks applies to AI technologies. So Nicola, what in your view are the most material touchpoints with the existing regulatory regimes?

Nicola Kerr-Shaw:

Clearly, for entities providing financial services in the UK, the FCA rules and SYSC will continue to apply and are key touchpoints. This is nothing new, but these rules need to be interpreted with a specific AI tool in mind. As with any technology within financial services, key focus points will be market stability; maintaining due skill, care and diligence, and evidencing that; operational resilience; the consumer duty and treating customers fairly; and transparency with the regulator. The senior managers and certification regime will continue to apply and needs to be considered in relation to the development of any internal or external AI tool. In addition to this, the use of personal data within an AI model or system is also a key touchpoint for financial institutions. For some banks, particularly institutional banks, personal data is merely incidental to the use case in hand, with the real focus being on the corporate relationship. However, the UK GDPR still applies in full, and achieving transparency with data subjects in this context can be really quite challenging.

Joseph Kamyar:

Makes sense. And I guess, my sense from recent discussions with regulators, which you've also presented at, is that we can expect to see, at the very least, more guidance on how those regimes apply to AI. That said, do you actually think it's realistic that the UK continues to veer away from legislating for AI altogether? When you look at the industry feedback, the sense is that there's a push for a globally harmonized approach to AI regulation, and clearly that's not the case when you look at what's happening in the US and the UK vis-a-vis Europe. And a second question, if I can: are there any aspects of the existing regulatory regimes where you can see gaps forming as use cases for AI amongst fintechs and banks change over time?

Nicola Kerr-Shaw:

These are such interesting questions, Joe, and clearly having globally standardized laws is something we'd like across many, many different things, not just AI. I mean, I think there are strong use cases for AI within financial services, including things like AI-based creditworthiness assessments, fraud and financial crime detection, market research and drafting research papers, and risk assessments for insurance. The list goes on and on and on. And there's likely to be strong human input into most of these processes already, to the extent that they use AI. And so that, along with the requirements of other financial regulation, is arguably enough. However, it should be noted that some of these use cases, particularly those which involve individual consumers directly (for example, risk assessment for health insurance of an individual), will be considered high-risk AI under the new EU AI Act and so will be subject to heightened requirements.


We're expecting to see further development by the European standardization bodies here, and whether in fact there will be additional requirements for those within financial services. I mean, for the most part, the training and control frameworks are already in place within financial services, and this is likely to satisfy much of the EU AI Act. So arguably there's no need, in financial services at least, for additional regulation. Additional guidance, though, on how to evidence certain requirements or how to comply with regimes such as the GDPR would be very welcome. That all being said, for other industries, I think the case against additional regulation is less clear, particularly with generative AI, and I think this is where the government should focus, both in terms of additional regulation and guidance on how to comply with existing legislation.

Joseph Kamyar:

So on the topic of legislation, we've obviously both mentioned the EU AI Act, and that's now been formally approved by the EU Parliament within the past few weeks. It's obviously had various guises and iterations over a fairly protracted period, so perhaps you could give us a high-level overview, if that's possible, of where that piece of legislation has finally landed.

Nicola Kerr-Shaw:

I mean, that's a challenge. It's a vast piece of legislation, 270 pages, I think, so a high-level overview is a bit of a challenge in the time, but here are some key takeaways. First of all, all AI systems need to be categorized on a risk-based approach. There are minimal-risk systems which are already widely used, such as spam filters. They are, in fact, AI, and they are largely unregulated and will continue to be unregulated: proceed as normal. The other AI systems then need to be categorized according to their level of risk. There'll be transparency and disclosure requirements applying to certain limited-risk AI systems such as chatbots, where users need to be made aware that they're interacting with an AI system.


There are then high-risk systems. Examples of these within financial services could include things like AI creditworthiness checks or risk assessments for health insurance, where firms will need to implement training, appropriate human oversight, and maintenance of technical documentation to ensure identified risks are mitigated. There'll also be a need for fundamental rights impact assessments prior to deployment, which is very similar to what we've seen under the GDPR. Then there are AI systems which are prohibited as posing unacceptable risk, but it seems unlikely, in my experience at least, that financial services entities will be engaging with this type of AI.


Other key headlines on the EU AI Act include the need to respect intellectual property rights in the use of AI, which could impact research taken from the internet, and could also impact financial services entities in potentially needing to share trade secrets on where they source their research from. Entities will also be liable for breach of the Act even if the underlying technology is white-labeled or provided by a third party. And another key point to note is there'll be a new AI Office, which will sit within the Commission and will be tasked with overseeing the most advanced AI models. I could go on, but in the time, I think that's a good summary of the key highlights.

Joseph Kamyar:

Fair enough. Well, more specifically then: as I understand it, aspects of the Act will have extraterritorial effect. So if you're a UK-based fintech, what are the sorts of things you should be thinking about in practice, particularly for those fintechs looking to deploy AI either internally or as an integrated part of their client-facing products and services?

Nicola Kerr-Shaw:

I think there are a few things that all companies, including fintechs, can do now to prepare for the EU AI Act, but also to help ensure that they're complying with other regulation and legislation in other countries as well. Many fintechs will probably be doing a lot of these things already, but it's always good to have a checklist. Firstly, I'd suggest that companies create a central inventory of the AI systems and models that are deployed and being developed within the entity, and this should include all systems and models, regardless of whether they are internal or external.


The inventory should contain details such as the purpose of the AI, what data and IP are being processed, how the technology works, and what the use cases and intended purpose are. So, as much detail as possible. It should also reference any data protection impact assessment conducted in relation to that AI if it uses personal data, and note whether third-party technology is being used as part of it. Once the inventory is complete, companies can then look at that list and consider whether the AI Act would apply to each system and model and to what extent, and then record that within the inventory. They might also like to categorize each model and system according to which risk category it would fall under as part of the EU AI Act.
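To make that inventory concrete, here is a minimal sketch of what a single inventory record might capture, written in Python purely for illustration. Every field name, and the risk-tier labels, are our own shorthand for the points Nicola lists, not terms prescribed by the EU AI Act.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    """Illustrative labels for the EU AI Act's risk-based categories."""
    MINIMAL = "minimal"        # e.g. spam filters; largely unregulated
    LIMITED = "limited"        # transparency duties, e.g. chatbots
    HIGH = "high"              # e.g. creditworthiness checks, insurance risk assessments
    PROHIBITED = "prohibited"  # unacceptable-risk systems


@dataclass
class AIInventoryRecord:
    """One entry in a central inventory of AI systems and models."""
    name: str
    purpose: str                        # intended purpose and use cases
    how_it_works: str                   # plain-language description of the technology
    data_and_ip_processed: list[str]    # categories of data and IP used
    internal: bool                      # developed in-house vs. externally sourced
    uses_personal_data: bool
    dpia_reference: str | None = None   # link to any data protection impact assessment
    third_party_tech: list[str] = field(default_factory=list)
    ai_act_applies: bool | None = None  # assessed once the inventory is complete
    risk_tier: RiskTier | None = None   # recorded against each system and model
```

Leaving the applicability and risk-tier fields optional mirrors the two-pass approach described above: build the inventory first, then assess each entry against the Act.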


Next, the company's existing technology policies and procedures should be updated to ensure they specifically cover AI. This should include statements prohibiting certain types of AI and the specific action that should be taken when certain data or IP is going to be used. It should also include requirements to keep the inventory updated, and set out what review process needs to be undertaken before a use case can first be explored and then deployed by the company.


I mean, to that end, I suggest fintechs consider implementing an AI approval process, similar to how many financial institutions review and approve external vendors. This process would perhaps include an approval committee, which could be virtual or in-person, but would have representatives from risk, compliance, legal, the business and the CISO team, and potentially, if there's personal data involved, should include the DPO also. That committee would review all AI proposals and approve them before they go live. Finally, the inventory should then be reviewed in light of the EU AI Act and the company's processes as a whole to determine what gaps, if any, there are, and then form a remediation plan. We realize that companies have limited budget and time, so focus on the highest-risk AI systems and the biggest gaps first, and consider what new internal processes need to be created. As part of this, third-party contracts should be reviewed to ensure the appropriate due diligence has been conducted, there's an ongoing right of audit, and there are proper contractual protections in place.
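Continuing the same illustrative Python (reusing AIInventoryRecord and RiskTier from the sketch above), the approval gate described here might loosely be modeled as a check that blocks prohibited-risk systems and requires sign-off from each committee function before a proposal goes live. The function and the sign-off labels are hypothetical, not a real compliance control.

```python
# Committee functions drawn from the list above; labels are illustrative.
REQUIRED_SIGN_OFFS = {"risk", "compliance", "legal", "business", "ciso"}


def can_go_live(record: AIInventoryRecord, sign_offs: set[str]) -> bool:
    """Gate an AI proposal: screen out prohibited systems and require full sign-off."""
    if record.risk_tier is RiskTier.PROHIBITED:
        return False  # unacceptable-risk AI never goes live
    required = set(REQUIRED_SIGN_OFFS)
    if record.uses_personal_data:
        required.add("dpo")  # bring in the DPO when personal data is involved
    return required.issubset(sign_offs)
```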

Joseph Kamyar:

We've obviously been looking at this from a regulatory and legislative perspective up to now, but as AI is increasingly deployed in client-facing platforms, it's interesting to see some of the contractual risks which are starting to emerge for businesses. The other month, actually, I had a friend complain to me about a customer service chatbot that he'd introduced on his website: the chatbot had actually started offering discounts to clients, much to his despair. And it was interesting, particularly because it was around the time of the Air Canada ruling.

Nicola Kerr-Shaw:

Yeah, I mean, that's right. The Air Canada ruling was in February, and in that one a tribunal ruled that Air Canada was obliged to honor a discount offered by a chatbot, a further example of the measures and safeguards that firms will need to have in place to ensure that the outputs from their AI tools are accurate and not misleading. It's the same with chatbots used by banks and other financial services firms. If a chatbot offers a price on something or conducts a trade, but there was a technology error, can that trade or contract be undone? Often with services such as electronic trading, the electronic services agreement in place will state that trades can be unwound at the bank's discretion if there is a technology error. But that potentially becomes a lot more complex when facing consumers, individuals, rather than B2B. Also, there are questions like: where do you put the terms and conditions of the bot, and how do they become binding? Where does someone click to accept them when they're dealing with a bot?

Joseph Kamyar:

So sticking with the topic of safeguards, but this time through a data lens: what are you hearing from privacy regulators in terms of safeguarding personal data, and the sorts of policies, procedures and measures they're expecting to see in place?

Nicola Kerr-Shaw:

Privacy, in my view, is one of the most complex topics within generative AI. There's a lot of questions and not a lot of answers. For example, we have extensive laws already around transparency of data processing and use of tracking technologies such as cookies, but how many people actually understand how their data is processed or what a cookie does? Most people, I think, click the X box to close the window and proceed to the website or click yes to state they've understood the privacy policy even when they have not. But how do companies address this in the world of AI where technology is even more complex than a website or a small piece of code dropped on a machine? And how do developers of AI achieve transparency with a data subject whose data they have used to train a model, particularly when they've not necessarily got a direct relationship with that individual?


This is only one topic, and there are so many other potential privacy pitfalls, particularly around data minimization, legitimate interests, consent, and many more. We had some updated guidance from the ICO relating to AI back in 2023. However, following the government's response to its consultation on AI regulation, we're expecting to see more, including the ICO's strategic approach to AI, and the deadline for this is the 30th of April. Although, just to note, we have heard through our network that the ICO will be increasing its own use of AI. Our bots will be checking your bots, so to speak, which is an interesting development. So I think this is really an area to watch.

Joseph Kamyar:

Very good. Well, thanks very much, Nicola. Sadly, that's all we've got time for, and we really appreciate having you on the podcast today.

Nicola Kerr-Shaw:

Thanks, Joe. It's been an absolute pleasure. Lots of interesting developments here and so much we could talk about. We haven't got onto the topic of AI in cyber attacks and what can be done to minimize legal risk here, so perhaps that's one for another time.

Joseph Kamyar:

Sounds all right. And thanks again to everyone listening. See you next time.

Voiceover:

Thank you for joining us on Fintech Focus. If you enjoyed this conversation, be sure to subscribe in your favorite podcast app so you don't miss any future conversations. Additional information about Skadden can be found at skadden.com.
