Patrick Heinen:
Everybody's seeing what the technology can do, but translating this into real business use cases, that's a real challenge for most companies.
Nora Hocke:
AI is everywhere. It's dominating headlines, and it's shown up in almost every conversation we've had this season about the future of fintech.
Annika Melchert:
So we wanted to go beyond the buzz and unpack how AI could actually come to life inside financial services.
Nora Hocke:
This week on Fintech Files from BCG Platinion, we're joined by Patrick Heinen, Vice President of Solution Engineering at Salesforce, to explore the rise of agentic AI, what agents are, how they can be implemented successfully, and what this could mean for the fintech industry.
Patrick Heinen:
So I'm nearly 14 years with Salesforce. I started as a solution engineer, and I'm now Vice President of Solution Engineering, running multiple teams, primarily on the architecture side, and that also includes our AI technologies. I have an architecture background myself and have been very interested in AI for a long time. And Salesforce has an AI layer which is more than 10 years old, so it's not a new technology for us. Of course, with the generative capabilities, it gained a lot of interest, and we adopted it quite early. At a certain point, we had this idea: what about creating a podcast to help customers and prospects with the adoption of AI? Because there have been so many podcasts out there talking about the technology side, what a large language model is and how it works and things like this. But there has not been much around how do you get there? How do you adopt this? What are the great use cases and outcomes?
Nora Hocke:
Exciting. It's also funny that you mentioned Salesforce had this AI layer for 10 years already because I was actually working as an AI consultant seven years ago at IBM and it was already a thing, but obviously it was different. So I'm keen to get your perspective on what makes this moment that we're in right now so transformative.
Patrick Heinen:
I think it's the first time the pressure to adopt a technology comes from the employees and not from the C-level, because everybody's using AI in their private life, and people have identified that they can actually save time using this technology. And I think a lot of companies have been confronted with the fact that employees are using the technology, but the companies are completely out of control: they don't know what people are entering into the prompts, and they don't know if data is going out of their safe perimeters. That's the reason why a lot of companies have blocked websites like ChatGPT and Gemini, not because they fear people will stop working and let AI do it for them, but because they are simply out of control, and they are responsible for the compliance. And in Europe, and especially in Germany, there is a high compliance risk through GDPR. At the end, it's costing them a lot of money if they fail.
Nora Hocke:
So the big question is, how do you actually set up your organization to succeed with AI? What needs to be in place so this doesn't go off the rails?
Annika Melchert:
I think it really comes down to structure, being clear on what AI is there to do, who's accountable for it, and how it's managed once it's live.
Nora Hocke:
So it sounds like you need a clear framework to avoid people experimenting in isolation and leaders losing visibility.
Annika Melchert:
Exactly. And that's actually also how we think about it at BCG, supporting our clients with frameworks so that they can scale AI without things breaking.
Nora Hocke:
I know in your podcast you speak to a lot of C-level experts from academia as well. What would you say from your conversations, how are organizations shifting the way they think about or approach AI compared to other technologies?
Patrick Heinen:
That's actually a big topic. This year alone I've had conversations with 70-plus CIOs, and their biggest challenge is, first of all, that they see, oh, this is a great technology, but there's also a lot of hype in it. So there are a lot of companies coming to them and explaining the big value. The biggest gap, or I would even say the biggest mistake companies make when trying to find the right use cases, is that they start with very complex use cases where they think the AI is smarter than a human being and can do it better.
So think about it. You have employees who have a certain fear of adopting AI because they see the risk of losing their jobs. So we always position our technology as employees and agents together. That's success. And if you position it like this, you identify very easy use cases, like: I have a manual task to do every day, I have to copy data from A to B, and it's just a manual task. This is a pretty good example for building an agent which just does this one task.
Nora Hocke:
I remember a time in which companies would have tried to implement everything on an RPA platform, up until the point where you can't maintain it anymore.
Patrick Heinen:
Exactly. And you are not building one agent which can do everything. An agent has a single task and is very focused on that single task. But over time you will build multiple agents. We have 36 agents, and we have one master agent trained to understand what the other agents can do. You are just talking to the master agent, and the master agent will identify which agent in the second row is best suited to complete the task and will then forward the task to that agent. It handles the communication with the other agents, and that's very productive.
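The routing pattern Patrick describes can be sketched in a few lines of Python. This is a hypothetical illustration, not Salesforce's implementation; the agent names, capabilities, and the keyword-matching logic are invented for the example.

```python
# Minimal sketch of master-agent routing: the master agent matches an
# incoming request against each specialist agent's declared capability
# and forwards the task to the first match.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    name: str
    capability: str            # short description the master agent matches on
    handle: Callable[[str], str]

def order_status(task: str) -> str:
    return f"Order status looked up for: {task}"

def refund(task: str) -> str:
    return f"Refund processed for: {task}"

AGENTS = [
    Agent("order-agent", "order status", order_status),
    Agent("refund-agent", "refund", refund),
]

def master_agent(task: str) -> str:
    """Route the task to the first specialist whose capability matches."""
    for agent in AGENTS:
        if agent.capability in task.lower():
            return agent.handle(task)
    return "No suitable agent found."

print(master_agent("Customer asks about order status for #4711"))
```

A real orchestrator would use an LLM to classify the request rather than substring matching, but the shape is the same: one entry point, many narrow specialists.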
Annika Melchert:
So when Patrick refers to agents, what does he actually mean?
Nora Hocke:
He's talking about AI systems that go beyond responding to prompts. Agents can actually understand a goal, plan steps and take action, whether that's pulling data or triggering workflows or interacting with other systems.
Annika Melchert:
That makes agentic AI sound less like tools and more like digital teammates.
Patrick Heinen:
So it's like onboarding a new employee. You start with the role; you explain it to the agent in words: you are now a service agent, and your task is X, Y, Z. You are working for company X, Y, Z. This is the tone of language we are using. These are our products. These are... So the agent has the full context about the role. Then you give the agent access to data: these are our order systems, here you can look up the orders, these are the products, this is our knowledge base. But beyond this data, the agent is not able to search anywhere else. If the answer is not in this data, the agent will not give any answer. And this also reduces hallucinations, because the agent can only make decisions based on your data. If your data has high quality, the risk of giving a wrong answer to your customers is pretty low.
Then you give the agent the tasks: you can take an order, you can look up a product, you can send a knowledge base article, whatever. And then you give the agent guardrails. If the agent identifies from the customer's tone of language that the customer is angry, it may respond with, oh, I'll give you a discount or something like this, because that's the way large language models work. If you are talking to a large language model like ChatGPT, they're also very friendly: "Oh, that sounds very bad."
Nora Hocke:
Very reaffirming as well.
Patrick Heinen:
Exactly, exactly.
Nora Hocke:
Great idea.
Patrick Heinen:
And this is why we put in guardrails, so that you don't end up with a result and a communication to your customers which you don't want. That's very important. And the last step is a channel: how do you communicate with the agent? Are you using your mobile app? Are you using Slack? Are you using Teams? Are you using a chatbot on your website? So that's the way you build an agent, and it's pretty simple.
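The five-step setup Patrick walks through (role, data, actions, guardrails, channel) can be expressed as a plain configuration. This is a hedged sketch only; the field names and values are illustrative, and real agent platforms each have their own schema.

```python
# Hypothetical agent definition covering the five building blocks
# described above: role, data sources, actions, guardrails, channels.

agent_config = {
    "role": (
        "You are a service agent for ACME Corp. "
        "Use a friendly, professional tone. Our products are X, Y, Z."
    ),
    "data_sources": ["orders_db", "product_catalog", "knowledge_base"],
    "actions": ["take_order", "look_up_product", "send_kb_article"],
    "guardrails": [
        "Never offer discounts or refunds without an explicit policy rule.",
        "If the answer is not in the connected data, say you don't know.",
    ],
    "channels": ["website_chat", "slack", "mobile_app"],
}

def validate(config: dict) -> bool:
    """Minimal check that every required building block is present and non-empty."""
    required = {"role", "data_sources", "actions", "guardrails", "channels"}
    return required.issubset(config) and all(config[k] for k in required)

print(validate(agent_config))  # → True
```

The point of the validation step is the same one Patrick makes: an agent missing any of these blocks, especially guardrails, should not go live.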
Nora Hocke:
Sounds pretty straightforward. Almost easier than onboarding-
Patrick Heinen:
A new employee.
Nora Hocke:
A service center. Yeah, literally.
Patrick Heinen:
And you know what? This is an interesting discussion I've had with customers. From a legal perspective, the answers the agent gives are the responsibility of the company operating the agent. So from a legal perspective, an agent is framed like a digital employee. Once the agent says you get a 100% discount off your flight ticket because it was late, the company has to pay the customer.
Nora Hocke:
It's binding.
Patrick Heinen:
It's binding. Exactly. So when we talk about technology like a service agent, then some customers are asking me, "Okay, who's taking the risk?" I said, "You are taking the risk."
"Okay, but we cannot do it." And I said, "But you can do this with your service employees. Who's taking the risk if a service employee does a mistake?"
"Yeah, but that's a human being." And I said, "Yeah, but if you have put the right guardrails and the right actions into an agent, I would say the risk for an agent is even lower than for an employee, because the agent does what you tell it, and an employee might not." That's the big difference.
Nora Hocke:
What would you say are foundational elements you need to have in place to make an agent work?
Patrick Heinen:
For an agent, you need semantics. Semantics is the way to describe the data. For example, you have a number, one million. With just the number, the agent can't do anything. If you add metadata, like: the number one million is the opportunity amount, that adds meaning. But the semantics really make the difference, because the semantics explain that an opportunity above one million is a hot opportunity and below one million is a cold opportunity. Then the agent can really understand what the data means and put it into the right context. And what Salesforce has built is a data platform, Data360, bringing all this together. We can connect every data source, even with zero copy. If you have a Snowflake instance, you don't have to move all the data to Salesforce to use this technology. We can just stream it into the context where you need it.
Then we enhance it with metadata and semantics, and then we can provide it to an agent and the agent can use it. So that's one of the prerequisites. And then we have a trust layer where you can put in information like: which data, from a compliance perspective, are you allowed to use? You don't want to leave this decision to your employees, because either they don't know or they don't care, and that's the risk for the company. And if you think about all the capabilities you need to run enterprise AI, I would say it's 50-plus capabilities. Of course, you can build this all yourself, but what our customers have also identified is that the technology is evolving so fast that once they build it, in a few months it's commodity and everything's there, so it's just a waste of money and a waste of time.
The innovation is moving so fast that you just have to adopt it, and that's a lot easier than building it all yourself.
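The metadata-versus-semantics distinction in Patrick's example can be made concrete in a few lines. This is an assumption-laden sketch: the threshold, labels, and field names come only from his one-million-opportunity example, not from any real semantic-layer product.

```python
# A raw value becomes useful to an agent in two steps:
# metadata says what the number *is*; semantics say what it *means*.

raw_value = 1_500_000

# Metadata: the number is an opportunity amount.
field = {"name": "opportunity_amount", "value": raw_value, "currency": "EUR"}

# Semantics: the business rule from the example above.
def classify_opportunity(amount: float) -> str:
    """Above one million is a 'hot' opportunity, below is 'cold'."""
    return "hot" if amount > 1_000_000 else "cold"

field["semantic_label"] = classify_opportunity(field["value"])
print(field["semantic_label"])  # → hot
```

An agent reading `field` now sees not just a number but a named, classified business fact it can act on.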
Nora Hocke:
I really like this point because it sounds so simple when you say it out loud. Just add the business logic on top of the data, but in reality, we both know that's incredibly hard.
Annika Melchert:
Definitely. And you know, Nora, BCG research recently actually put a number on this. A report from April '25 showed that about 75% of technology leaders worry about what they call silent failure, which means spending on AI without seeing real impact, because much of that business logic never makes it into shared systems.
Nora Hocke:
That helps explain why these efforts stop. When logic lives in people's heads or outdated docs, agents don't have the context they need.
Annika Melchert:
And that's why these conversations keep coming back to data and not just having more of it, but having high-quality data that actually reflects how the business works.
Patrick Heinen:
I think a lot of companies have been fighting with their data and their data quality for decades now. If you don't have the right data quality, you will also not be able to use an agent to get a meaningful, qualitative outcome. For me, it's a simple story, because in most cases the business comes and says, "Here, IT, this is the data we are providing to you. Please build us an agent. We want to get everything out." But IT is not responsible for the data quality, because the business side is generating the data. So you have to put the responsibility back on the business and say, "Hey guys, if you are not producing quality data, we can't help you with an agent either."
Nora Hocke:
Surprise. The basics still matter and you need good data.
Patrick Heinen:
Exactly.
Nora Hocke:
So what about applying all of this to financial services and more regulated industries?
Annika Melchert:
What we tend to see in practice is agents being used in very specific, contained ways, like, for instance, compliance checks, document review, or supporting loan and onboarding workflows. We recently had one example in the UAE where we implemented a credit card recommendation agent, which is a quite cool example of how this can work in real life.
Nora Hocke:
And those may not always sound flashy, but they are high impact. And because the stakes are so high, these use cases force institutions to think carefully about risk from the start.
Annika Melchert:
Which is why the conversations quickly turn into frameworks, guardrails and trust.
Patrick Heinen:
I think it's very important for them to use the technology with solid frameworks where something like this trust layer has been implemented, because they are highly regulated, and the authorities will take a very strict look at how they are using AI and whether they are using it with the right data. In most cases, when I started the conversation with those companies, that was a high-priority topic: how can we get rid of the risk of using AI with the wrong prompts and the wrong source information? And when I was speaking to them, it was very clear that, although their people are very well trained, they will not simply put this technology into the hands of the employees. We have some examples with banks which are fully digital, so that from the customer side you are fully autonomous. You're working with systems; you are barely working with people. It's pretty fast, it's pretty easy for the customer. I'm using an online bank myself.
On the consumer side, these are things where I think we will see a lot more banks moving into this. And of course, from my point of view there will always be banks which also have a physical presence, because you also have to consider the demographics: there are older people who might not be used to a mobile-application-only bank. So this is one area. And when it comes to investments, like stocks, crypto, even gold and silver, purely digital solutions are coming up as well. There I see a lot of advantage with agents, because they can explain the complexity and can put a lot of data together, explaining certain concepts to the customers.
If you go to an investment company and say, "Here's my money, do it for me," that still involves a lot of manual steps, which customers also see. So I think this is an area where optimization is possible with agents.
Nora Hocke:
So one thing that I sometimes think about also when using AI myself is a bit the balance between it being very convenient, but then at times also not delivering to the quality that I would want it to.
Patrick Heinen:
Yeah.
Nora Hocke:
So, how do you think leaders specifically should apply AI going forward, also when making critical decisions?
Patrick Heinen:
The highest goal should be to reduce the amount of data involved, to make it very specific, because the reason for the hallucinations you see in your private usage is this: if you put in the same prompt 10 times, you will get 10 different answers. That's a pretty easy test to do. The answers might go in the same direction and have the same meaning, but it happens because the amount of data involved is so huge. In a business context, you can use public large language models, but you are primarily using the language-interpretation capability of the large language model, not its source data. That is the big difference. So if you reduce the source data to very high-quality, proven data with facts, then you will immediately get completely different results.
And if, on top of this, you also involve a good reasoning engine which is trained with your language and your context, then you can reduce hallucinations to a minimum.
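The grounding idea Patrick describes, using the model only for language while answering strictly from a small set of vetted facts, can be sketched as follows. This is a toy illustration: the fact store and the keyword retrieval are invented, and real systems would use embedding-based retrieval plus an LLM to phrase the answer.

```python
# Answer only from vetted facts; refuse when nothing matches.
# This is the core of why restricting source data reduces hallucination:
# the model never gets a chance to invent an answer.

FACTS = {
    "return policy": "Orders can be returned within 30 days.",
    "shipping time": "Standard shipping takes 2 to 4 business days.",
}

def grounded_answer(question: str) -> str:
    """Return a vetted fact if one matches the question; otherwise refuse."""
    for topic, fact in FACTS.items():
        if topic in question.lower():
            return fact  # a real system would have an LLM rephrase this fact
    return "I don't have verified information on that."

print(grounded_answer("What is your return policy?"))
```

The refusal branch is the important part: as Patrick says of agents, if the answer is not in the data, no answer is given.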
Nora Hocke:
Reducing hallucinations through better data and context is only part of the equation though.
Annika Melchert:
When agents are built on rapidly evolving models, the real work starts after deployment with monitoring, iteration, and long-term ownership.
Patrick Heinen:
If you think you build an agent and then you are done forever, that's a misconception, because everything around the agent changes. You see it in the evolution of the large language models. If you build agents or automations on ChatGPT, for example, it could be that with a new release they are not working anymore, and you have to pick them up again and change things. The technology is evolving very fast, and this is why you also have to keep looking at the agents. What we also built into the platform is a dashboard. On this dashboard you see the adoption of the agent and you see the feedback, because we put in a feedback loop for your employees. They can simply say thumbs up or thumbs down: has the agent done the work or not? Did I get any value out of it?
And this is what you have to continuously monitor, including the problems. You can measure the quality, you can measure the adoption. This is why an agent follows a lifecycle, a development lifecycle.
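The thumbs-up/thumbs-down feedback loop Patrick mentions reduces to a small amount of plumbing. The sketch below is hypothetical; storage, function names, and the metric are illustrative, not a real dashboard API.

```python
# Collect per-interaction ratings and surface a simple quality metric,
# the kind of number an agent dashboard would chart over time.

feedback_log: list[dict] = []

def record_feedback(agent: str, thumbs_up: bool) -> None:
    """Store one employee rating for one agent interaction."""
    feedback_log.append({"agent": agent, "thumbs_up": thumbs_up})

def approval_rate(agent: str) -> float:
    """Share of interactions rated thumbs-up for one agent (0.0 if none)."""
    votes = [f["thumbs_up"] for f in feedback_log if f["agent"] == agent]
    return sum(votes) / len(votes) if votes else 0.0

record_feedback("order-agent", True)
record_feedback("order-agent", True)
record_feedback("order-agent", False)
print(f"{approval_rate('order-agent'):.2f}")  # → 0.67
```

Tracking this rate per release is what turns "build once" into the development lifecycle described above: a drop after a model update is the signal to pick the agent up again.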
Annika Melchert:
It's tempting to think of agents as something you build once and move on from.
Nora Hocke:
Yeah, that'd be nice. But as the underlying models change, the behavior of those agents changes too, which means they need to be monitored, adjusted, and actively managed.
Looking at the next five to 10 years, where do you think AI will take us?
Patrick Heinen:
That's a pretty challenging question, because I wouldn't even be able to say what will happen in the next five months. But what I definitely see is that the evolution of the capabilities of the large language models is slowing down. I think the generative technology is now at a point where everybody has seen its capabilities, but we need to get it into real-value use cases. This is the point where we are, and maybe, for all of the tech companies: if we are not getting into adoption and consumption now, I think the bubble will burst pretty soon. I can tell you, from our own microcosm, we have definitely been able to get adoption up. We have over 12,000 customers worldwide using our agents already, there are so many great use cases, and I personally see real value in those agents.
At the moment it's pretty simple use cases, but these simple use cases are wasting the time of employees and also customers, and we can make the world a little bit better just by adopting those technologies. And if everybody then has more time for hobbies and vacations and things like this, I think that is a very positive outcome.
Nora Hocke:
I would love that. I'm signed up, definitely.
Annika Melchert:
It's super exciting to finally dig into AI this season, especially the topic of agents.
Nora Hocke:
Absolutely. And it's obviously one of the topics that's very close to my heart, with AI and data. So Annika, I'm very curious: from your observations in the Middle East region specifically, what are the use cases, or even more broadly, the areas where agentic AI delivers the most value right now?
Annika Melchert:
It's super cool to see how the whole agentic AI topic is booming here in the region. I'm not sure if you heard, but a few months ago the first Saudi native AI model was launched, to really make sure that those agents are also able to talk in Arabic and don't need all the English translations. So they're really pushing to make these agents sound as natural as possible, which is something I personally really love, since it's then not about having one international model that fits all, but about tailoring the agents to each specific region.
Nora Hocke:
It's pretty amazing and must be a huge enabler for the region, isn't it?
Annika Melchert:
That's super cool. And looking at the projects we deliver as BCG, I think by now it's more than 300 agents across clients, which really unlock what clients need when it comes to cost reduction, faster execution, and, of course, a huge productivity uplift.
Nora Hocke:
Fully agree. Yeah. Also in Australia I see similar movements, and we're actually working a lot with our clients, together with BCGX, on delivering agents and gen AI use cases. Interestingly, in financial services specifically, I think one of the areas where you can get a lot of value very quickly is the origination process, specifically credit origination, because by nature the process needs to consider a lot of inputs. Some of that is structured, but some of it sits in documents like a land register or a company profile, if we're talking business clients. So there are a lot of documents, a lot of information to process as part of that. And what AI is really good at is processing a lot of information at once. So there are actually huge efficiencies banks can unlock when applying AI in the origination process.
Annika Melchert:
That's super cool. And actually implementing these agents together with our friends from BCGX, I think that's even cooler: not just doing the strategy, but really making it come to life.
Nora Hocke:
100%. Super exciting. And so dear listeners, if you're keen to know more about how to best implement AI, how to build successful AI solutions, BCG has an AI platform group, and they actually released a super exciting paper just recently on how to build effective enterprise agents. So we'll put a link to that in the show notes. Have a look and let us know what you think.
Annika Melchert:
This has been Fintech Files, a podcast from BCG Platinion.
Nora Hocke:
This season, we're digging deep into the groundbreaking ideas that are reshaping the future of Fintech. We've got amazing guests lined up, so make sure that you're subscribed and never miss an episode.
Annika Melchert:
Thank you so much for tuning in. We'll see you next time on Fintech Files.