📍 Hi, I'm Drex DeFord, a recovering CIO from several large health systems, and a longtime cyber advisor and strategist for some of the world's most innovative security companies. And now I'm president of This Week Health's 229 Cyber and Risk Community, and this is Unhack the Podcast, a mostly plain-English, mostly non-technical show about cybersecurity and risk, and the people, process, and technology making healthcare more secure.
A special thanks to our partners Fortified Health and CrowdStrike for helping us cut through the noise in this ever-evolving cybersecurity landscape. Thanks for being a part of the community. And now, this episode of Unhack the Podcast.
📍 While these stats change regularly, major cybersecurity companies and other intel organizations are now tracking up to 300 unique threat actors, including more than half from nation states like Russia, Iran, and North Korea, and another 50-plus that are purely ransomware or e-crime focused. Several of those are focused on healthcare, and in particular U.S. healthcare. The organizations tracking these adversaries pay close attention to how these bad guys behave when they're attacking, making note of the tools they use, the infrastructure they rely on, and the progressions of their work. In other words, the way they move from one step to the next during an attack.
If you think about how an NFL player watches video of an opponent to understand how the quarterback reads the defense, or how many steps he takes when he drops back from center, or how likely he is to move out of the pocket and run during a specific defensive scenario, you can begin to understand how the profiles of cyber enemies are created.
Security professionals are regularly creating these fingerprints of how specific adversary groups behave so they can better prepare and better defend against them. But lately, the bad guys have brought a new tool to the table: artificial intelligence. And they're using the new tech. Some of it has been built by the adversaries themselves, and likely some of it has been stolen.
They use these new tools to become more productive, to become better and faster, using the tools to help write computer code or computer scripts, and doing all of that more quickly when they don't have access to technical teammates who code in that specific language. Or they might use generative AI tools to help craft well-written emails to an American CFO, even when they don't speak English as their primary language.
They also use AI to build organizational profiles about potential targets and victims, identifying who's most likely to manage the organization's information or network infrastructure or banking info. They then build profiles of information on those individuals as part of a plan to social engineer that targeted person or even social engineer one of their friends or work associates as a way to gain access to systems or accounts.
One of the other ways they're using AI is to do deep dives on other caches of stolen customer passwords. Say, for example, a database that's been stolen from one of your favorite online retail stores. That stuff happens a lot. And how often do you get letters from one of your favorite stores telling you that they've had a cyber incident and that some of your information may have been exposed?
And here's some free credit monitoring. When those breaches happen, that stolen data from that store or business often winds up for sale on the dark web. And cyberthugs then buy that information and compare it to the names they're looking to exploit, using generative AI to do most or all of the mundane work.
Looking for a person match, and then hoping that the stolen password associated with that person might've been reused by that individual in their work environment. And that happens way more often than most of us would like to think. That's why we always say: don't reuse your passwords. Bad guys use AI along with those stolen databases to find your username and password, so they can try them on other platforms, including the place you work. That's just a few examples of how the bad guys are using AI today, and I have literally dozens of other real-world examples that I can share with you, and will, on future episodes. Now one more thing, and this won't be the last time that I say this, because with all the work going on with generative AI, we'll be diving into this topic regularly as the world unfolds this amazing technology.
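The matching step described above can be sketched from the defender's side too: compare a stolen-database dump against your own employee directory and flag any accounts that should be force-reset before someone tries the reused password at work. This is a minimal, hypothetical illustration; all names, addresses, and data below are invented, and real breach dumps and directories are obviously far messier.

```python
# A minimal sketch of breach-dump matching, seen from the defender's
# side: find which employee accounts appear in a stolen database so
# their passwords can be reset before a credential-stuffing attempt.
# All data here is hypothetical.

def normalize(email):
    """Lowercase and trim so near-identical addresses still match."""
    return email.strip().lower()

def flag_exposed_accounts(breach_records, employee_emails):
    """Return the set of employee emails found in the breach dump."""
    employees = {normalize(e) for e in employee_emails}
    exposed = set()
    for record in breach_records:
        email = normalize(record.get("email", ""))
        if email in employees:
            exposed.add(email)
    return exposed

breach_dump = [
    {"email": "Pat.Jones@example.com", "password": "redacted"},
    {"email": "shopper42@example.net", "password": "redacted"},
]
staff = ["pat.jones@example.com", "sam.lee@example.com"]

print(sorted(flag_exposed_accounts(breach_dump, staff)))
# → ['pat.jones@example.com']
```

The adversary runs essentially the same join at scale, with generative AI doing the tedious name-matching; defenders who run it first get to force the reset before the login attempt ever happens.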
So here's the thing: while AI makes the bad guys faster and better, it's actually doing the same thing for the good guys. And I'm going to talk to one of the good guys, or in this case good ladies, on today's Unhack the Podcast.
📍 Hey, welcome to Unhack the Podcast, and I am joined by Shawna Hofer, the CISO at St. Luke's in Boise. I'm really glad you're here. We've known each other for a while, and we've done some kind of cool stuff together over the past couple of years. Why don't you start just by telling a little bit about yourself and the job and the role and the stuff you're doing? Sure. Yeah, Drex, thanks for having me.
It's been great, both working with you and getting to know you over the years as well. So I work for St. Luke's Health System, and I know there are several St. Luke's health systems. The one I serve is located in Idaho, based out of Boise. And St. Luke's is community driven. We're a seven-hospital system, several hundred clinics.
And we are the largest private employer in Idaho. And really, I think, we have just an amazing opportunity to be that health system that people think about when they want to receive care in Southern Idaho. There's just something really cool about working for the company that everyone thinks about when they get healthcare, right?
It gives us a really good name. It gives us a great reputation. And it really, from a cybersecurity leadership lens, helps me hire and retain, because this is the place that people want to be. So yeah, I love that. And so tell me a little bit about your background too, because you've been there for a while.
Yeah, this will be 10 years at St. Luke's. I'm going on 10 years, and I've been leading cybersecurity for eight of those. Before St. Luke's, and I make this joke all the time, I worked for Deloitte, and St. Luke's was one of my clients, but really my least favorite client, because they were just really complex and they were hard to do controls testing on, really just a pain.
And now here I am all these years later. But yeah, it's been a great journey. So your other clients were not healthcare organizations? Is that part of the complexity? Must be, it must be, yeah. I know, it's definitely a thing. One of the things we're going to talk about today is artificial intelligence, and the stuff that's happening with the bad guys in AI and the stuff that's happening with the good guys.
And the AI stuff too. Just to open it up for initial thoughts there: what are you guys working on? The AI question is, like, everybody's talking about AI right now. What's happening in clinical, what's happening in business operations, that kind of stuff. But obviously, ultimately, we want to get around to what you're thinking about when it comes to AI and cyber.
You bet. From a business and clinical lens, it's already spreading like wildfire, as it is everywhere else. And I think back to this conversation a year ago, and it's amazing that it's already been a year that we've been on this fast track, but it was: let's start with small use, and let's focus on things that are the lowest risk from a patient safety lens, of course.
I think, just like everyone else, we jumped in with Epic, and we jumped in with Microsoft, the big ones that you felt safe doing that with. And as more and more use cases come up, your risk tolerance adjusts a little bit, because the use cases that are being brought are so interesting and they're helping solve real problems, right?
They're helping solve problems that our caregivers are struggling with, and they're intriguing, and so far we have felt safe in pursuing those. Yeah, we started with Epic In Basket just like everyone else, done some of the virtual center. We're looking into ambient listening, and really just some of the opportunities to give our caregivers back their time. And there's just a ton of excitement.
Yeah, there's a couple of things that I see happening. One of them is people trying to figure out how to take the boring part of their job out of their job by using generative AI to help them. The other part of this is that it turns out that tomorrow, when you download the latest version of the application that you're using, whatever it is, of the, I'm guessing, 500 applications that you have in your organization, or a thousand applications that you have in your organization.
Some of them are SaaS and don't even run on premise, right? Suddenly there's also a little AI button that says, do something else. There was no use case for it. Nobody came to you with it. It's just a thing that appeared. How are you dealing with those? It's great and challenging at the same time, because from a cybersecurity lens, you want to stay on top of all of that.
You want to be able to risk assess and make decisions about whether you bring in those capabilities. But as you mentioned, some of these are SaaS, and you don't get to make decisions about some of the capabilities that they just turn on. And so I think, for us, yeah, we want to embrace that, but you have to really think about how to educate and provide leadership and guidance outside the bounds of just what people are going to have available to them. You just need to assume that it's all going to be there. It's going to continue. That's a train that we are not stopping. And so how do you really come around it, maybe more from a human lens than from a technology lens?
Yeah, from the training, and from the, this is what good behavior with generative AI looks like, think about these. And it's hard to do too, because sometimes you feel like you need to lay out the 25 rules. English is a terrible language and can be vague, and everybody is trying to figure out how to get the work done, and so they try to live up to the letter of the law, but they might not live up to the spirit of the law.
That's the challenge too. Yeah, but you can't just keep creating rules, right? All these different use cases keep coming up, and you're gonna just keep coming up with new rules, right? At some point you have to just take a step back and say, we need people to think critically. Yeah, culture and philosophy.
Yeah. And I think that one of the things we're really starting to come around to is that when we think about AI, yes, there are new capabilities, but some of the learnings and some of the education and training that we need to come around to aren't necessarily AI specific.
They're really just general guidance that we never felt the need to prioritize, because it wasn't as prolific as it is now. So for example, something for us that we've had a lot of conversations around is the ability to very easily record and transcribe meetings, and the risk around that transcription. If that transcription isn't accurate, how does it get shared?
Who does it get shared with? Who gets to store that? That isn't an AI challenge. It's more prolific now that it's embedded in the different technology that we have. But that's really a governance decision around how do you manage, and when are you allowed to record things. And so really, some of that for us is understanding what of this is AI specific, and how do we lead from an AI lens, versus what of this just needs to be embedded in the day to day.
Yeah, it's interesting. We've all had, for years, a data governance problem in general, right? Okay, we have this data element in five different systems. Which one is the source of truth? That version of sort of data governance and prioritization. And now along comes AI, and we have this whole other sort of challenge and issue of blocks of data, like you said, transcriptions en masse, that we also have to figure out how we're going to deal with, and the content of that, and the sensitivity of some of the things that are in those conversations. In some ways it's unearthing the dirty little secrets that we never really got around to, right? And now we can't hide. Taking this back into the direction of cybersecurity: you have a great cybersecurity program.
You've made great progress over the years, continued to turn up the volume and lean into it there. You've built a great, solid team. Like you said, sometimes the reason that people want to come to work there is because that's the place that everybody knows. So you've been successful in kind of building that team too.
What are you thinking about AI? How are you using AI as part of the cybersecurity work that you've built out there? Yeah, I think for us, we're still trying to figure that out. I think trying to get the team comfortable using it day to day is really our starting point, and really learning and flexing those boundaries. If there's anyone who I trust in doing that, it's the cybersecurity team, certainly. And I think, since we're so early on, it's a good time to do that dabbling before we dive in too far to, now, how do we rethink the way that we work with this? Because I think part of the capabilities that I'm excited about, that are down the road with cybersecurity and AI, most likely are going to come from some of those big partners.
And I think they're all trying to figure that out and what that looks like. And certainly what it will cost will be part of that angle. And so I think, as we catch up and learn and figure out what we want out of it, so too will our vendors be catching up and figuring out how they're going to offer it, so that by the time we're ready, we've got just great options to choose from.
So we haven't dove in on the cybersecurity side of it, in selecting a partner or partners and what we want that to look like. I would say for us, our priority really has been how do we serve the business, and how do we help the business move forward on that side so far. It's interesting, because some of the things we've talked about are also the things that happen in cybersecurity.
So there are artificial intelligence capabilities that are built into some of the products that you use. And it may not even be entirely visible to you, because it's the way that the company, your partner, now is working in the background to do some of the work that they're doing. Some of them are visible and are turned on for you to start to use and understand.
And some of them, like you said, are going to be a new, yep, line item in the invoice if you want to be able to do it. So in a lot of ways, it's a lot like maybe the Epic conversation of what gets boarded in first. Yeah, exactly right. Yeah. And I think it's going to be really important, right?
Because part of this is, and what you and I have talked about, and I think what we all know, is that the fear that comes with artificial intelligence is that it's going to be a tool in the pocket of the wrong hands, right? The bad guys, as you were mentioning. And I think we have to meet that, right?
We have to use the same tooling in order to be on top of that. And so I think it'll be interesting to see the evolution of capabilities. As you mentioned, some are going to just come with the products, and some are going to come with the suite, because that's what we're going to be buying in on in the future: how easy can you make this, and how quickly can we move?
And it'll just be part of what we're buying overall, rather than a single line item. And I would anticipate the conversation looks a lot different even a year from now. It's an expectation. It's not an add-on. It's funny to think about. If we go back, and we don't have to rewind very far, but just like a year ago, we started talking about generative AI. Maybe a little more than a year ago, it's hard to specifically put my finger on it. Yeah. But then we look at where we were then and where we are now, and the sort of massive leap that's happened in a really short period of time. And we've integrated some of these tools into the work that we're doing.
But it's so hard to look forward, right? What does it look like in a year? What does it look like two years from now? If we talk about this again, probably even in six months, it'll be a whole different ballgame. I agree. And I've been reflecting on that.
What does it look like one or two or three years from now? And I think about, what does it look like five to 10 years from now? Because I look at the way that I love leveraging it. As you said, it takes away some of the mundane tasks, and I find that I'm leveraging it to do things for me that just take time in my brain. I can do them so much faster. And I think about what that is going to mean for me and my skills and my capabilities five or ten years from now. And not just me, but I look at my kids, I have young kids, who are going to be growing up with this technology. What is it going to mean for them? Are they ever going to learn those mundane but necessary skills that I'm now just bypassing, because they're going to have these at their fingertips? It's so interesting to think about what the future will be with this, just in our day to day. I wonder about that. I grew up in an era where you had to learn all the math.
And you couldn't use a calculator, and then calculators came along. Calculators were allowed in the classroom, and you were able to do your work more quickly because you made the assumption that this tool was going to be there and available to you. And then we went into the era where the internet came along, and I'm sure the same conversations were happening: what are we going to do?
These kids these days, they're not going to know how to use dictionaries. Now, here we are talking about data governance, and these are really hard problems, but I feel like that's the upskilling, ultimately, that our kids, our staff, other people on the team are going to go through. Hopefully it does turn into that, right? I don't have to learn how to shoe horses and fix buggy wagon wheels anymore. Those seemed to be really important skills at some point in our history, but maybe the equivalents of those today will not be the skills of the future. And if you think about that from the lens of a cybersecurity team, I wonder what those skills look like in the future too, right?
Thinking about how do you support, like, a SOC environment, for example, those mundane tasks, right? But today, either you've had to build out a massive SOC in order to support that, or you're outsourcing it. What does that look like, right? Are you still having to do that? Can you bring that in house, and you just have the team to offset it?
Are those skills being built into the outsourced SOC? I just think about the potential to truly transform a lot of the services that we're used to today. I think, like you said, six to twelve months or two years from now, it'll be interesting to see how quickly this has evolved, even in, like you said, the cybersecurity services.
Yeah, I think the other interesting part of this is to think about a lot of the things that we do today in those SOCs or in IT operations, and, in the spirit of all things being connected together, the AI bot version of this: being able to have one service talk to another service, or one app talk to another application, and keep each other up to date.
You know, our generative AI buddies, or whoever, are going to be out there keeping everything up to date. It also creates a situation, maybe, of more of a cybersecurity utility that you will be investing in, that may come from your partners, things that you don't have to do anymore. We used to build, I'm going to get back to these old analogies, but we used to build water wheels so that we could generate a little bit of electricity at our house. We don't do that anymore. We plug into the grid. And I wonder if AI is going to help us get to more of that cybersecurity utility over time.
I hope so. I think there's opportunity. I think there's a ton of opportunity. I think there's an opportunity to upskill our teams. And I think there's even going to be some utility in securing just the workforce. If you think about that AI buddy, right, and think about just some of the risks that we have today. I think some of our partners are probably already moving in this direction. One of the risks we face in healthcare is just simple mistakes that people make, right? Hey, I sent an email to the wrong person, and the future is that AI buddy saying, hey, this looks like the wrong email address.
And I think we're already moving in this direction, right? But to be able to expand that even further, I think it is going to help us in many aspects. And not just the cybersecurity folks; when it comes to security, really everyone in the workforce. Yeah. Wouldn't it be awesome if you had an assistant who was saying things to you like, I know you're trying to save this and you're trying to put it in that shared folder, but a spreadsheet with that kind of data can't go into that shared folder. I would suggest that you put it here. Or give you some other direction, right? Help make it easy to do the right thing, and make it more difficult to do the wrong thing.
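As a toy illustration of that assistant idea, here is a hedged sketch: a check that runs before a file is saved, looks for sensitive-looking content (an SSN-style pattern stands in for real detectors), and suggests an approved folder instead. The pattern, folder names, and message are all invented for this example; they don't reflect any real DLP product's behavior.

```python
import re

# Toy sketch of the "helpful assistant" idea above: before a file lands
# in a shared folder, check whether it looks like it contains sensitive
# data and, if the destination isn't cleared for that data, suggest one
# that is. Patterns and folder names are hypothetical.

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # stand-in for real detectors
APPROVED_FOLDERS = {"secure-phi"}                    # folders cleared for sensitive data

def check_save(content, destination):
    """Return (allowed, message) for a proposed save."""
    if SSN_PATTERN.search(content) and destination not in APPROVED_FOLDERS:
        return False, (
            "This file looks like it contains sensitive data. "
            f"Try saving it to 'secure-phi' instead of '{destination}'."
        )
    return True, "OK to save."

allowed, message = check_save("Member SSN: 123-45-6789", "team-shared")
print(allowed, message)
```

The point of the design is exactly what's said above: the check makes the right thing easy (it names an approved destination) rather than just blocking the wrong one.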
Because often, like you said, the wrong things are done completely unintentionally. People are not trying to make mistakes with malice. They just make mistakes. They're busy, they're trying to get things done, right? I know there's a bunch of other stuff for us to talk about. So I'm going to ask you, what do you want to talk about?
Is there something that's been on your mind, since we've had this planned for a while? Is there something you wanted to talk about, or ask the audience to think about? Yeah. I think what I'm interested in, and one of kind of my biggest challenges in coming around to the confidence to help the organization pursue this, is: what do our cybersecurity teams need to be doing differently, or what should they be thinking about that we aren't yet? It's the unknown unknowns, I think, that are plaguing me in leading the cybersecurity aspect of this in my organization. What am I not thinking about? I find myself asking that question in all of the discussions that I've been part of around generative AI and artificial intelligence so far. It seems like we're doing what everyone else is doing, but it doesn't seem like enough for some reason. Is there a blind spot? What am I missing here? Yeah. I don't know. I'm curious if you've had any conversations that were ahas for you, of, oh, this is different, we really need to come around this. So, there are these things that happen, and this is where I usually get the aha moment: the bad guys are super innovative and creative, and they think about ways to use this technology that you don't see until they use it, and you're like, oh, that's terrible. Or, that stinks. I happened to be in Vegas last year when the breaches happened with the big casinos. There were a lot of people who were calling me going, oh, you're in Vegas, I know what's happening. Yeah. But no, I didn't have anything to do with it. But I did hear some of the experiences that customers had while they were in the middle of it. And this idea of social engineering the help desk, and a lot of this had to do with having great databases of information that they bought off the dark web. I'm assuming here, but being able to answer those questions and sound like the person they were pretending to be on the phone, and being able to get past those folks.
Have you had some of that stuff happen to your help desk? Yeah, and that's the other thing too, the copycat thing. Once one of these works, then every cyber thug says, that'd be a good idea, we should get in that business. I know, and it's funny you say that, the copycat thing. And we can cut this, but my best friend works in the film industry, and she and I have talked about writing a movie together, writing like a cybersecurity movie. And as exciting as that sounds, I'm also terrified, for the reason of copycats, right? I don't want to give away to anyone who maybe isn't there yet: hey, this really terrible thing can happen, and by the way, it's really easy and it's incredibly impactful. It's so true. And then I think, yeah, sadly, we're gonna see a lot more of that going forward. These things that you think about as good plots for movies or books.
This is also maybe some of the reason that we don't talk to each other as much as we probably should when it comes to collaborating. If you're having a cyber incident or a bad experience, sometimes it's really hard for an organization to talk to other organizations about that, because some of it is just, whatever, the embarrassment, the legal exposure, whatever the case may be. But you also may not want to talk about it publicly, because then it spurs this whole new environment that you had no intention of helping the bad guys with. And that's not unique to cyber, right? That is true of all types of crimes, obviously. But yeah, that's certainly, I would say, part of it, that publicity. The benefit of learning about the challenges that healthcare has related to cybersecurity: it's important that we're talking about it, it's important that we're doing the right things, and also it simultaneously scares me that we're saying, hey, yeah, it's in the healthcare environment, right? Yeah. I think the opportunity to get together and, in trusted circles, have those conversations makes a big difference. It's hard to stand up at a conference and talk about some of these things to a bunch of people in the room that you don't really know, or you don't really know what's going on there.
Yeah. And that, I think, is something that is so special, at least in my experience in the healthcare cybersecurity community: those circles, one, they exist. There are many of them, and they're strong, right? They are. It's full of people who want to help each other. And I think that's really cool.
That's a pretty amazing thing. Yeah, I know. I love that too. Okay. So maybe that'll be the final word. All right. Thanks for being on today. I really do appreciate it. It was a great time. You bet. Thanks for having me, and I hope to come back. You definitely will. I'll talk to you soon. Thanks.
📍 That's a wrap for this episode of Unhack the Podcast. Do me a favor and share this episode with your peers. And by the way, your feedback matters, so please subscribe, rate, and leave a review wherever you listen to podcasts. A huge shout-out to our sponsors Fortified Health and CrowdStrike for supporting our mission to transform healthcare one connection at a time.
Find out more about their work at ThisWeekHealth.com/Partners. I'm your host, Drex DeFord. Thanks so much for spending some time with me today. And that's it for Unhack the Podcast. As always, stay a little paranoid, and I will see you around campus.