Sarah Clarke, a technology governance specialist, joins Jo to discuss the complexities of AI governance and the critical need for organisations to enhance their AI literacy. As the landscape of artificial intelligence evolves rapidly, Sarah highlights the importance of understanding how AI systems function and the potential risks they pose. She emphasises that while generative AI tools like ChatGPT offer exciting possibilities, they also require careful consideration regarding data privacy, ethical implications, and the human oversight necessary to ensure their safe implementation. Sarah shares insights from her extensive experience in governance, risk, and compliance, advocating for a proactive approach to managing AI technologies within organisations. The conversation also touches on the gender dynamics in tech and the need for inclusive communities to support diverse voices in the AI field.
Sarah Clarke:And the other side is, I don't want anything to do with it.
Sarah Clarke:It's too complicated.
Sarah Clarke:I've got a day job, I just need to do.
Sarah Clarke:I'm being told this is going to make things better and faster.
Sarah Clarke:I have no idea how.
Sarah Clarke:I've got routines, I've got data.
Sarah Clarke:I understand.
Sarah Clarke:And now this is doing something different.
Sarah Clarke:That is, I think, probably fairly well representative of what's happening inside organisations as well, especially smaller organisations.
Sarah Clarke:You're listening to Women with AI FM.
Jo:Hello and welcome to Women with AI, the podcast dedicated to amplifying the voices and perspectives of women working in the field of artificial intelligence.
Jo:My guest today is a technology governance specialist who focuses on simplifying the requirements for AI governance.
Jo:So I'm really looking forward to her explaining what all that means to me and you.
Jo:But before we get into that, let me tell you a little bit more about her.
Jo:Sarah Clarke is a Senior Advisor to the World Ethical Data Foundation, a guest lecturer on vendor governance for Manchester University, and has been an independent consultant for the last 15 years across a range of sectors.
Jo:She also volunteers with For Humanity and the Institute of Electrical and Electronics Engineers.
Jo:Sarah has also been listed as one of the most influential women in UK tech for the last three years.
Jo:So I'm delighted to be speaking with her today.
Jo:Sarah Clarke, welcome to Women with AI.
Sarah Clarke:Thank you.
Sarah Clarke:It's so nice to be here, Jo.
Sarah Clarke:I really enjoyed the chats running up to this.
Sarah Clarke:I'm pleased to have another one.
Jo:Oh, brilliant.
Jo:It's great to have you here.
Jo:I mean, for anyone that doesn't know you and what you do, can you please sort of start off by telling us.
Jo:Yeah.
Jo:Who you are, what you do and your journey into working with AI?
Sarah Clarke:Yeah, sure.
Sarah Clarke:I've had quite a long circuitous route in this direction.
Sarah Clarke:I started a couple of decades ago looking at computing in general and then cyber security, and then I worked with various businesses at various levels doing governance-related activity for that: trying to make sure we could get across everything we needed to do, understanding what the standards and regulations were, what testing we could do, etc.
Sarah Clarke:And I ended up working for myself after working for financial services for quite a long time.
Sarah Clarke:And the focus of what I do in that sort of governance, risk and compliance space has always been really respecting the fact that everybody's really busy.
Sarah Clarke:We need to uplift everyone's skills to be able to help us and we need to design really accessible ways to do what we do, to gather the information we need and to interpret that back to people about what are the rules that need to be followed, but also where we can be pragmatic with some of this stuff.
Sarah Clarke:And when it comes to AI, I've always worked around things like big data projects and data analytics projects, essentially just trying to get value out of lots and lots of complicated data, whether that's monetizing it for various reasons or trying to get intelligence out of it.
Sarah Clarke:And I could see, with the explosion of generative AI, that this was going to be an enormous challenge because of the nature of the beast. And I mean, I've been focusing on this for the last four years, from before ChatGPT landed in the way it did.
Sarah Clarke:But we are seeing a sea change in what we need to do in terms of helping people to understand and in terms of needing to govern things.
Sarah Clarke:So that's a whistle stop tour of where I am now, but with a big dose of volunteering for organisations like the World Ethical Data Foundation, For Humanity and the IEEE, because you can't do this in isolation. That's another big deal.
Sarah Clarke:We need to talk to other people, we need to build our communities and networks because we can't know everything.
Jo:Yeah, fantastic.
Jo:Yeah, so you were a former director of For Humanity, and that's how I got in touch with you, because Jo Stansfield, one of my previous guests, recommended I speak to you.
Jo:And yeah, as you said, we've had some chats leading up to this and there's so much to cover.
Jo:And before giving anything away, I mean, maybe we could jump straight into the work you do and how we can involve and engage people, to make sure that if they're using AI, they understand how it works.
Jo:I mean, does anyone know how it works?
Sarah Clarke:I think that's a really good question.
Sarah Clarke:I think applying data and computing power at scale to these latest generative AI models, like ChatGPT and Claude and other systems that people enjoy using, has led to an explosion of usage, because of the way it's been released at scale quite quickly.
Sarah Clarke:And no, I don't think many people really understand how it works.
Sarah Clarke:And the imperative with a system that's that complex, where even the creators are not quite sure how they can refine it and govern it and manage it, is to show people what's happening in the context of their day jobs.
Sarah Clarke:And that really is the focus of what I do: getting a few vital pieces of information to find out who needs what kind of help in an organisation and what might be a priority.
Sarah Clarke:What are we maybe planning to do at scale?
Sarah Clarke:What are we maybe planning to do with lots of sensitive data?
Sarah Clarke:What are we maybe planning to point at lots of vulnerable people? And playing out scenarios with people to try and identify any challenges, any risks, and maybe discover some great opportunities in that.
Sarah Clarke:Because you have to bring in the people who will be using it in anger: the people in the customer service function, the people in the IT function if it's for IT, the people in the retail function. You need to show them what it means to them.
Sarah Clarke:They are the people who understand the 'so what' questions. When you say this is amazing, this can do this, they'll know that, okay, if we 10x the amount of throughput for conversations with customers, we can get through what we're doing a lot quicker.
Sarah Clarke:They really like it.
Sarah Clarke:Do we have people downstream if there's an extra question that needs a human?
Sarah Clarke:Do we have the capacity in our systems?
Sarah Clarke:Will it talk to our old systems?
Sarah Clarke:These are all elements of complexity that we tend to only work out once things are in and working with something very novel.
Sarah Clarke:And so my work involves talking to people, understanding what they're planning to do, if they have a plan for what they're going to do as opposed to just using some new generative AI, and trying to pull some of those considerations to the front end of that conversation: you know, are your systems up to it?
Sarah Clarke:Do your people need training?
Sarah Clarke:Have we got capacity?
Sarah Clarke:Do we need to think about data protection?
Sarah Clarke:Do we need to think about cyber security, all those kind of things?
Jo:Yeah, because I guess people don't know.
Jo:I mean, I know it's everywhere at the moment, isn't it, AI, and I kind of feel like I'm trying to find out what it means and how best to use it.
Jo:And you know, I have been using ChatGPT and trying out different versions.
Jo:And I've learned, but only from speaking to people like you, that, you know, they hallucinate, or they don't know what they're doing, or they're starting to make things up.
Jo:And there was something in the news in December about, you know, the government using it to look at benefits and that kind of thing.
Jo:And it's like, oh, this is going to help, it's going to make everything easier, but it's all right because there's a person checking it at the end.
Jo:But if there isn't a person checking it, or if the person thinks, oh, actually this is a really great system, it's saving me loads of time, it's obviously correct.
Jo:I'll just, you know, take it on face value.
Jo:Yeah.
Jo:How do people know that they still need to be the human in the loop and looking into it?
Sarah Clarke:I think you just nailed something there, Jo.
Sarah Clarke:Honestly, it is a very different job working things out with critical thought in a system you understand very well.
Sarah Clarke:Like if perhaps you are a benefits claim handler, you understand what happens to the people involved, where it can get complex, which other agencies and functions you need to contact to get supplementary information, that kind of thing.
Sarah Clarke:If you have an AI that is taking chosen elements of historical similar situations and spitting out a recommendation, or it's been trained in a quite granular way to follow your own systems and processes, there's still going to be lots of exceptions.
Sarah Clarke:And even if there's somebody in the loop to review it, reviewing something that is convincingly well articulated and formulated and trying to pick out where there might be a gotcha or how to justify divergence from what a system recommends is a different thing to working in a system or process that you're really, really familiar with.
Sarah Clarke:You've got muscle memory for it.
Sarah Clarke:You really, really grasp where the pressures are, where the blockages are, where the bottlenecks are.
Sarah Clarke:One of the other illustrative examples is not in the more novel end of AI, it's more about rules based systems and algorithms and machine learning historically.
Sarah Clarke:And describing that difference is really difficult as well to help people understand different kinds of technology.
Sarah Clarke:In healthcare claims reviews, there were situations where there were lots of specialist medics who were reviewing the algorithmic recommendations for care claims, but they only had maybe one to two minutes to review each case.
Sarah Clarke:And there wasn't necessarily a very nimble or well respected feedback loop for them to then push for change where they saw anomalies that needed more investigation, or other things that might cause some harm down the line.
Sarah Clarke:So it's as much about the surrounding people and processes as opposed to just the core system.
Sarah Clarke:So in that way I don't think it differs from a lot of historical tech change.
Sarah Clarke:And the people who, as I said, use it in anger, the people who understand their day job, have that muscle memory, know who to call, know who knows everything in the organisation.
Sarah Clarke:They're your friends in this.
Sarah Clarke:Yeah.
Jo:So I guess it's like you don't have to be the technical expert to be using AI, but you need to know that there's someone in your company, or someone that's got your back, who is going to be able to be that person.
Jo:And also, we don't want to feel stupid, do we?
Jo:Because you don't want to be, you know, accepting the data and thinking, oh, that doesn't sound right, but it must be right because it's the computer telling me.
Jo:The computer says it, you know, AI is telling me it, so it has to be right.
Jo:So I think that's something, isn't it?
Jo:To like, if we're engaging, if you're engaging staff or getting people to use it, they need to understand that they can ask questions.
Sarah Clarke:Oh, yeah, yeah, completely.
Sarah Clarke:I mean, there is something that's acknowledged and known as automation bias that, you know, because the computer says it, it must be right.
Sarah Clarke:And I think there's a big element of exactly what you said, of not wanting to look stupid, or not having someone who can help you translate your concerns into something that can sort of end up being a help desk ticket or something like that.
Sarah Clarke:So we have a whole range of functions that need to have their skills uplifted to be able to interpret what's happening with the usage of different AI and machine learning.
Sarah Clarke:The EU AI Act, which came into application in February 2025, includes a need to uplift AI literacy in the organisation.
Sarah Clarke:It has some caveats and conditions for who needs to do what and why.
Sarah Clarke:And a lot of people are debating, you know, what exactly that requires to show that you've done what you need to do.
Sarah Clarke:And it doesn't directly apply to us in the UK necessarily, because we're not in the EU anymore.
Sarah Clarke:But if we're planning on using data belonging to EU citizens, or we're planning on pointing things at EU citizens, or we're doing any of our development in the eu, then it will apply to a certain extent and we need to meet people where they are.
Sarah Clarke:I know from speaking to, you know, friends, family, neighbours, they tend to run when they see me coming, because I tend to use them as benchmarks for: have I disappeared too far down a rabbit hole for you to even see me at this stage?
Sarah Clarke:But they're all roughly split between: wow, this is cool.
Sarah Clarke:I love what I can do with these models that I can get on my phone and my computer, and look at this and look at that and look at the other.
Sarah Clarke:So that's amazing.
Sarah Clarke:But I'm the one who's, you know, a bit of the Pollyanna going, yeah, but are you sure that's right?
Sarah Clarke:Can you check a source on that?
Sarah Clarke:Is that actually what you need or is it just interesting?
Sarah Clarke:And the other side is, I don't want anything to do with it, it's too complicated.
Sarah Clarke:I've got a day job I just need to do.
Sarah Clarke:I'm being told this is going to make things better and faster.
Sarah Clarke:I have no idea how.
Sarah Clarke:I've got routines, I've got data, I understand.
Sarah Clarke:And now this is doing something different.
Sarah Clarke:Just have someone checking in on me there.
Sarah Clarke:So that is, I think, probably fairly representative of what's happening inside organisations as well, especially smaller organisations.
Jo:Yeah, I mean it is exciting, isn't it, but it's also scary.
Jo:I mean, you're the technology governance specialist.
Jo:How do you work out which sort of cases are more risky or sort of more ethically complex?
Sarah Clarke:That's a good question because there is work going on in the field of technology ethics to try and define what a good ethicist looks like, what a good practice of ethics looks like for having conversations about this.
Sarah Clarke:And we've got to respect the fact that there isn't always an awful lot of time when someone's looking to buy something or develop something or change something in a business.
Sarah Clarke:So having a long drawn out process to explore all the potential issues isn't really an option from my point of view.
Sarah Clarke:I discovered, from looking after large populations of, say, vendors and changes, trying to work out which need a deeper dive, that you've got some questions you can ask up front to channel out the things that are not really an issue.
Sarah Clarke:It just looks like a great idea.
Sarah Clarke:The opportunities with it just look really simple, straightforward.
Sarah Clarke:Let's suck it and see, let's have a play.
Sarah Clarke:And then there are things on a continuum up to: we really don't want to plug that into our database and feed it all our data without having thought this through a bit harder.
Sarah Clarke:And then onwards to: if we're going to make benefits claim decisions, or legal and justice system kind of decisions, then it's a whole different kettle of fish.
Sarah Clarke:You can, in not many minutes of questions, channel people to the right kind of specialists and work out what that workload is.
Sarah Clarke:And that's where I focus.
Sarah Clarke:I call it a triage process, a trimming process to cut it down to size, and that creates a database.
Sarah Clarke:It creates a record of where you've got different things you're planning to do that might hit different thresholds, maybe a legal and regulatory red line: we want to use personal data and we haven't really thought that through from a GDPR point of view.
Sarah Clarke:But also, from an ethical complexity point of view, you can boil down some early indicators that things may become ethically complex.
Sarah Clarke:It certainly doesn't help anyone to make it too simplistic, but it can act as a gateway conversation to work out:
Sarah Clarke:Do we need to maybe pull someone in to have a deep dive?
Sarah Clarke:Do we need a Jo to come in and have a chat to us about what the sort of sideways, upstream, downstream implications of this might be?
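To make that triage idea concrete, here is a minimal sketch of what such an up-front questionnaire might look like in code. The questions, weights, thresholds and red-line areas are hypothetical illustrations for this episode, not Sarah's actual tool:

```python
# A toy triage: a few up-front answers channel a proposed AI use case to
# "go and play", "needs review", or "specialist deep dive".
# Questions, weights and red lines below are hypothetical examples.

RED_LINE_AREAS = {"benefits decisions", "legal or justice decisions"}

QUESTIONS = [
    # (key, question asked of the proposer, risk weight)
    ("uses_personal_data", "Will it process personal data?", 2),
    ("sensitive_data", "Will it touch confidential or special-category data?", 3),
    ("vulnerable_users", "Will it be pointed at vulnerable people?", 3),
    ("at_scale", "Is it planned to run at scale?", 1),
    ("system_integration", "Does it need to plug into core systems?", 2),
]

def triage(answers: dict, use_area: str) -> str:
    """Channel a proposal from quick yes/no answers and its use area."""
    if use_area in RED_LINE_AREAS:
        return "specialist deep dive"   # regulatory red line: escalate immediately
    score = sum(weight for key, _question, weight in QUESTIONS if answers.get(key))
    if score >= 5:
        return "needs review"           # e.g. DPIA, security, procurement checks
    return "go and play"                # low risk: experiment, but keep a record

# Example: a chatbot pilot using sensitive personal data at scale.
answers = {"uses_personal_data": True, "sensitive_data": True, "at_scale": True}
print(triage(answers, "customer service"))   # -> "needs review"
```

The point is the shape, not the numbers: a handful of cheap questions creates a record and routes the small minority of genuinely risky proposals to specialists.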
Jo:Yeah, because I think, I mean, I find that I'm asking that quite a lot now.
Jo:I keep reading all these articles and, you know, OpenAI is doing this or Sora's been released or something else, and I'm like, and so what, you know, what does it mean?
Jo:And like, where has it got the data from?
Jo:What is it and why?
Jo:Why are we relying on it?
Jo:You know, is it to save time?
Jo:Is it because it's exciting?
Jo:Is it because we need, you know, and there's lots of cases where obviously it is needed and it is helpful, especially, you know, like in helping with medical diagnosis.
Jo:But I think you still need to have the human there.
Jo:So is it all about the data, do you think?
Jo:I know you said you've got your background in data.
Jo:And is that where the issues lie?
Jo:Is that where the bias has come from or the mistakes?
Jo:Or is it because we just don't know if AI is learning from that or, you know, it's a prediction tool, isn't it?
Jo:So is it just predicting things wrongly, or how do you work that out?
Sarah Clarke:All big questions, definitely.
Sarah Clarke:We're still trying to bottom out what's fair in terms of the use of training data.
Sarah Clarke:So that's one big issue that's still going on.
Sarah Clarke:We've still got copyright, and potentially an evolution of copyright and data protection law, to handle what's gone on with the big models ingesting most of the Internet without really asking, in most cases.
Sarah Clarke:Then in terms of the actual operation of the more novel types of AI, the generative AI, the chatbots, it's still a ticklish conversation about whether you should be giving it access to all your emails or your database.
Sarah Clarke:So that gets into technical questions of how you have configured the link to the vendor, what we call an API: the channel that allows you to interrogate the model and sucks in the answers.
Sarah Clarke:You can configure that either to memorise the conversation or not to memorise the conversation.
Sarah Clarke:And there's then training and tuning these things in house, which can involve lots of company confidential as well as personal data.
Sarah Clarke:And those kind of settings and capabilities have been changing almost every week in the last two years for generative AI.
Sarah Clarke:So it's a massive challenge.
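As a concrete illustration of those configuration choices, here is a sketch of calling a hypothetical vendor chat API with retention and training use switched off. The endpoint, field names, and the idea that single flags control these behaviours are assumptions for illustration; real vendors expose different, and frequently changing, controls:

```python
import requests

# Hypothetical vendor endpoint -- check your actual vendor's documented
# data-retention and training opt-out controls before relying on them.
API_URL = "https://api.example-vendor.com/v1/chat"

payload = {
    "model": "example-model",
    "messages": [{"role": "user", "content": "Summarise this policy document..."}],
    "store_conversation": False,   # assumed flag: do not retain the exchange
    "allow_training_use": False,   # assumed flag: opt out of training on our data
}

response = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": "Bearer <your-api-key>"},
    timeout=30,
)
print(response.json())
```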
Sarah Clarke:I think the old principles still apply for looking at the third parties involved and is it appropriate to share that data with them?
Sarah Clarke:Do we understand how much data and what we're sharing?
Sarah Clarke:But there are big unanswered questions.
Sarah Clarke:One of the people I work with at the World Ethical Data Foundation is Carey Lening.
Sarah Clarke:She's an excellent specialist in data protection and she's been doing a deep dive on can we forget personal data that shouldn't be in models?
Sarah Clarke:Can we actually stop it spitting that out when you make a query about someone?
Sarah Clarke:Because we've had incidents like a crime reporter who covered a lot of really nasty cases, who ended up being associated with those cases.
Sarah Clarke:So he was being surfaced in data as associated with child abuse and domestic abuse.
Sarah Clarke:And that's because of the way the model puts information together.
Sarah Clarke:So it looks at instances of words, or chunks of words, tokens, that appear frequently together and are a good match for the likely next piece of a sentence.
Sarah Clarke:So this guy came up in loads and loads of stories on the Internet about his reporting about these things.
Sarah Clarke:And he was looking at how can I get that association fixed?
Sarah Clarke:And we don't really have an answer to that now, because of the nature of the training of these things: you can't just pluck it out, because it isn't actually just a fact or a line item in a generative AI model.
Sarah Clarke:It's an association between different words and ideas that it's producing.
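A deliberately tiny illustration of why that is: in the sketch below, a toy 'model' just counts which word follows which in a made-up corpus. Real models learn vast statistical associations rather than literal counts, but the sketch shows how a reporter's name ends up linked to the crimes he covered without any single deletable fact being stored:

```python
from collections import Counter, defaultdict

# Made-up corpus: a reporter's coverage repeatedly co-occurs with crime terms.
corpus = (
    "reporter covers abuse case. reporter covers fraud case. "
    "reporter covers abuse trial."
).split()

# Count which token tends to follow which -- a crude stand-in for the
# associations a language model learns at vast scale.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

# The harmful link is not a record you can delete; it emerges from the counts.
print(following["reporter"].most_common(1))   # [('covers', 3)]
print(following["covers"].most_common(1))     # [('abuse', 2)]
```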
Sarah Clarke:So you move to the front end, to things like having a final rule that says if this name comes out or this fact comes out, don't produce it in the results.
Sarah Clarke:So it's like a content moderation and filtering element.
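A toy version of the kind of front-end rule Sarah describes: a post-generation filter that redacts flagged names before output reaches the user. The blocklist and redaction behaviour are illustrative only; production moderation layers are considerably more sophisticated:

```python
import re

# Names the model keeps wrongly surfacing; maintained by humans. Hypothetical.
BLOCKED_ASSOCIATIONS = {"Jane Example", "John Example"}

def filter_output(text: str) -> str:
    """Redact blocked names from model output before it is shown."""
    for name in BLOCKED_ASSOCIATIONS:
        text = re.sub(re.escape(name), "[redacted]", text, flags=re.IGNORECASE)
    return text

print(filter_output("Reports suggest Jane Example was linked to the case."))
# -> "Reports suggest [redacted] was linked to the case."
```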
Sarah Clarke:There is a retraining aspect to this.
Sarah Clarke:Retraining is a massive undertaking, but you can exclude things if you retrain a model.
Sarah Clarke:You can also tune models to suppress or reduce the prominence of those connections between things.
Sarah Clarke:But none of those things are cheap or easy.
Sarah Clarke:And none of them were really baked into considerations when these things were initially released.
Sarah Clarke:Yeah.
Jo:And there's so many different tools.
Jo:Even if you did that for one, how does that then get picked up by the others?
Jo:I mean, this is when it starts to blow my mind.
Sarah Clarke:Yeah.
Sarah Clarke:The layers of considerations of what you can do to change things if they're not really as they should be, are very, very complex.
Sarah Clarke:And there's a systems thinking kind of perspective on this: once things reach a certain level of complexity, if you don't have the means to change them directly, then you have to manage them in terms of output.
Sarah Clarke:So that brings us back to if we can't access the people or the means to change something, if it's not ideal, or if it's not as it should be, then we should decide whether or not it's appropriate for a given usage.
Sarah Clarke:So it's translating what potentially could happen, with a lot of validation activity.
Sarah Clarke:People will be sitting there looking at a range of different responses for different scenarios, asking: is this a good response, an average response, a bad response, a dangerous response?
Sarah Clarke:Those people, the only people who can really do that work are the people who use it in anger, the people who understand what good looks like.
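A sketch of the kind of record those domain experts might keep while grading responses. The rating scale mirrors the one Sarah lists; the CSV structure and field names are hypothetical:

```python
import csv
from datetime import date

# Ratings mirror the scale mentioned: good / average / bad / dangerous.
VALID_RATINGS = {"good", "average", "bad", "dangerous"}

def log_validation(path, scenario, response, rating, reviewer):
    """Append one human review of a model response to a CSV audit trail."""
    if rating not in VALID_RATINGS:
        raise ValueError(f"unknown rating: {rating}")
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([date.today(), reviewer, scenario, response, rating])

log_validation("reviews.csv", "benefits query #12",
               "You are not eligible because...", "dangerous", "claims_team_A")
```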
Sarah Clarke:So I'm all for people playing with these things, but the time when I started to feel around the edges of them is when I used them in areas in which I was really specialist.
Sarah Clarke:For example, I asked different models to produce a privacy notice: the thing you need to have on your website to say how you're processing data, what people's rights are, etc.
Sarah Clarke:And I looked at it and I thought that's generally acceptable.
Sarah Clarke:But there were about three things in there that made it non-compliant, which would be an issue if a regulator looked at it.
Sarah Clarke:And there's nothing in the rules base that would correct for that.
Sarah Clarke:But it wouldn't have been a massive amount of work for me to take it out of the system and then adjust it.
Sarah Clarke:But I could spot that and I could do that because it's my area of expertise.
Sarah Clarke:So we need to put people in proximity to these tools in things that they really understand very, very well.
Sarah Clarke:And that actually helps to take a little bit of the nervousness and fear off it, because then it starts to give you a measure of the fact we're not dealing with something intelligent, we're not dealing with something that has intent, we're dealing with something that handles a sort of general population view of what you do for a living, unless you've got something designed bespoke for work.
Sarah Clarke:And that's a different conversation of requirements, definition and providing data to help it learn.
Jo:Yeah, Wow.
Jo:I mean, because AI isn't going anywhere, not at the moment anyway.
Jo:And as you say, there's so much that needs to be fed into it, or looked at, and we need experts.
Jo:So do you think it's difficult getting people to use it?
Jo:Like, are you finding it hard to encourage people, or what do you do to make people less fearful so we can start trusting it, you being the expert?
Sarah Clarke:It's always that trade off.
Sarah Clarke:Are we teaching people to trust something because of the outcomes we're hoping for, or are we asking people to trust something because it's trustworthy?
Sarah Clarke:So there's two different aspects and you can only really answer the second one when people are familiar enough to have the conversation.
Sarah Clarke:So you can't put the cart before the horse if you are not respecting the complexity of someone's day job.
Sarah Clarke:When I talk about complex systems, I tend to mean the combination of technology, data, processes, people, culture, rather than just technology systems.
Sarah Clarke:If people can see how it might help them, if you can support people to have the language and experience to articulate how it may or may not help them, that's the point at which you can start to foster trust.
Sarah Clarke:If it's a case of we're all going to use ChatGPT now because it's the latest thing and we want to tell our client base that we're doing the latest thing to 10x our productivity.
Sarah Clarke:That doesn't mean that it's going to be easy to integrate or that people will get the value out of it you're hoping for.
Sarah Clarke:So it's not just cosy chats and putting people in a room, it's having some metrics for that as well.
Sarah Clarke:It's implementing clarity with all your stakeholders about what they're hoping to achieve, and creating a language that cuts across all those groups so we can actually have that conversation, probably then feeding into something like a code of ethics.
Sarah Clarke:Because what we've seen in some organisations, well, look at the situation that's come up with Character AI recently, being accused of having chatbots that have suggested children harm themselves.
Sarah Clarke:And that was completely predictable, because these systems depend on the tuning, and how much control you've got over the tuning and the configuration.
Sarah Clarke:There's something called a temperature setting, which is: how much should I gear this towards speculative conversation and filling gaps in conversation, and how much should I just go for absolutely rigid, objective responses?
Sarah Clarke:And it's set to an average for a lot of these retail things.
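Temperature is a real sampling parameter: the model's next-token scores are divided by the temperature before being turned into probabilities, so higher values flatten the distribution (more speculative output) and values near zero make it almost deterministic. A minimal sketch with made-up scores:

```python
import math
import random

def sample_next_token(logits: dict, temperature: float) -> str:
    """Sample one token: low temperature -> rigid, high -> speculative."""
    scaled = {tok: score / temperature for tok, score in logits.items()}
    total = sum(math.exp(s) for s in scaled.values())
    probs = {tok: math.exp(s) / total for tok, s in scaled.items()}
    return random.choices(list(probs), weights=list(probs.values()))[0]

logits = {"yes": 2.0, "maybe": 1.0, "no": 0.5}     # made-up next-token scores
print(sample_next_token(logits, temperature=0.2))  # almost always "yes"
print(sample_next_token(logits, temperature=1.5))  # noticeably more varied
```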
Sarah Clarke:And what we need to do is learn some lessons from these things and think about the downstream impacts a little bit earlier in the process.
Sarah Clarke:There's been a lot of creating markets, dominating markets, trying to get into businesses and mopping up later, and we're trying to front-end a little bit more of that consideration now.
Sarah Clarke:There's at least one or two court cases on that front.
Sarah Clarke:I'll just mention another one that was heartbreaking recently, and it wasn't generative AI, it was historical machine learning and robotics.
Sarah Clarke:There was a company that had a robot called Moxie.
Sarah Clarke:Now, Moxie was actually aimed in part at neurodiverse children, to help them create bonds and have a friend.
Sarah Clarke:Because of commercial issues with the company, Moxie is going to stop working.
Sarah Clarke:There's going to be a hard stop to it working, and what it cost people is the least of it.
Sarah Clarke:They cost hundreds of dollars in a lot of cases, and there's a lot of kids who did form genuine attachments with these things.
Sarah Clarke:And the best that can be offered currently is advice on how to talk to the children about their friend ceasing to work anymore, effectively dying.
Sarah Clarke:And it's trying to integrate some of those considerations, like if you're creating something where it's going to replace a worker or modify a working relationship, it's going to replace some kind of other relationship or modify some other relationship.
Sarah Clarke:Are we considering what happens downstream?
Sarah Clarke:Are we considering the dependence we're creating?
Sarah Clarke:Can we roll back if that turns out not to be a great idea?
Sarah Clarke:Because you may remember there have been cycles of outsourcing and TUPE-ing people across to other organisations when new technologies come in, like cloud technologies and outsourcing IT support and that kind of thing, where some decisions were made in quite a lot of haste, and it gets a bit niggly trying to find those people and pull them back in, because you can't recruit for that deep local knowledge once you've lost it in a hurry.
Sarah Clarke:So it's a little bit more forethought for some of those things.
Jo:So what I'm sort of taking away is: people need to be upskilling themselves and learning about it, but also companies need to be valuing the knowledge that their human staff have got.
Sarah Clarke:Maybe we haven't done a great job with that recently as well.
Sarah Clarke:I think I get a sense that we're delivering these things into lots of resource constrained systems and lots of systems under a lot of pressure that have been pushed to be very efficient, to operate with as few people as possible.
Sarah Clarke:That doesn't give a lot of head space to consider how to reconfigure operations or work out how to use something new.
Sarah Clarke:We're going to have to be a bit generous with letting people have more time out of their day to work out how to maybe take away collaborative working benefits from these tools.
Sarah Clarke:And it isn't down to every staff member to try and up their own AI literacy.
Sarah Clarke:It's absolutely something that organisations have to take on board as something they're accountable for.
Sarah Clarke:It pays dividends.
Sarah Clarke:Obviously you're going to spot gotchas early, you're going to spot good opportunities early, you're going to spot things that you'll be really, really glad you swerved.
Sarah Clarke:But as a second follower or a second tranche adopter, you may sort of yield massively fantastic benefits.
Sarah Clarke:And it is that trade off of different ways to work with this more novel aspect.
Sarah Clarke:We keep forgetting that there's been machine learning and algorithms around for decades.
Sarah Clarke:We're always dealing with search.
Sarah Clarke:We've got preference engines and search engine optimization sitting in the background.
Sarah Clarke:We've got predictive pricing, things like Uber surge pricing.
Sarah Clarke:That's all AI driven.
Sarah Clarke:We've always had things that decide, you know, should we go to the front of the queue for customer services or not.
Sarah Clarke:We've had rudimentary chatbots for a long while.
Sarah Clarke:We've had Alexa in the house.
Sarah Clarke:I think in some ways it's quite useful to think about what we're trying to do as a souped up Alexa where it's interpreting what you might mean rather than having a very carefully structured voice prompt to do something.
Sarah Clarke:And I think a lot of this can feel like magic, but we need to really shave off some of the marketing and let people dunk themselves in it a little bit.
Sarah Clarke:Give them space and time and support to try it out, ask questions.
Sarah Clarke:Yeah, because it's a bell curve very much.
Sarah Clarke:I was blown away by the capabilities.
Sarah Clarke:And then as I said, I filtered through quite a deep and broad range of things that I considered it might be useful for in my day job in different ways.
Sarah Clarke:And I talked to technical specialists who have much deeper understanding of the architecture than I do.
Sarah Clarke:Because you don't do this job in isolation; you find good people and surround yourself with people who'll pick up the phone.
Sarah Clarke:And I have gone in waves of new capabilities coming out and thinking, ah, that's good for that.
Sarah Clarke:But I use about five or six different models for different purposes, and I splice together outputs from them, and just keeping track of all that in itself can be quite overwhelming.
Sarah Clarke:And a lot of the time I experience something that other people do, which is that you can produce so much output that is nearly but not quite there.
Sarah Clarke:And you have to ask whether you'd actually be better off just thinking it through, working with other people, and producing it with more traditional methods.
Sarah Clarke:So I think we're all going on that journey of discovery right now.
Sarah Clarke:Yeah.
Jo:Do you see any difference?
Jo:I mean, just because we're on Women with AI, I'm going to ask the question, because in all of the reports and things I see coming out, it seems to be a lot of men at the top of these organisations that are pushing all this out.
Jo:But I'm finding that there seems to be lots of women working and looking at the ethics and the bias and that kind of thing.
Jo:Have you come across any sort of differences like that, or do you see different ways that people work?
Sarah Clarke:I think I spent as long as I did volunteering, trying to gain the architectural understanding, the governance understanding, the risk understanding, and to look a little bit at the potential audit picture, because I knew that clients don't just need someone who's knowledgeable, they need someone they have confidence in.
Sarah Clarke:So I needed to get to the stage where I didn't just understand it.
Sarah Clarke:I could simplify concepts to have a conversation and answer the second, third and fourth question.
Sarah Clarke:And I know that very few people have the headspace to actually get to the stage of understanding that I've reached through dedicating that time.
Sarah Clarke:And a lot of the challenge is just spotting who's a credible source of information, spotting who knows their stuff.
Sarah Clarke:So there's a double edged sword to this.
Sarah Clarke:I think it's very, very difficult to get up to this speed.
Sarah Clarke:And it's also difficult because people are still struggling to spot who is credible.
Sarah Clarke:We don't know the right questions to ask to work out who a good specialist is.
Sarah Clarke:And that's a really, really uncomfortable place to be.
Sarah Clarke:And I think that there is a separation between women and men in some of these technical fields that's probably represented by what happens with CVs when people look at the technical and procedural requirements that you need for a job profile.
Sarah Clarke:It's well documented that There's a gender difference where men will tend to go, I'll give that a crack.
Sarah Clarke:I'm sure I'd be able to learn it, that'll be fine.
Sarah Clarke:But women will look at the entire set of requirements and often exclude themselves from a candidate pool because they don't meet each and everything.
Sarah Clarke:Now that's partly down to writing absolutely ridiculous job descriptions.
Sarah Clarke:I mean, you've got job descriptions out there saying we'd like people to have five years experience in generative AI.
Sarah Clarke:It's only been used at scale for two years.
Sarah Clarke:You've probably got a cohort in the hundreds, maximum, who've been working with this kind of transformer-based model for any longer than those two years.
Sarah Clarke:So I think we need to create spaces to be comfortably clueless.
Sarah Clarke:I'm now talking to the people from Women in AI Governance.
Sarah Clarke:Shoshana Rosenberg and Emil Toloo have created that organisation to do exactly that: to create comfortable spaces for people who are at different stages in their journey, with different kinds of specialisms, to go, do you know what, I have no idea how to describe this to somebody.
Sarah Clarke:Or I've got this question being asked.
Sarah Clarke:I wouldn't know how to answer this.
Sarah Clarke:Or here's the latest thing on this.
Sarah Clarke:This is worth your time reading.
Sarah Clarke:Those are the conversations that we're trying to make space to have in this kind of translator governance space so we can help other people do that when they're inside businesses.
Sarah Clarke:I have a habit of saying yes to things, so I'm probably going to be the Northern England chapter chair for them and start to have a set of webinars where people can have those kind of conversations.
Producer David:Hello, producer David here.
Producer David:Just wanted to let everybody know that we had some technical difficulties during the recording at this point, so we will rejoin the conversation right after everybody was back online.
Jo:For the majority of people that haven't looked into AI, or don't know and are just accepting it, we need people like you looking at the ethics and the governance, the sort of trustworthy people behind it.
Jo:Because I don't trust all those people that are running these organizations.
Jo:I don't trust all these people that are pushing it out.
Sarah Clarke:It's a massive scramble to see how we can make money out of it.
Sarah Clarke:I mean the people who've made money so far are largely the big chip vendors, the big compute providers, the sort of Amazon Web Services and Google Cloud providers.
Sarah Clarke:The big consultancies have stood up offerings to sort of be the 'me too' businesses, you know, on that principle that you never get sacked for hiring McKinsey, because they're a trusted voice, though who that is depends on who you are and what your industry is.
Sarah Clarke:But yeah, so that's where the money has mainly gone so far.
Sarah Clarke:And actually some of the most stellar benefits and productivity uplifts are in that kind of management consulting space.
Sarah Clarke:Because what is management consulting, apart from coming up with a convincing representation of what works for you, which is exactly what models do.
Sarah Clarke:So they are going to produce that really great output at speed for those kind of businesses.
Sarah Clarke:So knowledge workers are probably at quite high risk from this.
Sarah Clarke:I feel it.
Sarah Clarke:There are parts of my historical job that other people could do really rapidly with a model, but my value add has always been making sure that it's relevant to you, your organisation, your people.
Sarah Clarke:Right-sized.
Sarah Clarke:Because if I just rocked up and said, you need to audit everything, you'd go, yeah, hilarious.
Sarah Clarke:That's funny.
Sarah Clarke:Or if I went into a rapid-delivery tech startup and said, we're all now going to sit around a table and talk about ethics, they'd go, yeah, love, jog on.
Jo:We haven't got time for that.
Jo:We're trying to do our jobs.
Sarah Clarke:Or they may have loads of time for that, but they haven't got any levers to pull that actually pause, slow down or change things at an exec level, because there's a limited time to establish market position.
Sarah Clarke:So it's understanding that.
Sarah Clarke:That kind of jogs into the philosophy and the systems thinking space, which is: there's the right thing, then there's the thing you can do right now, and there's the foundations for being able to do a better job later, while respecting the fact that you're not there yet, but also charting the journey to bring people along for the ride and respecting the pressure they're under.
Sarah Clarke:Because one of the things that I've always done, always prioritised, is working out who has the information I need to get a clearer picture of what the impacts of this might be, how we need to plan for it, how we need to manage it ongoing, and asking the people involved whether they have time in their day, what kind of training they might need and what kind of support they might need.
Sarah Clarke:And often the answer is, what are you doing to our systems now?
Sarah Clarke:How on earth are we going to do anything with this?
Sarah Clarke:I've got 750 things in my queue that I need to get across.
Sarah Clarke:I can't stop and play with this.
Sarah Clarke:And I was like, okay, this is the potential of it.
Sarah Clarke:If you see the potential in it, I need to go to your management team and tell them that you need time made in your day, or you need to nominate someone to go and fact-find and bring the information back to the team.
Sarah Clarke:What do you need me to do?
Sarah Clarke:What do you need me to champion for you?
Sarah Clarke:And that wasn't typically how technical functions operated, but that was always how I operated.
Sarah Clarke:And I was trying to bring a lot of those lessons learned to every stage of my career development.
Sarah Clarke:And it's starting to become more common.
Sarah Clarke:But it's also designing things that don't need care and feeding, don't need someone in a room, don't need constant hand-holding, for people to get a little bit of their own feel for things, with some records kept.
Sarah Clarke:And that's where my triage questionnaire comes in.
Sarah Clarke:It's aiming to be 30 minutes or less.
Sarah Clarke:Unless you hit every red flag and every, you know, legal red line going, then it might be a bit longer.
Sarah Clarke:But generally it's really just all those questions that all the different functions ask.
Sarah Clarke:You'll have data protection coming in: what are you doing with data?
Sarah Clarke:You'll have data science go, right, let's sit down and work out what your data looks like and what we're going to do with it.
Sarah Clarke:You'll have IT going, which systems do you need to integrate with?
Sarah Clarke:You'll have procurement going, which suppliers do you need to sign up?
Sarah Clarke:You'll have security going, how's access going to work?
Sarah Clarke:And how are we going to keep our systems secure from this?
Sarah Clarke:And everyone's going, just leave me alone.
Sarah Clarke:I just want to do cool stuff with this.
Sarah Clarke:And that's how people just walk around the edges.
Sarah Clarke:If all of that needs to get done, that's where you get the 'regulation bad, innovation good' equation coming up.
Sarah Clarke:So everything I design is to say, okay, all you different teams, you're all going to ask the same kind of housekeeping questions, like what are you planning to do with what kind of systems, when is it going live, all that stuff.
Sarah Clarke:Most of you are going to want to understand what the risk exposure of the data involved is.
Sarah Clarke:So let's ask about that.
Sarah Clarke:Most of you are going to want to understand what kind of systems it needs to link into, because that impacts all kind of things.
Sarah Clarke:So it's finding those questions you can ask early, deduplicating for all the questions that other functions tend to ask.
Sarah Clarke:And actually, for a business with lots of different moving parts, a lot of that will be mature and embedded in more traditional, more established processes for technologies that are better understood.
Sarah Clarke:It might be on a portal with an intake form where everyone's comfortable with answering it and people know how to deal with the outputs to channel work.
Sarah Clarke:But they won't with generative AI, because that's new.
Sarah Clarke:It's still only been out there two years, and people are still deciding whether to use it in anger at scale.
Sarah Clarke:And there's also a requirement being backfilled for other machine learning and algorithms to go through that same kind of process with the regulation, which is probably a little bit overdue.
Sarah Clarke:So you need to find those things, and you need to build a business case for training people up, or maybe getting a bit of consultancy, or maybe buying a whizzy tool that will keep all these records for you.
Sarah Clarke:I mean, coming up through data protection when GDPR came in, there was the battle of the big governance tool vendors, like the OneTrusts and the TrustArcs, and established governance tool vendors with new modules.
Sarah Clarke:Now it's the battle of the big AI governance tools and they're essentially all doing the same thing with some small delta for the specific technology.
Sarah Clarke:And that was where I was at when trying to go and do my research: I knew this wasn't all new, I knew good principles would still apply, but I couldn't feel the edges.
Sarah Clarke:And that was a really, really devastatingly uncomfortable place to be, because I needed to have a constructive look at that and collect the information to fill in the gaps.
Sarah Clarke:Now I'm there.
Sarah Clarke:I'm there well enough to help other people create a baseline of: this stuff makes sense.
Sarah Clarke:This is stuff I've always done.
Sarah Clarke:This is stuff I've always thought about when I'm wanting to use something new.
Sarah Clarke:But what questions do I ask for the brand new bleeding edge stuff?
Sarah Clarke:And that's what I've been distilling.
Sarah Clarke:I want to make a version of my sort of intake questionnaire available to businesses that just don't have anyone who does data science, anyone who does security or data protection.
Sarah Clarke:They just want to use the new thing and they don't want to get bitten for having used it.
Sarah Clarke:Or they might be a small business that wants to provide services into a regulated firm, or they might be a school or a small government department that doesn't have the central resourcing and the expertise and the specialists are all sitting there saying you must use this brand new AI thing.
Sarah Clarke:They maybe want to get their own idea of what it means to them, to then have a more credible toe-to-toe conversation about trade-offs with central functions.
Sarah Clarke:That's kind of where I'm going with this.
Sarah Clarke:I've always been quite focused on schools from that perspective, because I'm a mum of two, for a start.
Sarah Clarke:I've always offered free consultancy in whatever I'm expert in to schools, because they are such rich targets for sometimes over-enthusiastic vendors who see this great longitudinal data set that they can acquire by getting embedded into school environments, and staff are just not equipped to get across that, for the positive and for the things that should have another question asked.
Jo:People who want to take advantage of that, people that need a you, or that need your triage.
Jo:What's the best way for people to go about that?
Jo:Like how can people find you or what are the sort of places that you recommend people go to to find out more information?
Sarah Clarke:Well, the main place people can find me, yeah, is on LinkedIn.
Sarah Clarke:And actually one of our biggest challenges, especially when we've got great shifts in the technology and solutions space, is who bubbles up as supportive and honest in this space.
Sarah Clarke:We all have communities of people who we trust, who we would work with in a second.
Sarah Clarke:I've mentioned schools now.
Sarah Clarke:I would always recommend people talk to a lady called Claire Archibald.
Sarah Clarke:She's just joined a new law firm.
Sarah Clarke:She's been amazing, absolutely incredible at supporting schools, meeting them where they are to do a lot of the required governance.
Sarah Clarke:I have a whole other list of people; people are always welcome to reach out to me and I'll point them to folk that I trust.
Sarah Clarke:If people want to reach me, the best place at the moment is LinkedIn, or I'm happy to share contact details with you after the show to put in a link.
Sarah Clarke:We've always talked about how we put good people in touch with people in need and we're still developing that for this tranche of expertise.
Jo:Well, I think you're doing a great job, and you've left me feeling inspired.
Jo:Well, with more questions to ask and more to think about, but in a good way.
Jo:So excellent.
Sarah Clarke:That's the aim.
Sarah Clarke:And you know, obviously I've thrown a lot at you, and you've asked a lot of questions, which have been brilliant questions.
Sarah Clarke:So thank you.
Sarah Clarke:Thank you, Jo.
Sarah Clarke:I will talk about this until someone stops me.
Sarah Clarke:So what I need to have happen is for people to come back to me with questions.
Sarah Clarke:If I haven't made something understandable, it's my fault.
Sarah Clarke:You are not expected to be expert enough to pick out what I've been up to my eyebrows in for the last four years.
Sarah Clarke:So never worry about asking more questions.
Jo:I love that.
Jo:I love that.
Jo:Well, I probably will come back to you with more questions.
Jo:And yeah, if anyone listening has got questions, Sarah's LinkedIn link will be in the episode links.
Jo:So get in touch or get in touch with me.
Jo:And that just leaves me to say, Sarah Clarke, thank you for coming on Women with AI.
Sarah Clarke:Thank you for having me, Jo, it's been lovely.