The "Nobody in Charge" Problem: Why AI Governance Matters to You
3rd February 2026 • All Things Human with Adele Wang • Adele Wang


Shownotes

Have you ever had an automated system deny your loan, flag your social media post, or change your work schedule without explanation?

In this episode, we dive into the "talk of the town" in tech circles: AI governance. While it sounds like a dry topic for lawyers, it's actually a deeply human issue that impacts our daily lives and livelihoods.

We explore the growing "accountability gap" where decisions are made instantly and at scale, yet no single human feels responsible for the outcome. From the workplace to customer support, we break down why the current lack of clear ownership is eroding trust—even when the technology is working exactly as designed.

Key topics covered in this episode:

  1. The Black Box Effect: Why "the system flagged it" is becoming the ultimate shield against accountability.
  2. Human Speed vs. AI Scale: How the transition from manual audits to instant, automated decisions has broken our traditional feedback loops.
  3. Ethics vs. Accountability: Why having an "ethics team" isn't the same as having someone empowered to stop or reverse a system.
  4. The Future of Trust: What leaders must do to bridge the gap between human responsibility and machine efficiency.

Join us as we pull back the curtain on AI governance and discuss why the most important question in tech today isn't "What can AI do?" but "Who is responsible when it does it?"

#AI #ArtificialIntelligence #TechTrends #AIGovernance #EthicalAI #DigitalTrust #FutureOfWork

Transcripts

Speaker:

Lately, I've noticed something interesting. People in the AI space keep talking about the need for AI governance, and I get why that phrase doesn't land. If you've never heard of it, don't worry. But it is the talk of the town amongst people who are working with AI, and it sounds abstract and bureaucratic, like something meant for lawyers or tech committees.

But here's the thing: this is not a technical issue. It's a human one. So if you just hang in here, I think you'll be able to feel how this issue impacts you, because the truth is most of us are already feeling it, even if we don't have a language for it yet.

Speaker:

In my Substack, I go more into how and why this is happening. But first, let me just translate what AI governance actually means. At its simplest, it's just a conversation about one question: when AI systems make decisions that affect people, who is actually responsible? That's it. That's the whole thing. Not the ethics frameworks, not the policies, not the white papers. Just: who owns the outcome when something goes wrong because of AI?

Speaker:

Let me give you a real-life example. Say you apply for a loan or a job, or maybe your health insurance suddenly denies coverage. You get an automated message: "After careful review, we're unable to approve your request at this time." Well, you want to know what happened, so you call. What happened? Why was I denied? And no one can give you an explanation that you can understand. There's no human, no one you can talk to who actually made the decision. There's a sense that it went into a black box and the algorithm decided.

So you call customer support, and what happens? The person on the phone says, "I'm sorry, that decision was made by the system." Not me, not my manager, not even the company really. The system. The AI. And this is where the problem starts to show up.

Speaker:

You might ask, okay, but who designed the system? And the answer is, well, that was a different team, in marketing, IT, whatever. So you ask, well, who decided to use it? And you might get an answer like "leadership." You might ask, well, who checks to see if the information on me is fair or accurate? Now, at this point, the rep probably has no idea, but if they did, they might direct you to compliance, another department. Then you might ask, well, who can I talk to to override this? This is incorrect. And the rep might, after a long pause, say, "I'm not sure anyone can." So now you're stuck.

Nothing illegal may have happened. Nothing malicious may have happened. But something deeply unsettling has, and you can feel it, even if you can't name it: who was responsible for why my loan was denied, or why my job offer fell through? And no one can give you a clear answer, because the responsibility and the accountability are now shared across five different departments with no one group in charge. Was it compliance? Was it legal? Was it IT? Was it marketing? Was it a specific person? Was it whoever designed the AI algorithm? It may have been a little bit of all of them, but if that's the case, what do you do?

Speaker:

Let me give you another example, this time from inside the workplace. Imagine that you work somewhere that uses software to track productivity, schedule shifts, or rate performance. On paper, it's supposed to be objective, right? It's supposed to be efficient and fair. But one day something changes: your schedule suddenly gets worse, or you're assigned fewer hours, or for some reason you're flagged as underperforming. You didn't get a warning. No manager pulled you aside. There was no conversation. So you ask your supervisor what happened, and they say, "The system flagged it."

Now watch as accountability dissolves again. You ask, what did the system flag? And the response you get is, "We don't really see the details." Then you ask, well, can it be wrong? Because I think it's wrong. And you get an answer of, "Possibly." And so you ask, well, can you override it? And maybe you get a long pause and an answer of, "Not really," or "I don't know how."

So the question becomes, all right, who's responsible, especially if it's wrong? HR didn't design it. Your manager didn't choose the AI model; they're just using it. IT just implemented what they were told to buy. Leadership approved it because it was the industry standard, so they did their due diligence. But now suddenly your livelihood is affected without a single human owning the whole decision.

And this is where people start to feel quietly anxious at work. Not because a machine exists, but because there's no clear human to talk to anymore. You can't negotiate with the AI. You can't explain context to it, like something that happened in your life. You can't appeal to its judgment or ask, is there a way to do this? What's the process? Who do you talk to? Your boss may not even know. And so when something goes wrong, everyone involved can honestly say, "This wasn't really my call." That's the problem.

Speaker:

Here's another example that you can probably relate to, and it's social media content moderation. Think about this: you create a post and it disappears and your account is flagged, or one of your videos is taken down, and you get a message saying it violated community guidelines. Well, which guideline? Which interpretation? Which human decided that my post violated community guidelines? So you appeal. You follow whatever their appeal process is, because most of these platforms have something, and you get another automated response, and it doesn't say anything. There's no villain, there's no malice. It's just a system acting without a face.

So in all of these examples, the issue isn't technology. It's that decisions with real human impact are being made without a clear, accountable human in charge. That's what people feel. That's why trust erodes, even when everything is working. What people feel in these moments isn't just frustration. It's a loss of agency. It's a sense that decisions about lives are being made somewhere else, by nobody in particular. And here's the key part: as I've said, when no one clearly owns a decision, the system feels unaccountable, even if everyone involved is well intentioned. That's the human experience of this problem.

Speaker:

Now, you might ask, "Adele, why didn't this used to be a big deal before AI? People have always been denied things." Well, in the past, even when organizations were messy, there were natural limits. Things moved slower, meaning decisions took time. Maybe they sat in a queue until they got to someone's desk. Humans were involved, and mistakes stayed local. There was often even an audit trail you could track when things went from this person to that group to this department, and you could do a forensic review and compile a sense of what happened.

AI changes all that, because now decisions can happen instantly, at scale, automatically. They can quietly reshape outcomes for thousands or millions of people all at once. And so now when something breaks, it breaks fast, without a clear owner.

Speaker:

This is why the conversation around AI governance is so hot right now, and why I've been in so many conversations around it: because it's a lot of pulling apart what needs to happen to establish the right level of accountability and ownership. I can't tell you how complex these conversations can get, and I'm all for it, because I think this is where we need to go. I'm talking to leaders who sometimes haven't even thought this far yet, and they're very glad that we are having these conversations.

Speaker:

And sometimes people will say, "Well, we have ethics. We've got someone in charge of our ethics that's supposed to keep an eye out for us on this." Well, here's why the ethics talk alone hasn't fixed this, because I've been curious. There are people who are paid a lot of money as ethics experts, who advise companies on ethics frameworks and the legalities. It's all very important, very good stuff, but it can't be confused with accountability. Yes, there are ethics teams, and people are thinking about this, but here's the difference. Ethics helps us talk about what should matter, which is really important. Accountability is about who actually has the power to stop, change, or reverse a system. And most ethics teams don't control whether a system launches or doesn't launch. They don't have the authority to slow it down. They also don't own the results. When a system has caused harm, they don't own the consequence. Their job is to raise concerns. They can flag issues, they can offer guidance, but they don't usually have the authority to say, "This stops here."

Speaker:

So this is the quiet truth most people sense right now, the one we're living inside: this strange gap where systems are making more decisions faster than ever before, but responsibility is becoming harder to locate, because AI is making a lot of the decisions. So if the AI did that, who was actually responsible? And people feel that. Even if you've never heard the term AI governance in your life, this is something that's going to impact everyone. You can feel it when something happens and no one can explain to you why, or when you appeal something and it goes nowhere and you never hear back, or when someone tells you they don't have the authority to override an automated outcome, that it's out of their hands. Accountability dissolves into the process. The accountability and responsibility are diluted.

Speaker:

So for the leaders listening, and I know some of you are: this is not about blame, it's about structural mismatch. Our organizations were built for a world where decisions moved at human speed. But AI systems don't. Until someone is clearly empowered to own AI-driven decisions, not just advise on them, not just document them, but take responsibility for them as part of their job, this feeling of instability will keep growing.

And for everyday people: just know that when systems act faster than humans can respond and no one's clearly accountable, this is why trust erodes, even if no one meant harm. It's not like someone was maliciously trying to do something to you, and I think that's the frustration. It used to be you could go confront somebody, but you can't now. This is what people are experiencing. It's not so much "I'm scared of the AI" or "I'm confused about the technology," but a sense that no one is really in charge anymore.

Speaker:

So in closing, it's not that I have a solution to this, but I'm in deep conversations around it, and I have some thoughts on what's needed that I'm going to share on my Substack, because otherwise this episode would run way too long. I don't think most people want less technology. I think they want to know who is responsible when technology makes decisions about their lives. That question is simple, and the fact that it's so hard to answer right now tells us something important about the moment we're in.

Thanks for listening. If this has been useful to you, let me know. I'd love your comments; they tell me what more you want me to delve into. Share this with someone who might find it valuable, and tune into my Substack for more depth on this issue. Thanks.
