AI and the Legal System: Bias, transparency and ethics
Episode 3 • 17th March 2026 • Cross-Examined • The Law Institute of Victoria
Duration: 00:22:37


Show Notes

This episode examines how artificial intelligence is reshaping legal practice and the broader justice system. University of Melbourne Professor Jeannie Marie Paterson explores both the promise and the pitfalls – highlighting how AI can deliver efficiency gains while raising critical questions about governance, transparency and the need for sustained human oversight. Drawing on real-world failures, she explains how opaque systems can embed bias and produce hallucinations that undermine legal ethics. The conversation also considers how regulation, professional responsibility and improved AI design can ethically enhance the legal system.

Guest:

  1. Jeannie Marie Paterson, Professor of Consumer Protection and Technology Law at the University of Melbourne and a Fellow of the Australian Academy of Law
  2. Co-founding Director of the Centre for Artificial Intelligence and Digital Ethics (CAIDE)
  3. https://law.unimelb.edu.au/about/staff/jeannie-paterson | https://www.linkedin.com/in/jeannie-marie-paterson-225b4a33

Host:

  1. Karen Finch, Head of Legal Policy and Innovation, Law Institute of Victoria
  2. kfinch@liv.asn.au | www.linkedin.com/in/karen-finch

Episode Overview

This episode explores how artificial intelligence (AI) is reshaping legal practice and the broader justice system. University of Melbourne Law Professor Jeannie Marie Paterson discusses both the promise and the pitfalls of AI, highlighting efficiency gains in tasks like document review and research, while stressing the need for strong governance, transparency and ongoing human oversight.

The conversation examines real-world failures such as Robodebt and COMPAS, illustrating how opaque systems can embed bias and undermine legal ethics. Jeannie also addresses the growing issue of AI hallucinations, which can produce convincing but false legal information, reinforcing the importance of rigorous verification by lawyers.

The episode considers how regulation, professional responsibility and improved AI design can support more responsible use of technology, as well as whether AI can improve access to justice and ethically enhance the legal system.

Topics & Timestamps

  1. [00:24] Intro and guest welcome
  2. [01:38] How AI is currently helping lawyers
  3. [03:42] Benefits and challenges of predictive AI
  4. [04:46] Lessons from the Robodebt and COMPAS scandals
  5. [07:15] The issue of hallucinations
  6. [10:38] The importance of accuracy and transparency
  7. [13:14] Regulation challenges
  8. [15:14] Can AI improve access to justice?
  9. [17:56] What separates lawyers’ skills from AI?
  10. [20:30] What the future of AI and the law looks like

Key Takeaways:

  1. The opportunities for AI in law are almost unlimited, but most of the profession’s attention is currently on tools that do document and low-level diagnostic work.
  2. Predictive AI in law is useful but raises a lot of risk because it can amplify existing historic biases.
  3. An algorithm is only as good as the data it is trained on, so lawyers need to be wary of outcomes being hallucinations. They may sound legitimate but have no factual basis.
  4. It is challenging to consider regulating AI while the profession is still working out how to use it.
  5. AI could assist lawyers to broaden the scope of the services they provide.
  6. Whatever the future of AI use in the legal system, it must be used in a way that is ethical, responsible and transparent.

Resources & Links:

  1. LIV Artificial Intelligence Hub – essential and up-to-date AI resources for Victorian practitioners
  2. Ethical and Responsible Use of Artificial Intelligence – LIV AI Ethics Guidelines
  3. “Supervising AI” – LIJ article by the Legal Practitioners’ Liability Committee
  4. “AI and democracy” – LIJ article by The Honourable Justice Melissa Perry
  5. Centre for AI and Digital Ethics (CAIDE)
  6. “Lessons from Robodebt” – LIJ article by Matthew Munro and Nidal Sayegh
  7. AI Hallucination Cases Database
  8. “Pro bono: Simple high tech” – LIJ article on the Justice Connect triage tool

For the latest insights on Victorian legal developments and to hear directly from leading voices in the profession, subscribe to Cross-Examined on Apple Podcasts, Spotify or visit the Law Institute of Victoria website.

  1. Follow us on LinkedIn for legal insights and episode updates.
  2. Enjoyed the episode? Leave a rating to help other legal practitioners find and benefit from the series.

About This Podcast:

Cross-Examined is a new podcast from the Law Institute of Victoria. Tune in to hear experts discuss hot topics in the law and the changes shaping the legal profession. Regular episodes will cover everything from AI and cyber threats to ethical dilemmas, workplace taboos and practice management insights.

This podcast is recorded on the traditional lands of the Wurundjeri people of the Kulin Nation. The Law Institute of Victoria acknowledges the Traditional Custodians of Country across Australia. We pay our respects to Elders past and present.

Disclaimer:

This podcast is for informational purposes only and is not intended to replace professional legal advice. The views expressed in this podcast do not necessarily reflect the views of the Law Institute of Victoria (LIV). The LIV is not responsible for any losses, damages or liabilities that may arise from the use of this podcast. Listeners should seek independent legal advice for their matters.

Production Information:

  1. Produced by: The Law Institute of Victoria
  2. Producer and audio editor: Garreth Hanley
  3. Music: Garreth Hanley
  4. Copy and show notes: Louise Surette

Connect With Us:

  1. 📧 Email: podcasts@liv.asn.au
  2. 🌐 Website: www.liv.asn.au
  3. 🔗 LinkedIn: www.linkedin.com/company/law-institute-of-victoria
  4. 📱 Apple Podcasts: Cross-Examined
  5. 🎵 Spotify: Cross-Examined

Transcripts

Garreth Hanley:

Welcome to Cross-Examined, a podcast by the Law Institute of Victoria.

Karen Finch:

Welcome to Cross-Examined. My name is Karen Finch. I'm the head of Legal Policy and Innovation at the Law Institute of Victoria.

Today, we're exploring some important questions facing our courts and legal systems.

What happens when we hand decision-making power to algorithms that might carry invisible biases? What does fairness look like when it's built into code? How do we hold systems accountable when we may not fully understand how they work? And what does justice mean when the decision maker is a black box?

To help us untangle these questions, we're very lucky to have Professor Jeannie Marie Paterson with us today. Jeannie is a Professor of Law at the University of Melbourne and a Co-Founding Director of CAIDE, or the Centre for Artificial Intelligence and Digital Ethics for those not familiar with the acronym.

Jeannie's research informs courts and government inquiries, and she collaborates with technologists and health experts on responsible AI for high-risk settings. Jeannie is also a Fellow of the Australian Academy of Law, and her work sits at the intersection of consumer protection, AI regulation, bias and liability.

Jeannie, welcome to Cross-Examined.

Jeannie Paterson:

I'm delighted to be here.

Karen Finch:

And we are so delighted to have you. So, let's get into it. We've got lots of questions to get through. Let's start with the opportunity. What's the value proposition of AI in legal work and where do you see it making the biggest practical difference for lawyers and their clients?

Jeannie Paterson:

Well, the opportunities for AI in law are almost unlimited, and I think we're still seeing the creative imagination of technologists and lawyers of what it can do. But at the moment, I think probably most attention is on the tools that do document work or low-level diagnostic work, and that sounds boring, but it's tremendously time-saving for lawyers, provided they use the technology properly.

So, tools can interrogate documents, they can draft emails, they can summarise long policy documents that have been produced by governments, they can summarise cases and they can do preliminary research. And before the audience shrieks and goes, ‘Don't rely on it!’, I said low-level diagnostic work. These AI tools are a great beginning for the value-added intellectual work that lawyers do and that is at the core of their function, which is applying analytical skills, judgment and knowledge to those raw materials that they can use AI to bring together.

Karen Finch:

Love that. And so let's now go on that journey, walking through what happens when legal decision-making systems operate without sufficient oversight or clarity about how they're working.

Jeannie Paterson:

Well, we're now moving to the horror scenario, because we can use AI to help lawyers do their legal work, we can use it for low-level diagnostics, but it's also possible to use AI tools in other ways, to make predictions. And predictive AI, which is trained to find patterns in data and make predictions about future occurrences or patterns or predilections, is really useful in all sorts of areas.

That's how a lot of medical AI works: it's trained to make predictions about the particular progression of a disease, or what the reading on a scan might mean. We can use that in law as well. We can use it for predictions about recidivism, predictions about appropriate bail, predictions about how courts perform, predictions about likely outcomes of cases. But using predictive AI in a legal context raises a lot of risks, because what we know about predictive AI is that it can get it wrong, it can amplify existing historical biases, and it's really hard to interrogate.

And there is the error – hallucination – problem when we're talking about generative AI. So, AI comes with a number of risks, a bundle of risks, and it's really important for anybody using the technology to understand what those risks are, so they can put in place the precautions to alleviate them and, in fact, make the decision about where AI is most useful for them.

Karen Finch:

And we've seen in the past some of those big failures with predictive technologies, say COMPAS and Robodebt, and now we're seeing it move into those hallucinations. So, what can we learn from the COMPAS situation, or Robodebt, or where we are now with those hallucinations?

Jeannie Paterson:

So, just to go back and explain the Robodebt and COMPAS examples – I'm sure most listeners will know, but Robodebt was the use of an algorithm to compare tax returns and Centrelink payments with the aim of identifying who might be defrauding the system. And the algorithm was a pretty straightforward algorithm: it was comparing two databases to find discrepancies. But the algorithm was premised on a mistaken understanding of the law, and therefore people were asked to repay sums that weren't in fact owing.
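
To make that failure mode concrete, here is a minimal sketch – an editorial illustration, not the actual Robodebt system – of the kind of naive data-matching at issue. It assumes the documented 'income averaging' flaw, in which an annual tax-office income figure was smeared evenly across 26 fortnights and compared against fortnightly Centrelink declarations:

```python
# Illustrative only -- NOT the actual Robodebt code. It assumes the
# documented "income averaging" flaw: an annual ATO income figure is
# smeared evenly across 26 fortnights and compared with what the person
# declared to Centrelink each fortnight.

FORTNIGHTS_PER_YEAR = 26

def flag_discrepancies(annual_tax_income: float,
                       declared_fortnightly: list[float],
                       tolerance: float = 1.0) -> list[int]:
    """Return the fortnights where the averaged annual figure exceeds
    the declared income -- the naive 'discrepancy' test."""
    average = annual_tax_income / FORTNIGHTS_PER_YEAR
    return [i for i, declared in enumerate(declared_fortnightly)
            if average - declared > tolerance]

# A worker who earned $26,000 in one six-month contract, then nothing,
# and reported honestly every fortnight:
declared = [2_000.0] * 13 + [0.0] * 13
flagged = flag_discrepancies(26_000.0, declared)
print(f"{len(flagged)} fortnights falsely flagged as under-declared")  # -> 13
```

A person with honest but lumpy income is flagged in every fortnight they legitimately earned nothing – the 'discrepancy' is an artefact of the averaging, not evidence of fraud.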

COMPAS is a US example. The COMPAS system is an algorithm that's used to predict recidivism, used for the purposes of bail and in fact sentencing in some US states. The algorithm itself is based on a number of actuarial factors about who's likely to re-offend, drawing on their social background and life experience.

The problem with the COMPAS algorithm is that it tends to suggest that black offenders are more likely to re-offend – they're rated higher risk – so the consequences fall more heavily on black offenders, which isn't acceptable in modern society and is a reflection of historical policing patterns in the US, where police were deployed more heavily in black neighbourhoods, among other factors.

But the problem with both of those systems is that with those algorithms, particularly the COMPAS algorithm, people don't really know what factors are in there. They don't know how they work. In the word that we use in AI governance or AI ethics, it's not ‘transparent’, and if something's not transparent it's very hard to interrogate, very hard to scrutinise its use, very hard to make decisions about appropriate use and very hard to govern.

Karen Finch:

And so, just touching on and delving a little bit into that transparency: now that we've got these generative AI systems that are creating these hallucinations – and I know you've got some figures we can look at – what does the lawyer out there listening to this need to actually understand about transparency in how these systems are trained and designed?

Jeannie Paterson:

Yeah, so lawyers who are listening will go, ‘That's okay, I'm not using an algorithm to predict a sentence or decide whether someone is defrauding the government. I'm using, or I'd like to use, AI to review documents or help me with preparation for a case I'm running or draft my emails – what could go wrong there?’

Well, it's kind of the same issue, in the sense that the algorithm is only as good as the data it's trained on. Most of the legal tech tools are still premised on general-purpose generative AI models, such as OpenAI's, which have then been refined for legal work. That means they're better at legal language, and they're often using a retrieval method where their answers are drawn from legal documents, but they're still just based on statistics, and therefore errors can happen.

So, hallucinations. A hallucination is an output – if you're doing research, it'll give you information about the law or a case that looks realistic but is completely false, or looks realistic but the citations are false. And a lot of people will say that's not a hallucination, because a hallucination suggests a mental state, and an algorithm has no mental states – it's just an error. The sense in which it is a hallucination is that generative AI works by having learnt the relationship between different kinds of words, so that it can produce an output that sounds realistic. It's producing a kind of word salad, and usually that makes sense, but sometimes it doesn't – that's the sense in which hallucinations are hallucinations. It's a word salad: it sounds reasonable, or it kind of sounds reasonable, but it may have no factual basis.
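
A toy model can show what 'having learnt the relationship between different kinds of words' means. The bigram sketch below is hypothetical and nothing like a production legal AI, but it illustrates the mechanism: it only ever emits words that plausibly follow one another, so its output sounds fluent, yet it can splice two real cases into a citation that never existed:

```python
import random

# Two (invented) training sentences about two different cases.
corpus = (
    "the court held in smith v jones [1998] hca 12 that the duty applies . "
    "the court held in brown v crown [2003] hca 7 that the claim fails ."
).split()

# "Learn the relationship between words": record which word follows which.
follows: dict[str, list[str]] = {}
for a, b in zip(corpus, corpus[1:]):
    follows.setdefault(a, []).append(b)

# Generate by repeatedly sampling a plausible next word.
word, output = "the", ["the"]
for _ in range(14):
    word = random.choice(follows.get(word, ["."]))
    output.append(word)
print(" ".join(output))
# Runs can produce e.g. "the court held in smith v crown [2003] hca 12 ..."
# -- fluent, citation-shaped, and entirely non-existent: a word salad.
```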

Now, the AI is getting better – hallucinations are being reduced by things like RAG, retrieval-augmented generation, and most of the legal tools we’re being given will tell you they've reduced the prevalence of hallucination – but they simply can't get rid of it altogether, because throwing words together is the way generative AI works. Hallucinations – or errors, if you don't like that term – are part and parcel of that.
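
For readers who want the mechanics behind RAG, here is a minimal sketch under stated assumptions: `embed()` is a crude bag-of-words stand-in for a real embedding model, the corpus entries are invented, and the prompt format is illustrative rather than any vendor's actual API. The point is the shape of the pipeline: retrieve relevant sources first, then instruct the model to answer only from them, with checkable citations.

```python
import math

def embed(text: str) -> dict[str, float]:
    """Stand-in 'embedding': bag-of-words counts (a real system uses a model)."""
    vec: dict[str, float] = {}
    for w in text.lower().split():
        vec[w] = vec.get(w, 0.0) + 1.0
    return vec

def cosine(a: dict[str, float], b: dict[str, float]) -> float:
    dot = sum(v * b.get(w, 0.0) for w, v in a.items())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(query: str, corpus: dict[str, str], k: int = 1) -> list[str]:
    """Rank source documents by similarity to the query; keep the top k."""
    q = embed(query)
    return sorted(corpus, key=lambda doc: cosine(q, embed(corpus[doc])),
                  reverse=True)[:k]

# Invented corpus of source passages, keyed by a citable id.
corpus = {
    "smith_v_jones_1998": "negligence and the duty of care in road accidents",
    "brown_v_crown_2003": "breach of contract and the assessment of damages",
}
sources = retrieve("what is the duty of care in a road accident", corpus)

# The model is then prompted to answer ONLY from the retrieved passages,
# citing their ids -- so every proposition points to a checkable source.
prompt = "Answer only from these sources, citing ids:\n" + "\n".join(
    f"[{doc}] {corpus[doc]}" for doc in sources)
print(prompt)
```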

There is a database that you can look at that lists the number of cases that have gone to courts across the world where the court has identified hallucinated references, erroneous references. That number is now at 853 cases that have been identified by courts.

Now, what we don't know, Karen, as you mentioned to me earlier, is whether that's the tip of the iceberg – whether that's a small proportion or a large proportion of all the cases where AI has been used. We don't know. And why don't we know? Because there's very little data on the effectiveness of these tools, and we still haven't picked up all the cases where things have gone wrong. So, it's a bit of a scary figure in that context.

Karen Finch:

It certainly is, and I think the really important thing to emphasise there, given what you've just shared with us, is: if you're a lawyer out there listening, go and interrogate the source – don't take it at face value. I've been listening to a podcast recently that talked a lot about how generative AI wants to do the right thing by the person putting in the prompt. They really want to please us as humans, and often they will give us the answer they think we want. So, if you're looking for a case and you really want that case to help your client, then they're likely to give you a case, whether it's real or not. So it's about really understanding that interrogation and, like you say, making sure that you're going back to the sources, not taking it at face value.

Jeannie Paterson:

So that's a really interesting point, because generative AI is also sycophantic. It's trying to give the result that we want. It's a kind of bias, actually – it's biased towards predicting the output that we actually want, as opposed to the output that may be inconvenient. And in that context we have to check. As lawyers – accuracy is particularly important for lawyers – we need to check the outputs of the generative AI.

So, then the question is raised: well, why is it useful? Is there any value in the use of those tools? And I think what's happening is that the legal tools available on the market can't remove the risk of hallucination – the risk of error – altogether, but they're making it easier, or more amenable, for people to check.

So now, if you use even a free-to-air tool, it'll give you little references down the side of the propositions that you can click on to check the source, and if you're using a specifically designed legal tech tool, it will give you little references down the side that you can click on to check the output of the AI. So the value proposition is that the lawyer still needs to check, but the checking is sped up because it can be done quite easily.
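
One way a firm might operationalise that checking step is an automated pre-filing pass over AI drafts. The sketch below is hypothetical: `KNOWN_CASES` stands in for a real citator or court-database lookup, and the citation pattern is deliberately crude – anything the authority list cannot confirm is flagged for a human to verify.

```python
import re

# Hypothetical stand-in for a real citator / court database lookup.
KNOWN_CASES = {
    "Smith v Jones [1998] HCA 12",
    "Brown v Crown [2003] HCA 7",
}

# Deliberately crude: matches 'Name v Name [year] COURT number'.
CITATION = re.compile(r"[A-Z][a-z]+ v [A-Z][a-z]+ \[\d{4}\] [A-Z]+ \d+")

def unverified_citations(draft: str) -> list[str]:
    """Citations in an AI draft that the authority list cannot confirm."""
    return [c for c in CITATION.findall(draft) if c not in KNOWN_CASES]

draft = ("As held in Smith v Jones [1998] HCA 12, and confirmed in "
         "Smith v Crown [2003] HCA 12, the duty applies.")

for citation in unverified_citations(draft):
    print("CHECK BEFORE FILING:", citation)  # flags the hallucinated case
```

The flagged item still needs a human decision; the tool only concentrates the lawyer's attention where the risk is.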

That's a form of transparency, which is saying, this is how the results were produced. But that moves the challenge for lawyers from understanding that they need to check to dedicating the time to checking. Because if we have 853 cases identified by courts – I used to think those cases were people who didn't understand the technology, who just made a mistake because they didn't know about hallucinations. But there's been so much press about hallucinations in legal cases that I think a lawyer would have to be living under a rock not to know about this problem. So now I think it may be that people are too strapped for time: they know they should check, but they're not taking the time to check, and they just need to do it. Part and parcel of the legal duty of care and skill is that we have a system and process for ensuring that any outputs from AI are scrutinised in the appropriate way.

Karen Finch:

So, moving on to the regulatory or governance changes that you think the legal profession may need to make to ensure AI is being used safely in legal decision making. There's been a lot of promise about whether it's going to improve access to justice – I'm not sure about the delivery – and it seems bias has been seen to affect vulnerable people in particular. What's your view on regulatory or governance changes?

Jeannie Paterson:

It's difficult to think about how we regulate or govern this technology when the legal profession is still working out how to use it. Because, as I've said a couple of times, at the moment AI in the law in Australia is being used, if at all, for sort of administrative back-office purposes – perhaps organising documents or keeping client records – or for diagnostic work, the sort of drafting and summarising I've spoken about.

How do we regulate a use that's moving? How do we regulate that sort of use? Is there a potential for AI to be used more actively? The core work of lawyers is, as you've just mentioned, decision making – decision making by lawyers about how they pursue clients’ cases, or decision making by courts. There's this sort of constant pressure for the AI to move closer to the core function of lawyers and judges and barristers.

We're still deciding where the safe space is and what it means for us, so it's hard to think about a regulatory system – except that lawyers are already subject to ethical and legal obligations, and those obligations are expressed as principles: use care and skill in doing legal work, the duty to the court, the duty to the client, avoiding conflicts. And I think all of those ethical and legal duties can help lawyers navigate this new, quickly evolving world of AI, but they've got to remind themselves of what those duties require and constantly re-evaluate their use of the technologies in the light of those duties. And they can only do that if they have a high level of understanding about the technologies and there is transparency about the use of the technology. So it still comes back to those core values, I think, at this point in time.

Karen Finch:

And your view on access to justice?

Jeannie Paterson:

There's been so much rhetoric about AI improving access to justice, and the examples of where that's actually been done are fairly narrow.

So, for example, there's the DoNotPay bot, which would produce – and does produce – letters for people who are subject to parking or other fines, to help them explain why they got the fine or why they shouldn't have it. But that's a really simple system; it's not a particularly sophisticated AI. It's a simple old ‘here's the problem, here's the letter’ system.

We've got the Justice Connect triage tool, which helps to understand the problems of people who approach Justice Connect looking for legal support and advice, and to translate their problem into legal language so that the issue can be referred appropriately. But beyond that, we haven't really seen a lot of movement.

A lot of unrepresented litigants are using free AI tools, like ChatGPT or Gemini or Claude, to produce their pleadings for court, and what we're hearing is that those pleadings have become very long – they've become word salad, in fact – which is creating problems for courts in how to manage them. But I guess we shouldn't think that's surprising, because if the choice is to use AI to help you navigate court or nothing, then obviously the AI, from the perspective of the unrepresented litigant, is a good choice and is going to – from their perspective – help them. So that's a real question for courts: how do they manage the unrepresented litigant with long, word-salad submissions?

But it's also a challenge for lawyers to really get behind the rhetoric that ‘AI can help access to justice’ by thinking about the kinds of tools that might be designed to help people navigate court. And there's no reason why, instead of a general AI tool, we could not start to make available to litigants a legal AI that perhaps doesn't write all of their submission but helps them navigate the court process. We've been promising that for a long time, but it just hasn't quite come to fruition, and it probably requires a really genuine and deep conversation between technologists, courts and lawyers about how, even in small ways, the technology can assist people to solve their legal problems.

Karen Finch:

That's right. And the elephant in the room is: could lawyers potentially be fixing their costs because they're using these generative AI tools? And maybe then self-represented litigants might have a third choice, which could be actually going to lawyers, because they'll be at a lower price point.

Jeannie Paterson:

I think that's absolutely right, and if lawyers really do believe that they are different from an AI, that lawyers bring something to legal decision making that's not possible through AI at this point in time – which is what lawyers say, they say well we've got judgment and experience and expertise and intuition and the like. If that is really true that lawyers do bring something, then lawyers should be looking to make those special skills available to the widest possible category of people, and they should be thinking about how they can add value to what they do using AI, to broaden the scope of the services that they provide. So, I think that's absolutely right, and actually I know that is a particular passion of yours Karen, and I'm looking forward to seeing movement on that particular topic.

Karen Finch:

Me too Jeannie, me too. So, this leads us on very nicely to my next question, which is, what are you most optimistic about and what do you think is working and what is needed to use AI more ethically. What's your most optimistic approach?

Jeannie Paterson:

Well, I'm pretty optimistic about AI generally, in the sense that it's a technology being offered to us that I think can help us in our day-to-day work – as you say, perhaps even to bring legal costs down – and which, quite frankly, we can't avoid. And so we kind of need to jump in and use it creatively and see what can be done for the betterment of society.

If you look across to the medical field, there are lots of examples where AI is starting to make real changes for the better health of the community, in terms of early diagnosis of disease. And people have different views about AI scribes, but they're helping doctors manage their administrative processes in a way that's more effective.

I think the main thing for me is that we as humans do still have control. There's a lot of catastrophic discussion about AI – that agentic AI is going to take over from humans and make decisions that are systemically bad for humanity. I think we're not quite at that point yet. So, I do have this belief that if we all have a go, roll up our sleeves, learn how to use the technology, learn how to use it well, and then apply our creative human minds to the ways it can make the world a better place in just small details – it might be that I have to spend less time on emails, or it might be that lawyers get to bring costs down while still earning an income that allows them to pay their rent – then we can make real change. So, that's my optimistic perspective.

Karen Finch:

And I'm going to unveil my crystal ball here and ask, what do you think AI is going to look like in the year 2036?

Jeannie Paterson:

Well, I have no idea. We established the Centre for AI and Digital Ethics in 2020, which was before generative AI. And people at that point were saying, ‘Why have you done this? AI is not that significant.’ And yet, a few years later, generative AI burst onto the scene and there you go – everybody is interested in the topic of ethical AI and AI regulation.

No idea what we'll be doing in 2036!

Karen Finch:

And don't forget the supervision, which is the most important thing as well.

Jeannie Paterson:

That's right, that’s right – we can't just leave it. I sometimes hear, ‘Leave the AI to the junior lawyers’. That's not how it works. Everybody needs to be responsible for the use of AI, and if you're a more senior lawyer relying on the work of junior lawyers, you need to know how they might be using the technology so you can ask the right questions. And junior lawyers still need guidance and training.

Karen Finch:

Love it. Well, there's so much more we could explore, Jeannie, but unfortunately that's all we have time for today. So thank you so much for joining us on the show.

Jeannie Paterson:

Total pleasure.

Karen Finch:

And thank you to everyone that's listened to Cross-Examined today. You’ll find links to the resources from the Law Institute of Victoria, Jeannie's research and the Centre for Artificial Intelligence and Digital Ethics, aka CAIDE, in the show notes.

If you found this episode useful, please share it with your colleagues and make sure you subscribe so you don't miss out on future episodes. Until next time, thanks for listening to Cross-Examined.
