Wired for Justice: AI in the Victorian court system
Episode 1 • 2nd March 2026 • Cross-Examined • The Law Institute of Victoria
Duration: 00:38:18


Shownotes

Are “robo-judges” fuelled by artificial intelligence set to decide cases in Victorian courts? Absolutely not, according to The Honourable Richard Niall, Chief Justice of the Supreme Court of Victoria, but AI technology does present significant opportunities for the judicial system.

In this conversation, we delve into how Victorian courts are already using artificial intelligence, including several promising pilot projects, as well as future opportunities for efficiency, staff wellbeing and cost reduction. We also discuss the risks, including hallucinated citations, deepfakes, data privacy and unlicensed legal practice.

Guest:

  1. The Honourable Chief Justice Richard Niall of the Supreme Court of Victoria
  2. www.supremecourt.vic.gov.au

Host:

  1. Karen Finch, Head of Legal Policy and Innovation, Law Institute of Victoria
  2. kfinch@liv.asn.au | www.linkedin.com/in/karen-finch


Episode Overview:

In this episode of Cross-Examined, Chief Justice Richard Niall of the Supreme Court of Victoria discusses the early implementation of AI in the court system in Victoria and how legal technology and court innovation are beginning to align in promising ways.

One example of experimental AI use is the pilot program at the Coroner’s Court, which uses AI to summarise large volumes of often traumatic statements, speeding up investigations and reducing staff exposure to distressing material. Another is the Supreme Court’s exploratory work on judicial use of AI, where the technology assists judges with tasks such as identifying competing arguments and summarising evidence.

The Chief Justice also points to the broader opportunities technology presents, including reducing legal costs and facilitating wider access to justice for the Victorian community.

The risks of AI in legal practice are very real and widespread, including hallucinated citations, deepfakes, privacy concerns and unlicensed legal practice. The Chief Justice calls for a measured and careful approach to AI adoption, while also emphasising that AI must only ever augment, never replace, human analysis and judgment in decision making.

Topics & Timestamps:

  1. [00:24] Intro and guest welcome
  2. [02:07] Current level of AI adoption in the Victorian justice system
  3. [04:36] AI pilot projects under way in Victorian courts
  4. [11:35] Hallucinations, deepfakes and unlicensed legal practice
  5. [25:05] The role of the judge in an era of AI
  6. [33:54] How AI could influence costs and access to justice
  7. [35:44] The future of AI and the law


Key Takeaways:

  1. Victoria is taking a careful but optimistic approach to implementing AI in courts and tribunals.
  2. AI is already being tested in Victorian courts, including a pilot in the Coroner’s Court that uses AI to summarise case material.
  3. The Supreme Court is testing AI on completed cases to help summarise evidence or identify key issues.
  4. The dangers of using AI in the legal system include hallucinated case citations, deepfakes, privacy concerns and unlicensed legal practice using AI tools.
  5. AI policies must remain technology‑agnostic and flexible, so they don’t become obsolete as AI capabilities rapidly advance.
  6. AI must augment, not replace, judicial decision making.


Resources & Links:

  1. LIV Artificial Intelligence Hub – essential and up-to-date AI resources for Victorian practitioners
  2. The Supreme Court of Victoria
  3. Ethical and Responsible Use of Artificial Intelligence – LIV AI Ethics Guidelines
  4. Artificial Intelligence in Victoria’s Courts and Tribunals – Report from the VLRC
  5. Technology at the Court | Coroners Court of Victoria
  6. Artificial Intelligence (AI) in the Law for Legal Practitioners – The Law Library of Victoria
  7. “Supervising AI” – LIJ article by the Legal Practitioners’ Liability Committee
  8. “AI and democracy” – LIJ article by The Honourable Justice Melissa Perry

For the latest insights on Victorian legal developments and to hear directly from leading voices in the profession, subscribe to Cross-Examined on Apple Podcasts, Spotify or visit the Law Institute of Victoria website.

Follow us on LinkedIn for legal insights and episode updates.

Enjoyed the episode? Leave a rating to help other legal practitioners find and benefit from the series.


About This Podcast:

Cross-Examined is a new podcast from the Law Institute of Victoria. Tune in to hear experts discuss hot topics in the law and the changes shaping the legal profession. Regular episodes will cover everything from AI and cyber threats to ethical dilemmas, workplace taboos and practice management insights.

This podcast is recorded on the traditional lands of the Wurundjeri people of the Kulin Nation. The Law Institute of Victoria acknowledges the Traditional Custodians of Country across Australia. We pay our respects to Elders past and present.


Disclaimer:

This podcast is for informational purposes only and is not intended to replace professional legal advice. The views expressed in this podcast do not necessarily reflect the views of the Law Institute of Victoria (LIV). The LIV is not responsible for any losses, damages or liabilities that may arise from the use of this podcast. Listeners should seek independent legal advice for their matters.


Production Information:

  1. Produced by: The Law Institute of Victoria
  2. Producer and audio editor: Garreth Hanley
  3. Music: Garreth Hanley
  4. Copy and show notes: Louise Surette

Connect With Us:

  1. 📧 Email: podcasts@liv.asn.au
  2. 🌐 Website: www.liv.asn.au
  3. 🔗 LinkedIn: www.linkedin.com/company/law-institute-of-victoria
  4. 📱 Apple Podcasts: Cross-Examined on Apple Podcasts
  5. 🎵 Spotify: Cross-Examined on Spotify

Transcripts

Karen Finch:

Welcome to Cross-Examined, a new podcast brought to you by the Law Institute of Victoria. My name is Karen Finch, I'm the Head of Legal Policy and Innovation at the Law Institute of Victoria.

Today we're exploring one of the most pressing questions facing our courts and our profession right now. How do we harness artificial intelligence without losing what makes the judicial system work? This topic is no longer an abstract problem. Practitioners have already brought hallucinated case citations to the bench. Courts are running pilots, judges are grappling with whether it's acceptable to use AI to summarise evidence, recommend sentences or draft reasons. And the pressure to adopt this technology at scale is enormous.

Our guest today is The Honourable Richard Niall, Chief Justice of the Supreme Court of Victoria, who took office in February.

The Chief Justice has appeared before the High Court in human rights matters, including asylum seeker and refugee cases, has received multiple awards for his refugee law advocacy, and has extensive experience in environmental law.

Chief Justice, welcome to Cross-Examined.

Chief Justice Niall:

Thanks very much Karen, pleasure to be with you.

Karen Finch:

We are so happy to have you here, so let's dive into it. Taking a broad view, what's the current state of AI adoption that you're seeing in the justice system, in legal practice and in the courts, and in your view, where does Victoria sit?

Chief Justice Niall:

Yeah, thanks very much Karen. I think in Victoria the level of adoption in the profession is high and growing, and in the courts it's at an early stage. So we've seen a couple of small examples, which we might talk about during our discussion, but that's still very much a work in progress.

In terms of the court system, Victoria, like all of the states and the federal system, I think is currently taking a relatively cautious approach, but certainly in Victoria we are looking positively at ways in which AI might help bring about a more efficient, more just and more expeditious use of court resources. That process is ongoing, and I think it will continue to be an important aspect of our work.

So in a court system, I think you could probably see AI having potential in three different areas. The first one is administrative tasks. Each court, the Supreme Court, the County Court, the Magistrates' Court, the Coroner's Court or VCAT, obviously has a large registry component concerned with receiving cases, processing those cases and making them available to the judicial officer in a coherent way. So there's the registry.

Then there are aspects adjacent to the judicial task that court staff might presently be doing. And then there's the more specific area of the judicial task itself: actually hearing the evidence, identifying the relevant principles, applying them and deciding the cases. So there are various areas in which the courts might adopt AI, with quite different functions and quite different risks and opportunities.

At the moment, we've got some, but little, use of AI on the registry side in any of the courts, but that's certainly somewhere we'll be looking to expand and grow and take the opportunities for that level of management.

In terms of the judicial process of deciding cases, it's too early and it's not currently being used in any mainstream way. But there have been a couple of pilots which are worth identifying and those pilots really illustrate both the reasons why we're looking to AI and the opportunities and risks that it presents.

Karen Finch:

So can you walk us through some of the technologies that are in use or being piloted in those very early stages in the Victorian courts, and what benefits and risks are you seeing, or is it just too early to tell?

Chief Justice Niall:

So, one of the more interesting early pilot schemes has been in the Coroner's Court, and it operates in a way that really reflects the problem the court was seeking to address. Where the Coroner has to investigate a death, the brief to the Coroner will contain a number of statements. Often they won't be contentious, but there'll be a number of statements describing the circumstances surrounding the death or the circumstances of the deceased. Some of these can be very confronting. Often the particular facts are not controversial, but the police will have obtained statements from various people, often describing very traumatic scenes.

Now, the Coroner needs to process those statements, synthesise them and provide a report in relation to the death. And the Coroner's Court has piloted a program where the brief is summarised under specific headings and synthesised into a report. That's not the final word. It's obviously subject to a whole lot of human oversight and decision making and doesn't represent the decision, but it enables large amounts of information, often, as I said, not controversial, to be synthesised and produced in report form. That has a number of advantages. One is that it streamlines and quickens the process, and in a lot of circumstances there will be family members and community members who will be very concerned to have the process completed as quickly as possible. So speed and efficiency is one aspect. The second is that staff within the Coroner's Court are being confronted with very traumatic material, and reducing that exposure, we think, offers a real opportunity to reduce the trauma of working in that very stressful and difficult environment.

So there are two problems, speed and exposure to traumatic material, and we're seeing AI used to try to improve the one and reduce the other. And the reports so far, I think, are that it's been a really successful, interesting and important pilot.

In the Supreme Court, we're currently at the very initial stages of a pilot which will just enable us to assess how AI might be used. We're using some completed cases to try to work out how some of the products might help judges summarise or identify competing contentions, or summarise aspects of the evidence, as an adjunct to or as part of the process of decision making.

And again, the idea there is to make the process more efficient, not to substitute AI for the judge. It's really to augment the process and make it more efficient. And in that respect, in the Supreme Court for example, it's been a constant problem that litigation is too expensive and takes too long. And we haven't really been able to crack that nut of getting significant advances. We've done a lot of work on case management over the years, a lot of work on ensuring that practitioners and parties focus on the critical issues, and that's been really effective. But we still have the continuing problems of expense and time. AI, I think, presents an opportunity to address those two critical issues. But it comes with risks and we've got to be careful.

Another aspect that's worth mentioning is that the Law Library of Victoria, which is a wonderful resource, has developed access to a number of AI tools which are available to those with access to the law library. Those tools are really directed to legal research: a couple of commercially available products, specialist legal databases with AI components, available to users of the library. That's a relatively recent development, but I'm sure the take-up will only increase.

The quality of the AI legal databases is improving. One of the developments is that they've tended to be jurisdiction-specific, so a number of the large legal publishers have tended to start with the United States first, but we'll certainly see a lot more Australian material included within those legal databases, and I think they'll be a useful tool for practitioners and others alike.

Karen Finch:

So turning our minds to the risks. One of the risks our listeners will be well aware of is the recent cases where false, or what we call hallucinated, AI-generated citations have been presented to the bench. And the recent release of the Victorian Law Reform Commission's report into the use of AI in courts and tribunals has flagged this as a significant problem. What do you see as the practical challenges facing the courts as AI becomes more embedded in legal practice and the broader community?

Chief Justice Niall:

Yeah, well I'd like, Karen, at the start to acknowledge the hard work of the VLRC and their very large, significant and comprehensive report. I think that will really help inform the approach of the Victorian courts. The Victorian profession has had a long history of innovation and of adopting technology to try to improve and make more efficient the court system, the justice system and the advising of clients. So our approach, that is the courts' approach, my approach, is consistent with that long history of the profession being ready to embrace innovation responsibly. And the report of the VLRC has really identified some issues and opportunities that we need to address as we look to the adoption of AI in the state.

As our listeners will know, AI accesses or obtains vast amounts of material and then identifies statistically probable outputs given the question or prompt. So it's a large probability generator, and it doesn't, in a human sense, have intuition or experience; it's looking at what's most probable. And so you can get some quite odd results, including cases, that is, case citations, which don't exist, which the program has effectively made up, because the form of the reference will be consistent with practice. It'll be a report from the Victorian Reports, it'll be a case name in the form in which a case would be reported, but it doesn't actually reflect an actual case. And that's why they're called hallucinations.

The fact that there's a hallucinated case in a submission doesn't necessarily mean that there's a problem in the disposition of the case. But it does mean that the judge will be given material on which they can't rely, and it might undermine the confidence that judges have in the submissions they receive.

So that problem is there. It's known. But the answer to it in relation to practitioners, I think, rests with the fact that practitioners are responsible for the submissions they present to the court. They've always been under a duty to ensure that what they tell the court is accurate, reliable and correct. That's not a new obligation which we're now imposing because of the threat of AI; it has always been there. And it'll be a matter of constantly reminding the profession of that obligation. And they discharge that obligation, in my experience, exceptionally well.

Practitioners take their responsibilities to the courts and tribunals in which they appear extremely seriously, and they take great care in the submissions that are presented. But this presents a risk, and it particularly presents a risk because there's no doubt that AI can generate documents very quickly.

And there's a risk that busy practitioners, operating under pressure or in areas they're perhaps not as familiar with, will use AI as a substitute. And that's something that I think the profession is learning is a potential problem. So it's there. It's known. And I think it'll be addressed through professional responsibility. In relation to self-represented litigants, it's also a problem.

The irony of AI and its availability is that there is a risk it'll only be available to some members of the community. But in fact, AI is now very prevalent and generally available. And our experience in the Supreme Court and other courts is that self-represented litigants are using AI to help them navigate the process. And that's understandable. But there's a risk that the hallucinated case won't be picked up by the self-represented litigant. So again, it's important that self-represented litigants are also told about the dangers of AI, to be conscious that there are risks and to be conscious of the dangers of inaccuracy. And one of the recommendations of the VLRC is to make guidelines that will help address that question. But judges are very conscious that you wouldn't expect a self-represented litigant to be across all of the authorities. So that is a problem, but we're conscious of it.

The other problem is a slightly different one, and that is the extent to which AI may affect the evidence. That is deepfakes: documents and images created by AI which look believable and accurate but which are not; they are fake.

We're not sure how, and whether, the systems we have will be capable of identifying deepfakes, but we do know that the quality, and therefore the believability and the plausibility, of deepfakes will only increase as the technology improves. So that will be a continuing problem for the determination of cases, but I’ll come to how that might be dealt with.

And the third area, and it's a slightly different one, is not unique to our jurisdiction: the extent to which AI is in effect conducting unlicensed legal practice. If you ask someone, for a fee, to provide you with legal advice, they've got to be admitted, have a practising certificate and be regulated under the profession. And the regulator ensures that people don't hold themselves out as legal practitioners when they're not so registered and don't have a practising certificate. How then do we translate that rule and that regulatory system to an AI environment where people are asking AI to provide them with legal advice?

And we talked a little bit about Victoria. I've joined a couple of committees, one involving the National State Courts of the United States, and we're also working with some courts, including Singapore. And this issue of unlicensed legal practice is one that the American jurisdictions are looking at closely. So it's not unique to us, but it's a problem. So there are these potential problems.

I've identified three, and no doubt there will be others. So the question is, does that mean we should not be using AI - that these problems are insurmountable, that the benefits are overstated and we should have a really prohibitory approach to AI? I think that approach doesn't reflect the reality of the situation we are currently in and are likely to be in over the short, medium and long term - which is that these systems are going to be increasingly complex, increasingly sophisticated and increasingly available.

So we're dealing, within a legal system, with large amounts of information that needs to be prepared, summarised and analysed, or, in non-litigation practice, with drafting contracts and other transactional documents and the like. We have to see the reality that AI will be part of that landscape. And so we look, in that context, at how we might not only deal with the problems but also harness the benefits. And those benefits, I think, are going to be that lawyers are doing the analysis, doing the high-end human analysis and work that is so critical to the legal profession and to representing clients, and that AI and the tools that are available remain just that: tools to assist the human process of advising clients, appearing in court, and human judges deciding cases.

So, the question is how we do that. And the Victorian Supreme Court was, I think, the first in Australia to have guidelines. It's been something that the court has been very conscious of. And the approach we've taken is not to be prohibitory or too prescriptive, but to use those guidelines to inform the public, the profession and litigants about the risks and also to explain how we might use AI into the future.

Those guidelines were the subject of, I think, favourable consideration by the Law Reform Commission. There were some recommendations as to how we might improve those guidelines, and we're working through that to see which aspects we can improve, because as the landscape changes, we've got to be responsive and we've got to be alive to the risks and opportunities. And one of the things I think is critical in drafting policies is that they have to be agnostic as to the particular technology, and we have to be careful that they're not obsolete barely before they're promulgated.

So when we look at the development of AI, it's obviously not just a legal product. The level of investment in AI across the world is so great that it's difficult to think it will be anything other than an evolving and developing area. So the important thing, for my part, is to develop guidelines and policies which are fit for purpose now, but which will also be fit for purpose into the future.

Karen Finch:

Chief Justice, one of the problems with advanced AI systems is they operate as a kind of black box. Even the experts can struggle to explain exactly how they reach a result. How does this sit with the core judicial values in Victoria of open justice, the giving of reasons and the ability to challenge or appeal a decision?

Chief Justice Niall:

Yeah, that's a really important question that intersects with a number of aspects. Early on, I mentioned that one of the areas where we might use AI, or might look to use AI, is in registry processes and functions. There'll always be room for very highly skilled registry staff, but there may be ways in which AI can help. For example, the Supreme Court has an entirely electronic registry. All documents filed in the court are filed electronically.

We also have rules, as practitioners will know, and it's important that the documents comply with the rules. We have thousands of filings each year, and many, many thousands in the probate jurisdiction.

There may be opportunities for initial assessments as to whether or not documents are compliant with the rules, or whether or not they're in a proper form, to be assisted by, not replaced by, AI.

Now, the critical outcome there is whether the document is accepted for filing or not. The particular algorithm used for that may not matter that much, because the outcome will be subject to human oversight and supervision and the result can be easily tested.

Where we're dealing with how AI may summarise evidence, for example, it's absolutely critical that judges decide cases on the basis of the evidence, not on the basis of summaries of the evidence. We have in Australia a very proud and long tradition of oral trials with evidence given by witnesses. Judges decide cases on the basis of the evidence in accordance with the rules of evidence, and it's the evidence of the witness, not summaries, that the judge has regard to.

But there may be systems, and there are systems we know of, that might summarise evidence as an aide-memoire for the judge or for the practitioners who are preparing and arguing the case. Now, it may not matter in that scenario what the algorithm is or how it works, provided the result is accurate and useful. And I think that might be a very important development.

On the other hand, you might have a case, and we do have cases, in which experts are called to give opinion evidence, and the opinion really depends on the quality of the expertise, the quality of the reasoning and an understanding of what the witness based that opinion on.

In those circumstances, for example where there's a calculation or a prediction of future loss, or an assessment of the load of a building or a bridge, it might be absolutely critical to know what algorithm was used, what reasoning was used, and how the computer program actually dealt with the material.

In that case, black box opacity, that is, a report that is clear on its face but where you don't know how it was arrived at, would be a real problem, because the other side won't know what the basis of the opinion was and the judge will be unable to assess whether it's a cogent opinion that should be accepted or not.

So, in those cases, there are real questions about the black box and about identification of the program and its type.

But we now have, and it's another thing identified by the Victorian Law Reform Commission, a practice note in the criminal division which identifies, in relation to evidence, the obligations to identify the use of AI in those circumstances.

So that will be a really valuable tool, and I think the VLRC recommended that it be extended. So that's another thing we're looking at.

Karen Finch:

Chief Justice, can these nuances be reduced to an algorithm? And if not, how do you see technology supporting the human role of judges and lawyers more broadly?

Chief Justice Niall:

Yeah, I think one of the things that I've been reflecting on throughout this process is trying to understand what's critical to the judicial process, trying to understand what judges do.

I think the community needs and requires that judicial decisions which are really important, which affect rights, which can determine liberty, are made by a human. That is, I don't think the community would be comfortable with these really significant decisions being made any other way.

For example, what sentence should be imposed on someone who's pleaded guilty or been found guilty of an offence? Or how much damages should be awarded to an injured plaintiff? Or whether a particular exercise or particular conduct was negligent?

The community expects that those judgments will be made by a human, will be dealt with in accordance with principle by a human judge. So, when one looks at that sort of proposition, you then have to ask, well, what is it for a judge to make a decision? What's the process they go through to arrive at an outcome?

We know that the selection of evidence and the decision about which arguments to make and which evidence to adduce rest largely with the parties. Judges don't go looking for cases. Cases are presented to them and they've got to decide. So, the choice of evidence belongs to the parties.

But the judge has to listen to or read that evidence, then resolve disputes in the evidence and make factual findings, then understand the law, read the cases, the principles, the textbooks and articles, read the submissions and bring it all together through that process of synthesis. Then they've got to formulate the outcome and explain how they came to that outcome by writing their reasons, so that the community, the affected party and the unsuccessful litigant will know why the judge came to that conclusion. And if there's an appeal, the appeal court will know why the judge came to that conclusion.

So those sequential steps are critical to the judicial task. And it's not enough simply to say we don't want AI to make decisions for judges, that we don't want AI to be a substitute for a judge. Those bold propositions need to grapple with the role of the judge in each of those steps I've identified: hearing the evidence, resolving conflicts, applying the law, coming to a conclusion and providing reasons.

So, when one's in that environment, one's got to be careful: if you substitute any of those steps, so that the judge is not personally responsible for each of them, then it might compromise the whole outcome. So all of our use of AI has to accommodate the judicial role in each of the steps of the judicial process and in the outcome. Then people can have confidence that the outcome is the outcome arrived at by the judge, that those are the reasons her Honour gave, and those are the reasons she had for making the orders.

But that's not to say, and I think this is one of the nuances in the VLRC report, that each of those processes might not be helped by AI. And to step back a bit, all judges on my court, the Supreme Court, have associates. These, as listeners will know, are young lawyers, often early in their careers and, in my experience, very smart, very capable and very able. They have quite a significant role in assisting a judge, and how a judge uses their associate differs from judge to judge.

Some judges might ask an associate to summarise aspects of the evidence. Some might ask for a memorandum dealing with principles of law that might be relevant. Some might ask the associate to proofread judgments.

So, we don't disclose the particular role that an associate performs, but we know, and we have confidence through judicial discipline, that it remains the judge's decision and the judge's reasons. So, I think there is room for AI to assist the associate, and to assist the judge, in some of those tasks.

One of the conundrums of modern legal practice is that technology has vastly increased the amount of evidence that is available. Documents can run into the hundreds of thousands to millions of pages. And I'm not making it up when I say millions.

So that's been a boon in a sense. And we went through that process over the last couple of decades, where the volume of material of potential relevance increased. Discovery of documents in civil litigation became such an unruly beast, and we worked out ways to say, well, we're going to limit discovery, for example.

Now, it seems to me that technology has been part of the environment which led to that problem. And AI may well be part of the solution for managing that vast amount of information without trespassing on the judicial process.

But it may be that, in fact, it'll be the parties who do most of that synthesising in the process, and the judges won't need to do it, because we might get more concise, more focused evidence and submissions. So that might be a different example.

In our experience we've seen AI used by self-represented litigants. I think it can be a real help to them, helping them navigate very complex legal principles. But it can also be a problem, because we're seeing, ironically, vastly longer submissions and vast amounts of material being generated by AI, which is not really conducive to anything. So we've got to work out a way to deal with that. But that's another challenge and another opportunity.

Coming back to the more fundamental issue that you identified, Karen, we've got to see and respect the role of judges and lawyers as the human actors assisting real clients. As long as we see technology as supporting that process, not substituting for it, then I think we should see technology as a real opportunity for us.

And, and this is really important, the community must retain trust and confidence in the system. So, I don't want to get to the position where the community is thinking, well, judges are just plugging into AI. We've got to be constantly alert to that risk. And that's why adoption of technology that's not cautious, not well considered and not principled can really undermine its legitimate use into the future.

But I think we've got a very respected profession, a very professional and highly skilled profession, and a very respected judiciary, and I think we're well capable of navigating those problems. But I would say that, where concerns have been identified that say, look, you have to be cautious with AI, I absolutely agree.

But they're not unique to us. So one of the things that I've really enjoyed over the last little while is looking at some comparative work from America, Singapore, South East Asia and other countries, where you really do realise that all of these courts are grappling with exactly the same problems with the use of AI. But they're also grappling with the other problems of an overburdened justice system that can be expensive and can be very disenfranchising for people.

So, do we have the opportunity of potentially harnessing technology to help in that process? I'm optimistic, but cautiously so. I don't think in doing that, we'll lose sight of the human role of judges and the importance of confidence in the community, in the profession and in the courts. And I think we've got that in Victoria, and I think we need to be open in our thinking.

Karen Finch:

Chief Justice, just looking ahead, what could the impact of AI mean for the way lawyers price and cost legal services?

Chief Justice Niall:

Well, one of my friends who's a partner at a major law firm had been overseas to an AI conference and came back and said that the one thing he'd learnt was that time costing was dead.

So, it'll be interesting to see where that goes. We've had a model of time costing, but there are other means of costing: outcome-based, value-based. One of the things that's important is that, in a sense, the way lawyers price and cost is just a mechanical thing. The important thing is that we recognise that legal advice, and the legal profession, is essential to a properly functioning society. The court system depends on it. And in the area of drafting contracts and transactional documents, commerce depends on it.

So, the profession is valued and valuable, and the skills lawyers have are important, valued and valuable. So, there's no incongruity in talking about how lawyers charge. As Chief Justice, I'm really conscious of the cost of legal advice and constantly working on ways that it might be reduced.

The mechanics of it, are we time costing or some other costing? I think it's a matter for the profession to work through within the ethical and professional framework that they operate in.

But I think AI will produce some significant differences. I think early on there may not be significant productivity gains, because, from my observations at the moment, it'll often be undertaken in parallel with the existing work done by young or experienced lawyers.

So, the productivity gains have, I imagine, I don't know, not yet materialised. But it's likely that they will, and that may affect access to justice. It may make some legal advice more available, and it may mean that different pricing and costing models are available.

But it won't mean that the value of legal advice and the value of the profession will diminish. It's not about replacing lawyers with computers. I don't think that's on the horizon. And I think it'll be really interesting to see how the profession adopts AI.

Experience will tell you that the profession is relatively cautious, and that's a good thing. Very careful, because with AI, for example, and it's a risk that we haven't really talked about, privacy issues are really important and confidentiality issues are really important. And so the use of data, and where it goes, is an issue that those in the profession and the courts obviously have to confront if AI is to be used.

But the reality is that lawyers' clients will be using AI, governments will be using AI, and the profession will also be using AI, I think increasingly, and the courts equally, at some point, I'm hopeful, will be harnessing AI to make the system more efficient, more just and more expeditious, without losing the essential elements of the human court system that we've discussed.

Karen Finch:

Well thank you Chief Justice. I could really talk to you all day about this topic, but I know that we've already taken up enough of your time. We're so very grateful that you could join us and to be our very first guest on Cross-Examined, so thank you.

Chief Justice Niall:

Thanks very much, Karen.

Karen Finch:

And thank you to everyone listening to Cross-Examined. If you're looking for more information, check the show notes for links to AI resources from the Law Institute of Victoria and everything else we've mentioned in today's show. And if you found this episode useful, please share it with colleagues and hit that subscribe button so you don't miss out on future episodes. Until next time, thanks for listening to Cross-Examined.
