In this episode of The Law with AI, host Will Charlesworth sits down with Alasdair Taylor, a solicitor specialising in software, data, and AI, to explore how artificial intelligence is transforming the legal profession.
From the challenges of using AI for legal research and drafting to the risks businesses face when integrating AI into their operations, Alasdair shares practical insights and real-world examples from his work.
Discover how legal frameworks are adapting to AI, including discussions on intellectual property, data privacy, and the black-box nature of AI systems.
Plus, hear about the innovative Serpentine Galleries project, where art, law, and AI intersected to create a groundbreaking interactive experience.
Whether you’re a lawyer, a business leader, or simply curious about AI’s impact on the law, this episode will keep you informed and ahead of the curve. Tune in for expert insights and actionable takeaways!
The information provided in this podcast is for general information purposes only and does not constitute legal advice.
Although Will Charlesworth is a qualified lawyer, the content of this discussion is intended to provide general insights into legal topics and should not be relied upon as specific legal advice applicable to your situation. The content also reflects Will's personal opinions. No solicitor-client relationship is established by your listening to or interacting with this podcast.
You know, there are some really, really difficult issues where there is no clear answer, even within a single jurisdiction within the UK.
And then when you internationalize the questions, you get a range of unclear answers, and it does make it very difficult for businesses to determine the right course of action. You're listening to WithAI.fm.
Will Charlesworth: Hello and welcome to the Law with AI podcast. I'm your host, Will Charlesworth. I'm a solicitor specializing in intellectual property and reputation management.
I'm also a member of the All-Party Parliamentary Group on Artificial Intelligence. This podcast is about breaking down and understanding how artificial intelligence is challenging the world of law, policy and ethics.
Every couple of weeks I'll be looking at important topics such as how AI is impacting on established areas of practice, how it's challenging the law itself on issues such as privacy and intellectual property rights, how it's raising new ethical concerns, and how it's ultimately reshaping the regulatory landscape.
To help me in this task, I'll be having some candid and engaging conversations with some fascinating guests, including fellow lawyers, technologists and policymakers, to gain real insight into what it means not just for the legal profession, but the commercial landscape and society as a whole. As always, this podcast is for general information purposes only and does not constitute legal advice from myself or any of my guests.
These are also the personal opinions of myself and my guests. So whether you're a lawyer or just someone curious about how AI and the law mix, you're in the right place.
So let's jump in to keep you informed, updated and ahead of the game. Today I have the pleasure of being joined by Alasdair Taylor.
Alasdair is a solicitor specializing in information technology, especially software, data and AI, and he's also a partner at Keystone Law. He helps his clients to navigate the increasingly complex nexus between law, business and technology.
Before moving to Keystone, he ran his own law firm. He's also (I don't know how you have time for all of this) the founder of Docular, a software-as-a-service business built on the idea that software can make legal documents extremely modular, providing automated legal documents to other lawyers and to tech businesses. So thank you very much for coming onto the podcast, Alasdair. How are you?
Alasdair Taylor: Thank you for having me, Will. I'm very well indeed.
Will Charlesworth: Excellent, excellent, good. With such an impressive and varied background, you bring a wealth of expertise and some extremely interesting ideas into this area. So I'm very keen to jump in; I have so many questions I want to ask you, and this is such an interesting area. So I'm going to start off with something quite simple.
But it's often one of the first questions that people have when I speak to them. So how did you become interested in AI?
Alasdair Taylor: Okay, well, it's really, I think, just an aspect of a long-standing interest in software. When I was, I don't know, 9 or 10 years old, my mother surprised me with a ZX Spectrum 48K, and I was hooked immediately.
I spent many happy hours as a child writing text adventures and quizzes and fairly simple programs. But over time, that machine eventually broke. There was only one way to fix it, which was to throw it out of a second-floor window; it would sort of come back to life again and work for another few months, but eventually all of that throwing out of the window did for it.
So I then spent a fairly long period after my Spectrum days without a computer in the house. I think maybe if I'd had a computer all that time, I would have become a software engineer rather than a lawyer.
Indeed, after studying law I went on to set up the Docular business, and for a period I dialed back my practice and actually went to university to study computer science.
Will Charlesworth: Oh, wow.
Alasdair Taylor: I dropped out, having discovered that while some of the courses were very interesting, particularly the programming courses, there was quite a lot of material that wasn't really relevant to me as a working lawyer.
All of these experiences with software, though, led me to the view that it's one of the few areas of human endeavor with a real sort of magic, where you can do amazing, surprising things. My daughter would say it's perhaps just an aspect of human laziness: wanting to automate boring tasks.
I think it would have been some years ago now that I read a couple of books about AI, and I became really fascinated by how AI was going to affect the world in general and legal practice in particular. At that time, there weren't really systems that someone with my level of technical expertise could implement into a legal business.
But for personal reasons, I spent a couple of years not really watching technological developments. So I came back to more or less full-time working at the start of this year and discovered that AI has exploded in the meantime, and it's all anyone seems to talk about.
So I've spent a lot of this year catching up: learning about AI systems and advising my clients on how the law affects their plans and, in some cases, the products they've developed without consulting a lawyer.
Will Charlesworth: It is so often the way that innovation drives people forward. And of course, lawyers would always say the best time to speak to a lawyer is at the outset. But at that very early stage, when clients are developing their MVP, their prototype and working product, it can be a bit difficult to speak to lawyers. And how do you use AI in your general practice at the moment?
Alasdair Taylor: Yeah, so I use AI, but I tend not to use it very much for my core legal tasks. I find the AI systems I've used for legal research, which I won't name right now, are not sufficiently accurate for that purpose. They make things up all the time, and the hallucinations can be subtle.
So even if you know the subject, you can read a piece produced by an AI and, if you're not careful, think it's right. But then you realize: actually, this piece of text isn't quite right, and it's missed out this thing. I just haven't found it accurate enough for legal research, with the exception that I will sometimes use AI tools when I don't really know where to start on a topic, just to get a list of things to research by other means.
In relation to legal drafting, I think the tools are a little bit better, but equally, we lawyers tend to have our own way of doing things, and what I haven't had time to do yet is to train an AI on my own drafting. What I really want is an AI that will produce contracts and legal documents the way I would, and that needs training on my work. So, in terms of the technical legal tools, I'd say they're fun to play with.
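(A minimal sketch of what training a drafting model on your own precedents might look like, assuming a Hugging Face-style fine-tuning workflow; the model name, file paths and hyperparameters below are illustrative placeholders, not anything described in the episode.)

```python
from pathlib import Path

from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

MODEL_NAME = "gpt2"  # placeholder; any small causal language model would do

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# Collect the lawyer's own precedents as plain text files.
texts = [p.read_text() for p in Path("my_drafts").glob("*.txt")]
dataset = Dataset.from_dict({"text": texts}).map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

# Fine-tune so that generated drafts echo the lawyer's own style.
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="drafting-model", num_train_epochs=3),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```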
Much more of my use of AI has been outside my work. For example, I like making websites, and one of the eternal issues I have with websites is that I am a terrible artist, and producing decent illustrations is always a pain. So I've had a lot of fun using Stable Diffusion in particular to produce customized images which can then be used on websites.
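(For the curious, a minimal sketch of that kind of local image generation using the diffusers library; the model ID and prompt are illustrative, not details from the episode.)

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a Stable Diffusion checkpoint (illustrative model ID) onto a GPU.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Generate a website illustration from a text prompt and save it.
image = pipe("flat vector illustration of a lawyer reviewing a contract").images[0]
image.save("hero-illustration.png")
```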
I think anyone looking at these things can probably work out where they came from, but it's an enjoyable process creating them anyway. And for general searching I do use Perplexity quite a lot. It's often a better search engine than Google these days, albeit I seem to have a habit of typing Google into the address bar if I'm not thinking and deliberately trying to use something else.
Will Charlesworth: It's interesting how other AI tools are challenging and taking over from Google. I read more and more articles now suggesting that the end of Google as we know it is nigh, particularly as the next generation is using AI seamlessly, not having to think about integrating it; they just pick it up and use it. I can see how it potentially takes over. I suppose it's just the next stage, isn't it?
Alasdair Taylor: Yes, yes.
And I certainly think that if that does happen, then given the process Google has followed of extracting more and more value for itself out of the Internet and its search results, I don't think many people are going to be sorry about it.
There's a sense, though, that all of the things I'm talking about here are generative AI systems, and generative AI has obviously been the most in the headlines over the last few years. But in terms of our day-to-day use of AI, it's very often the less obvious AI systems, the ones showing us what we see on the Internet, analyzing how we behave, or indeed helping determine whether we have a medical condition, where AI will maybe have more impact over the short to medium term.
Will Charlesworth: Yes, it's interesting. For all the articles that prefer "the end of the world is nigh and we will all be destroyed by the machines", there are an equal number talking about medical applications in particular. I know, from speaking with some doctors, that they're already working with AI companies within various hospitals to train models, and the results are quite incredible in terms of data processing and analysis. Of course, there are a lot of things that AI has to learn to do, and that will take varying amounts of time. As you said before, you certainly can't rely on it entirely for important things such as medicine or legal advice.
But probably at this stage, the most common thing I hear is that it's a good starting point. It's a good thing that launches you into a topic or a subject, or expands what you're thinking about, and from there you can delve in, do your own research, and create your own products or your own content.
Now, obviously you're a technology and commercial lawyer. What are you finding with your clients? Is it something conscious that you're aware of, that they're approaching AI, embracing it and asking for specific advice around it? Or is it something that's still not talked about as much? What's it like at the coalface?
Alasdair Taylor: So I think there's a little bit of a split. There are clients whose AI work long predates the recent explosion of interest. I have one client, for example, a software developer, where AI systems have been part of their offering for a long time, and other SaaS companies, one of which implemented a facial recognition system without really taking advice in advance, rather unfortunately in that case. Then we have other clients with projects which have really come off the back of the explosion in interest in AI over the last couple of years, and they tend to be more thoughtful about the legal issues, given all the noise about legal cases associated with the new AI services. There are a number of different sorts of issues that I'm seeing.
There are obviously the data privacy issues, and there are two, or I would even say three, aspects to those. First, there's businesses and individuals providing their personal data, which is then potentially provided to service providers for some use, and potentially even incorporated in some sense into models, in ways that cannot be predicted or necessarily understood very well, and from which it might later be extracted.
Will Charlesworth: The black-box nature of AI: we're not quite sure how it's processing that data, or how the algorithms are breaking it down.
Alasdair Taylor: Yeah. A second issue, which I think is a little bit less visible sometimes but has come up in my work, has been the use of AI to analyze data sets and produce personal data, potentially sensitive or special category personal data, where there was none before.
A live question here is the extent to which, if you have such a data set and you have in some sense access to algorithms that may be able to extract or create personal data from it, you have personal data in your hands at that point, before you've even done the extraction work.
This came up in my work in the context of an Internet of Things business which collected data about people's behavior. There's nothing a human could really do, looking at that data, to derive any special category data, but with an AI it is absolutely possible. So it puts the client, a relatively small business, in a tricky position.
Do they treat all of this data as special category data, and have to deal with, for example, establishing a legal basis of processing, probably explicit consent, for any use of that data? And how can they do that in the case of an AI model? If you're using explicit consent as your Article 9 basis, consent should be your Article 6 basis too, and all of that needs to be withdrawable. Do you retrain your model when someone withdraws a consent that was used in training?
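(A toy sketch of the bookkeeping problem described here, assuming consent is tracked per data subject and per training run; the class and field names are hypothetical. It shows why one withdrawal can cascade into retraining obligations.)

```python
from dataclasses import dataclass

@dataclass
class TrainingRun:
    model_id: str
    subject_ids: set[str]          # data subjects whose records were used
    needs_retraining: bool = False

class ConsentRegistry:
    """Track consents and which trained models relied on them."""

    def __init__(self) -> None:
        self.consented: set[str] = set()
        self.runs: list[TrainingRun] = []

    def grant(self, subject_id: str) -> None:
        self.consented.add(subject_id)

    def record_run(self, model_id: str, subject_ids: set[str]) -> None:
        # Only train on subjects who currently hold valid consent.
        if not subject_ids <= self.consented:
            raise ValueError("missing consent for some subjects")
        self.runs.append(TrainingRun(model_id, subject_ids))

    def withdraw(self, subject_id: str) -> list[str]:
        """Withdraw consent; return the model IDs that now need retraining."""
        self.consented.discard(subject_id)
        stale = [r for r in self.runs if subject_id in r.subject_ids]
        for run in stale:
            run.needs_retraining = True
        return [r.model_id for r in stale]
```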
So, yeah, there are some really, really difficult issues where there is no clear answer even within a single jurisdiction within the UK. And then when you internationalize the questions, you get a range of unclear answers, and it does make it very difficult for businesses to determine the right course of action.
One thing I would add, though, having become used to dealing with the GDPR, and as you will be well aware, Will: there were certainly periods when it was impossible to be sure whether international transfers of certain kinds of data were lawful, or when it appeared that it wasn't possible to make some fairly standard kinds of international transfer in a lawful way. I think technology businesses have perhaps become a little bit used to treating the law as a relatively flexible set of rules, rather than the traditional idea of an absolute set of clear guidelines that we all need to comply with.
Will Charlesworth: We see the lawyer at the outset, we get the stamp of approval and certification, and then we march on.
Yeah, no, that's a really interesting point, and I think clients are definitely best served when they keep that dialogue open with you as well.
But obviously, as a lawyer, that puts an emphasis on you to make sure your clients are aware of updates, because it's an ever-shifting landscape, isn't it? Not just the UK, but Europe as well, and the US, and any other jurisdiction you want to operate in.
Alasdair Taylor: Yeah. And I wonder, in the US, are we going to see the same thing as we've seen with data privacy, whereby the states become the primary actors in the absence of federal action?
Will Charlesworth: Yes, which adds another very interesting dimension to it.
Alasdair Taylor: Yeah. Now, there was one particular project that I mentioned before. My client in this case, which is the Serpentine Galleries, has very kindly said that they're happy for me to talk about it.
Will Charlesworth: Oh, fantastic.
Alasdair Taylor: And it was one of the most interesting pieces of work I've done this year.
This related to a project called The Call, by the artists Holly Herndon and Mat Dryhurst, which I think is still currently being exhibited at the Serpentine Galleries. The project was conceived with the idea of giving agency to contributors to an AI model.
The primary output of the project has been a choral AI that you can prompt to produce choral music: if you go down to the Serpentine Galleries, you can stand in front of a microphone and make noise, and it will sing back at you. This model, or rather these models (I think there are a number of different ones), were trained using recordings of compositions created by Holly and Mat, which were then sung by choirs around the UK. And what the artists wanted was to give the choristers and the choirs some kind of control, or not control exactly, but power, interest, agency in what happens to those recordings downstream.
So AWO, another law firm, were assisting with the data protection aspects of this, and I was assisting with the intellectual property rights, mainly focused around performers' rights. We initially gave some advice on how the intellectual property rights landscape applies to the project generally; needless to say, there were a lot of question marks and relatively few firm conclusions.
Will Charlesworth: Yes.
Alasdair Taylor: And then we produced licensing and contractual documentation to give effect to this project goal. So it's the first time I've been involved in a project whose contracts and licenses are, in some sense, artwork.
So I'm rather pleased by that.
Will Charlesworth: Sounds like a very interesting project.
Alasdair Taylor: Yes, definitely. And I think the documentation from that, if it's not already, is all going to be publicly available for use.
Will Charlesworth: Oh, fantastic.
Alasdair Taylor: In other projects, yes. One aspect of something like that, obviously, is that we needed to adopt a very clear style, and to take all this complexity and mess of the interaction of intellectual property rights, AI training processes, AI outputs and all that sort of thing, and turn it into documents that were really clear, that could be read and understood and hopefully agreed to by non-lawyers without any difficulty.
Will Charlesworth: That's one of the main challenges: turning something which, as a lawyer, you're probably quite comfortable with, because you've been living and breathing the document as you've drafted it, into something easily understandable to a lay person, so that you have their informed consent to it. A very interesting challenge, but very rewarding once you've completed the task.
Alasdair Taylor: Yes. Another aspect of that project was the idea of a data trustee. There's been a lot of academic writing about this idea, because when you contribute a small amount of data or IP to a project, it's not really feasible for you individually to exercise rights in relation to downstream users. But if you have some kind of fiduciary or trustee who can exercise those rights on your behalf, that's at least one way of achieving this kind of agency goal.
So the arrangement was structured very much as a license to the data trustee, allowing the data trustee to sub-license, subject to good faith, consultation and notification obligations. A very interesting project.
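(A toy illustration, not drawn from the project's actual documentation, of the structure described: contributors license works to a trustee, which may sub-license downstream, subject to notification obligations owed back to the contributors. All names are invented.)

```python
from dataclasses import dataclass, field

@dataclass
class Contribution:
    contributor: str             # e.g. a chorister or choir
    work_id: str                 # identifier for the recording

@dataclass
class DataTrustee:
    contributions: list[Contribution] = field(default_factory=list)
    sublicensees: list[str] = field(default_factory=list)

    def accept(self, contribution: Contribution) -> None:
        self.contributions.append(contribution)

    def sublicense(self, licensee: str, purpose: str) -> None:
        """Grant a sub-license, then discharge the notification obligation."""
        self.sublicensees.append(licensee)
        for c in self.contributions:
            self.notify(c.contributor, licensee, purpose)

    @staticmethod
    def notify(contributor: str, licensee: str, purpose: str) -> None:
        # In practice this might be an email or a posting to a project page.
        print(f"Notice to {contributor}: {licensee} licensed for {purpose}")
```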
Will Charlesworth: My goodness. And that's something that's nice and tangible, because you don't always get that as a lawyer: you can work on some projects as part of a much bigger puzzle whose outcome you never actually see, or on something that's very discrete, where there's nothing you can physically see. But with the Serpentine project, that's really exciting: you can go down there and fully interact with it as well.
Alasdair Taylor: Yes. And it definitely also encouraged me: I need to get on with training my own drafting model that's going to write my contracts for me in the style I like.
Will Charlesworth: Well, perhaps that's the next part of your project, then: to work on that. From what you've said, advising in the area of AI, or any kind of technology law, requires a lot of creative thinking in adapting the law to an ever-evolving and, particularly with AI, rapidly changing technology: having to find, effectively, what possibly works best, subject to annoying litigators like myself going to court and having a ruling come out a slightly different way, or perhaps providing clarity, which is what I like to think it does.
So for the non-lawyers, for the technology companies and businesses generally that are looking to adapt and integrate AI more into their operations (and I suspect that everybody has already integrated it to a certain extent, whether they're aware of it or not), what are the types of risks, or the key things, that you'd say people should at least be aware of?
Alasdair Taylor: Okay. You used the term black box, and from a customer perspective that can apply to any SaaS or cloud service: you don't have oversight of what's gone into it. That issue is especially acute with an AI-based system, because even the developer or deployer might not be able to say or predict what the outputs of the system might be. Having said that, it helps to understand as much as possible about the system in question.
How is it trained? How has it been tested, and how has it been validated? What levels of accuracy and inaccuracy can be expected from its outputs? Is the system static, or is it going to change through its life cycle? And if it's changing, how will that affect your understanding of the system?
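(A sketch of how a customer might capture those due-diligence answers in a structured, comparable form; the field names and example values are invented for illustration, not from the episode.)

```python
from dataclasses import dataclass

@dataclass
class AISystemDueDiligence:
    vendor: str
    training_summary: str        # how was the system trained?
    testing_validation: str      # how was it tested and validated?
    expected_accuracy: str       # what accuracy/inaccuracy can be expected?
    is_static: bool              # frozen model, or changing over its life cycle?
    change_policy: str           # if it changes, how are customers informed?

record = AISystemDueDiligence(
    vendor="ExampleVendor Ltd",  # hypothetical
    training_summary="Licensed corpora plus public web text, per vendor",
    testing_validation="Held-out benchmark suite; annual external audit",
    expected_accuracy="~95% on vendor benchmark; untested on our own data",
    is_static=False,
    change_policy="Quarterly model updates with release notes",
)
```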
There's obviously the vendor side of this as well: a vendor needs to understand how the system is going to be used. Just as a customer will never have perfect knowledge of what's gone into the system, a vendor will never have perfect knowledge of how it's going to be used. But if the vendor and the customer can each largely understand the other, that's always going to make coming to a deal a lot easier, because they can then identify what the actual real risks are and agree who is going to be responsible for mitigating them, or, where they can't be mitigated, insuring against them or ultimately accepting liability.
Will Charlesworth: So the key is to ask lots of questions, to get as much information as you can about the system, and almost to map that out. And obviously you can do that with a client if they want some assistance: you can sit down and plot the data flows through the system, the inputs and outputs. And if they're developing a new technology, as you say, take it apart as far as you can say how it works and how it actually processes things: essentially, to take everything down to inputs and outputs.
Alasdair Taylor: Yeah. And in the phase of purchasing new products, it's important that the customer has an understanding of the possibilities: what can be done in terms of data use and what can't. Is the application they're looking for one where they can keep their data entirely siloed, or is it something where the application provider is, to some degree, going to want to use their data, or going to require the use of their data for further training? Clearly, if you're buying a system online without any kind of due diligence and negotiation process, you should assume that your data is perhaps going to be used in ways that you wouldn't necessarily like.
Will Charlesworth: No.
And obviously the obligation on a business to ensure that data is dealt with in accordance with the regulatory obligations on them is extremely important. That's perhaps slightly different from how, outside a commercial business context, people have to a certain extent gotten used to surrendering their personal data through social media, though that's a whole other conversation.
But if a new piece of technology, say a generative AI or a large language model, is presented to you and it makes things more efficient and easier to use, then, from speaking with people outside the legal sphere, they generally tend to be quite happy to give no thought to what they're inputting into that system; as long as what they get out is remotely useful, they're quite happy. And most of the time you're not paying for it with money: you're paying for it by training their system to become more accurate, so that's a trade-off as well. It's perhaps about getting the client, the business, to think in terms of their regulatory obligations and to ensure that they're compliant, quite apart from appealing to consumers by making things as frictionless as possible.
Alasdair Taylor: Yeah. It sometimes seems to be assumed, though, that to train a model with personal data you need, in some sense, to get consent. And of course that's not necessarily the case: you could train on the basis of legitimate interests, or perhaps performance of a contract in some scenarios.
A question is always going to arise, though: if you're training on a legitimate interests basis, what does your legitimate interests assessment look like? How can you balance the potential for data going into a system, data that might somehow still be extracted from that system even if you've put guardrails in place, and in contexts you don't necessarily know in advance, against the rights and freedoms of data subjects? It's difficult for anyone to say you're clearly always going to pass that balancing test.
So it's important, in any system where personal data is used in training, to have a clear allocation of responsibility and liability, and indemnity provisions relating to regulatory actions, and indeed private actions, over the misuse of data in that context.
My suspicion, though, is that because personal data is generally going to come out of models in dribs and drabs rather than wholesale, providers may be taking a view that the real risks there are not as significant as you might think.
Will Charlesworth: No.
Something I've been asking people recently: there's a lot written about AI, of course, and being in the legal sphere, with experience of advising clients on the law around this, is there anything you think is not being addressed or spoken about enough? Something you're very well aware of and perhaps quite surprised isn't being discussed more, or that you just think is really very relevant and needs to be considered further?
Alasdair Taylor: Yeah. Well, as you will be aware, Will, if you look at the case law on the various AI litigation trackers, it is predominantly content providers suing AI companies in relation to training materials, and there are also some claims relating to infringement by AI outputs. There is a huge amount of discussion around IP and training issues, and quite a lot of discussion around outputs too. But my personal hobby horse at the moment is actually the application of intellectual property law to the models themselves, and how they should fit into the intellectual property framework.
Currently, as with other aspects of artificial intelligence systems, the core categories of intellectual property law generally, and copyright law in particular, don't map very well onto what's going on when someone trains or uses an AI model. As a matter of English law, it's not very clear that the AI model itself is protected by anything. There's an argument that copyright might protect an AI model, but in that case you need an author, you need some originality, and you need the model to be an original literary work, which it may well not be, depending on what the courts say.
An alternative approach is to treat models as databases, but because of the way they're organized, although they are a data set of sorts, they don't clearly fall within the definition of a database, so the database right doesn't clearly apply either.
I think, over the past forty years or so of the information technology revolution, we've seen copyright evolve, and it has been extremely successful in taking account of new technologies. My intuition is that it will be less successful with AI, because some of the fundamentals are so different. Take a concept such as originality: what does it mean to talk about a model being original? What does it mean to say that you're copying when you train a model? The transfer from the training material into the model file is not copying as such. Who's the author in these cases? All of these core concepts just don't fit very well.
My feeling is that it would be really nice if we had some new model right, something like the database right: not as powerful as copyright, but giving some sort of protection for the economic investment involved in creating a model. Of course, for that to be successful we would need some international cooperation, and whilst there are obviously people working on these things, the international environment at the moment isn't one where new treaties on intellectual property rights seem imminent.
Will Charlesworth: No, no. So definitely a case of watch this space, but perhaps let's not hold our breath too much.
Alasdair Taylor: Yeah, I mean, given the preponderance of case law in the US, it may be that the world, or at least Europe in this case, ends up following the US courts. There's been a bit of an expectation that it might be the same as with the GDPR, where the US sort of follows begrudgingly behind the EU. I wonder if it'll be the other way around this time, though.
Will Charlesworth: Yes, quite possibly. And thank you very much for that; a very interesting point to finish on. If people want to contact you, how do they go about it? And let us know the details about Docular as well.
Alasdair Taylor: Okay, well, if anyone would like to speak to me about any software, AI or data legal matters, they are very welcome to contact me. My contact details are on the Keystone Law website and also on docular.net; if you fill in the form there, that will come through to me. So yeah, I'm very happy to talk about anything to do with technology and law.
Will Charlesworth: That's fantastic, and thank you very much. I extend my deepest gratitude to you for coming on, sharing your expertise, and giving some practical, real-life examples of how we can adapt and use the law to advise on real-life and very interactive AI projects. So thanks very much, Alasdair, for taking the time to join me on the podcast. I really appreciate it.
Alasdair Taylor: Thanks very much for having me, Will.
Will Charlesworth: Thank you, and thank you everybody for tuning in as well. Don't forget to like and subscribe to the podcast if you haven't already, and I will catch you in the next episode.