In this episode Katherine Apps KC discusses online safety and online harms with Jessica Zucker, Director in the Online Safety Group at Ofcom, the UK’s communications regulator. They discuss the nature of online harms, the existing legal powers that apply to UK-established video sharing platforms such as TikTok and OnlyFans, and Ofcom’s new powers under the UK’s Online Safety Act 2023. They also discuss key policy and legal considerations, including the spread of misinformation, freedom of expression, proportionality, international convergence and divergence in regulatory standards, and the need for those in tech and regulation to work together.
AI and online harms. Hello, and welcome to the 39 Essex Chambers AI and the Law podcast series.
I'm Katherine Apps KC, and in this episode I'll be discussing AI and online harms with Jessica Zucker, who is a director in Ofcom's Online Safety Group, where she leads teams responsible for implementing the new rules for online safety targeting tech companies operating in the UK. Thank you very much for joining me today, Jessica.
Jessica Zucker:Thank you so much for having me, Katherine. It's a pleasure to be here and to be speaking with you.
Katherine Apps KC:Now, before we talk about the new legislation and Ofcom's powers, I know you've had a really fascinating career before you came to Ofcom. Are you able to tell me how you got to Ofcom?
Jessica Zucker:Absolutely. It was a bit of a strange journey, unexpected to say the least. I actually started my career thinking I wanted to work in national security.
So after uni, I moved to South Korea on a Fulbright scholarship and was very much focused on the sort of traditional East Asian security issues. And while I was there, there was a cyber attack launched by North Korea targeting South Korean infrastructure.
And for me, that was a wake-up call about the direction I thought technology was going to take in national security issues.
And my ears started to perk up a little bit, and I began thinking that if I wanted to work in the space, I needed to understand how technology was going to play into this.
And so I ended up focusing my studies on that intersection of national security issues and emerging tech, which at the time was really focused around cybersecurity.
And my two interests seemed to diverge for some time, until there was a very interesting moment when North Korea attacked Sony Pictures because of a film they were about to release called The Interview, which featured a fictional assassination attempt on Kim Jong Un.
That's kind of where my worlds collided, where you had this North Korea national security issue, cybersecurity becoming a big challenge for different governments. And so ended up wanting to kind of specialize in this space.
I went to Microsoft for my first stint in tech in order to really understand how the technology works in practice. And I spent several years there working on cybersecurity policy, cloud issues, cybercrime issues.
And it was a really fascinating entry point into the tech space. Around that time, issues around misinformation and polarization started to become increasingly important because of the role they played in elections.
And I really felt like that was the problem of our generation, and so decided to make the transition to Meta, formerly known as Facebook, where I was on the team, usually called trust and safety in industry terms, responsible for writing the rules for what's allowed and not allowed on Facebook and Instagram. I was there during some of the most challenging crises and interesting times, from the COVID-19 pandemic to the invasion of Ukraine.
There was a coup in Myanmar, a civil war in Ethiopia.
And my job was really to think about how these global rules applied in all those different contexts in a way that was fair and balancing safety and freedom of expression. And then eventually, as you pointed out, I moved to Ofcom about two and a half years ago.
And the reason for that move was really thinking about how I had gained such specialized industry expertise, having worked in these places, and that regulation was inevitably going to happen.
And I had this sort of expertise that could make regulation better, by making sure it was informed by how the technology actually works, how platforms actually work, and the challenges that they really face. And I really wanted to ultimately increase the scale of impact that I could have.
I could do the best job that I could at one company, albeit a very large one. But Ofcom gave me the opportunity to do this at a totally different scale and influence an entire industry.
And so that's really why I decided to take up this role as Online Safety Policy Director, and why I've been here, really excited about the work that we're doing and excited to talk to you about it.
Katherine Apps KC:Excellent.
Well, this is a fascinating career, going from sort of within tech and national security, then writing the rules for a company, and we'll talk a little bit about how companies can actually police the vast amounts of content that's otherwise out there being generated and so on, and then moving across from there to the regulatory piece. And we're going to start, if it's all right with you, to talk about your work with Ofcom and where we're at with that now.
Lots of our listeners will be familiar with Ofcom and might even have worked for Ofcom in the past. But in case any aren't familiar, can you just summarize what Ofcom is and what powers you have in this area?
Jessica Zucker:At the moment, Ofcom is the UK's independent communications regulator, and we've had the last 20 years' experience regulating different communications sectors, including TV, radio, fixed-line telecoms, mobile and postal services. More recently, we've taken on the position of online safety regulator following the Online Safety Act's passage through Parliament at the end of last year.
And what the Online Safety Act does is make companies that operate user-to-user services, search services or pornographic services legally responsible for keeping people, and especially children, safe on their platforms. And it's Ofcom's job to implement these new rules. It's an incredibly long, complex and quite novel piece of legislation.
It covers a huge range of harms online, and we estimate that it will cover over 100,000 services. And many of these companies are not even headquartered in the UK.
And one thing I think is always helpful to clarify when I'm describing the Online Safety Act is that it's not actually a piece of regulation under which Ofcom mandates to companies which specific pieces of content need to stay up or be taken down.
It's not about investigating individual complaints, as we do in our broadcasting standards regulation, but rather our role is to really tackle the root causes of online harms, whether that's harms that are illegal under UK law or harms that might be legal but harmful for children.
And the way that we're approaching this is by focusing on helping the platforms that we regulate improve the systems and the processes that they put in place to address these issues.
And this is really important because seeking systemic improvements will sort of help us reduce risk at the kind of scale that we're looking at, rather than focusing on individual specific posts or pieces of content.
And with that kind of scoping in mind, and what the Online Safety Act is intended to do, we've tried to boil down our goals to four key things, and that's what we're going to be focused on over the coming years. The first is mandating that platforms put in place stronger safety, governance and risk management systems.
The second is ensuring that these services are designed and operated with safety at the forefront of their thinking. The third is increasing choice for users so they can take more control over their online experiences.
And the fourth is building trust through greater transparency about how these services operate.
Katherine Apps KC:Is that in terms of using information gathering powers, requiring platforms to provide you with information about how they operate?
Jessica Zucker:Yes, it's both providing us with information.
These are the kinds of things that will help us better understand compliance issues, but it's also about us to be able to use our powers to publish information.
So the transparency reporting requirements that we'll be able to put in place in the coming years will, I think, be a really important tool for us, giving people unprecedented levels of information about the services that are operating, and allowing parents to make more informed decisions about which platforms they want their kids to be using.
But it also, I think, will really help empower people with financial interests in services, whether that's advertisers or investors or employees themselves, to have a better understanding of how these platforms are operating and to demand change.
Katherine Apps KC:Am I right in thinking none of the duties under the new act are actually in force yet? They haven't been commenced as yet. You're in the consultation phase with that.
Are you able just to give me a bit of a timeframe in terms of when those duties come in? But also, am I right in thinking you've also got some existing regulatory powers in this space?
Jessica Zucker:Yes, that's right. I'm going to start with the last part of your question.
So you're right, we have been regulating video sharing platforms that are established in the UK for the last several years.
This is a subset of online safety regulation that comes as a result of the UK formerly being part of the EU, and it's part of the UK's implementation of the Audiovisual Media Services Directive (AVMSD). That regulation is sort of a smaller version of the Online Safety Act.
It only applies to certain video sharing services, only a certain set of companies, and the duties in place are a little bit narrower. But you're right, we have been regulating some of the biggest video sharing platforms, including TikTok, Twitch and OnlyFans.
These are some of the platforms that we've been working with and we have already seen the ability to drive change and tangible change on those kinds of platforms.
At the same time, we're in the process of rolling out our implementation of the Online Safety Act, which will eventually supersede the video sharing platform regulation.
And what we're doing is implementing in three sort of phases, and this is based on instruction that we've had from Parliament about prioritizing the most severe types of harm first. And so the first phase of our implementation has been our proposals on illegal harms.
So these are things like child sexual abuse material, terrorism, hate speech and grooming. These are the kinds of things that we're going out the door with first.
The Online Safety Act was passed in October 2023, and on our illegal harms proposals we received thousands and thousands of pages of comments, with many, many different suggestions for updates that we should put into the codes, and lots of evidence that we need to go through.
And so our team right now is in the process of going through all of that information, revising, and doing additional impact assessments, and then we will publish our final set of proposals on illegal harms towards the end of this year or early next year. The second phase of our regulatory implementation is protecting children online.
Again, we published our proposals earlier this spring, and we are looking at different ways to help reduce harm to children, things like suicide and self-injury content and eating disorder content. And then the final phase of our regulatory implementation is the additional set of duties that will apply to categorized services.
So, as I mentioned before, the Online Safety Act applies to hundreds of thousands of services. There's a subset of these services that we refer to as categorized services. They're the ones with the highest reach in the UK.
They will have additional duties. So this will be where our transparency regime applies, as well as other duties relating to things such as freedom of expression and fraudulent advertising.
Katherine Apps KC:Goodness, there's an awful lot on your plate, isn't there? So the categorized duties are a bit different, aren't they?
In that they're sort of hard-edged duties rather than duties to have processes; they're duties not to put out content or not to distribute content. So they're a different sort of legal duty.
Jessica Zucker:So they are actually also related to systems and processes.
So, for example, take the duty on terms of service: the duty that will apply to these categorized services is that they will all need to have terms of service, and that they need to apply them consistently.
So while we can't tell the platforms what goes into those terms of service where the content isn't illegal, they will be able to make those choices themselves, and then we need to hold them accountable to ensure that they're applying those terms consistently.
Katherine Apps KC:It must be interesting having been someone who's been setting those terms of service in a previous life, it must be helpful to have had that practical experience.
And from the perspective of the platform itself, there's been a lot said about whether artificial intelligence can automate some of the processes that they have for detecting some of this harmful content. But something like freedom of expression includes within it, from a legal perspective, a proportionality test that the law requires to be applied: there has to be a legitimate aim for a restriction on freedom of expression, the measure has to be appropriate for that aim, and it must go no further than is necessary.
Is that, in your view, something that can really be automated, or is it the nature of the law that you can't really fully automate that sort of protection?
Jessica Zucker:I think it really depends on the type of harm or online safety issue when it comes to freedom of expression, I think it's really important to be considering this in the context of the severity of the harm.
So when I used to work at Meta, we used to think about issues like what threshold we should set our classifiers at and what precision level is appropriate. And often the kinds of decisions that we made then were about how you would weigh freedom of expression against a certain type of harm.
So for something like child sexual abuse or grooming, you might want to leave very little room for freedom of expression there, because what expression would be legitimate in that kind of situation?
But for things where there's a little bit more nuance, like misinformation, you can understand that there might be a fine line between somebody who intends to spread something false, somebody who's using something false in a way that's satirical, and somebody who's sharing something false unintentionally. And so that's where you might want to bring in a little bit more nuance and allow for more expression.
You can set that through quantitative numbers, really, by saying: this is the percent confidence that we want our classifier to have before it makes an automated decision without a human. So that's one example. But I also think that AI is a lot better in some areas than it is in others.
So when it comes to identifying things like nudity, a classifier that's using AI is probably going to be a lot more accurate, because it's a little bit more black and white than something like trying to moderate misinformation or speech issues, where there's more nuance in the way that people communicate and talk.
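To make the threshold idea Jessica describes concrete, here is a minimal, purely illustrative sketch. It is not Ofcom or Meta code, and all names, categories and numbers are hypothetical assumptions. It shows how a platform might route a classifier's confidence score either to an automated action or to human review, with less room for error on severe harms and more caution on nuanced ones like misinformation.

```python
# Illustrative only: hypothetical thresholds showing how a platform might
# trade off automation against human review, per category of harm.
from dataclasses import dataclass

# More severe harms get a lower auto-action threshold (less legitimate
# expression is at stake); nuanced harms require higher confidence before
# any automated decision is taken without a human in the loop.
AUTO_ACTION_THRESHOLDS = {
    "csam": 0.80,            # act quickly; very little legitimate expression at stake
    "terrorism": 0.90,
    "misinformation": 0.99,  # nuanced (satire, honest error), so be cautious
}

@dataclass
class Decision:
    action: str   # "remove", "human_review", or "no_action"
    reason: str

def route(harm_type: str, classifier_confidence: float) -> Decision:
    """Route a single piece of content based on classifier confidence."""
    threshold = AUTO_ACTION_THRESHOLDS.get(harm_type, 0.99)
    if classifier_confidence >= threshold:
        return Decision("remove", f"auto: confidence {classifier_confidence:.2f} >= {threshold}")
    if classifier_confidence >= 0.5:
        return Decision("human_review", "uncertain: send to a human moderator")
    return Decision("no_action", "below review threshold")

# Example: a borderline misinformation score goes to a human, not auto-removal.
print(route("misinformation", 0.92))
```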
Katherine Apps KC:Can I ask you now a bit more about those procedural duties that platforms will have under the Online Safety Act, and what you mean in practice by regulating systems and processes?
Jessica Zucker:Systems and processes. It's a bit of a wonky term, I can definitely understand that.
But what we're thinking about here relates to the duties that platforms have under the Act, which are to prevent the spread of illegal content and protect children from content that might be harmful to them.
The onus is actually on the companies, rather than on the regulators, to say what specific safety measures they need to take, given the risks they face.
So what we're doing is requiring them to carry out risk assessments, and we're then writing corresponding codes of practice and guidance that can aid platforms in deciding what measures they should be taking.
But the idea really is here that platforms need to understand the risks that are unique to their platforms and then make decisions about what they can do to mitigate them.
So the kinds of measures that I'm talking about that we would consider a system or a process are things like having an easy and transparent way for users to be able to report harmful content that can then be reviewed by the platform.
They might have different ways to moderate content, such as blocking accounts that might be associated with proscribed terrorist organizations in the UK.
When it comes to protecting children online, some examples might include giving children more practical tools to be able to control their experiences, such as being able to decline group invitations, or being able to block or to mute user accounts or disable comments on their own posts. So these are the kinds of examples of a system, a process that we think can really help improve user safety online.
Katherine Apps KC:And one of the issues that presumably you might face is that lots of these platforms don't just operate in the UK, they operate globally.
And even if they are established in one or two individual countries, there will be people engaging with them, users who are present worldwide.
How do you go about assessing what your territorial jurisdiction, your territorial reach, is, and how do you deal with the fact that this is a truly international issue?
Jessica Zucker:It is certainly a challenge. And I know that this is really top of mind for platforms.
I attend a lot of different industry conferences and this is one of the top issues that is always discussed. I think platforms are really worried about divergent compliance challenges or where they may have to make changes in one country but not in another.
And we know that this is going to be a really important part of us being able to promote compliance. So it's in our best interest to see where we can find overlap and try to decrease divergence.
But the Online Safety Act is UK law, so that is the limitation. We're here to protect users on platforms that operate in the UK.
So if a platform doesn't operate in the UK or provide services to UK users, then there's not much that we can do. But we have spent a lot of time trying to think about this issue of international divergence of regulatory approaches.
So some of the things that we've done: we've actually started an organization called the Global Online Safety Regulators Network, which is one of the only dedicated global spaces for us to coordinate amongst like-minded regulators.
So, other regulators who are approaching this by looking at systems and processes, rather than operating really strict takedown regimes, which is another approach that some countries have taken when it comes to online safety.
But we are ultimately trying to seek alignment of approaches where we can, and we really think that will help, you know, improve compliance for companies as well.
Katherine Apps KC:So, Jessica, if I could ask you to just sum up in one or two sentences Ofcom's approach to the international question and the internationalization of this technology, what would you say that is?
Jessica Zucker:So, our responsibility is to ensure the online safety of UK citizens and consumers. And if we need to go further and faster than what other regulators are doing, we won't hesitate to do so.
And so our international team's guiding philosophy, which I think is quite nice, is align where possible and diverge where necessary.
Katherine Apps KC: We saw earlier this month, in August, riots across the UK in which online content played a part, and there have been calls for the legislation to change. Now, obviously, you're the regulator, you don't legislate, but is there anything that you've been able to do practically, in terms of what has recently been going on in the UK, using your existing powers? And is there anything that you'd be able to say about that call for legislative change?
Jessica Zucker:It was certainly really harrowing to watch those events unfold, both in the news as well as in our role as regulator. And I think it has provided us with a really interesting opportunity to think about how the Online Safety act is implemented in practice.
And one of the key roles that we can play in these kinds of contexts is through using our supervisory relationships with platforms.
So supervision is an approach taken by regulators in other regulated sectors, where the regulator works closely with regulated services to try to better understand how each service works, promote compliance, and assess issues of compliance. And we've had a supervisory team stood up for a number of years now.
And so as soon as those events unfolded, we ended up engaging with many of the biggest services where we saw some of these issues playing out to better understand what they were doing to address the risks that they were facing.
On the back of that, we published an open letter to online service providers that operate in the UK, basically raising awareness about the increased risk that their platforms might be used to stir up hatred, provoke violence or commit offenses in the context of those recent riots.
And the goal of this letter, and of the work that we were doing in our supervisory conversations with them, was to remind them of the responsibilities that they will have once these kinds of codes are in force, and to remind them that there's nothing stopping them from starting now: they don't have to wait for the duties on illegal harms to commence before they start putting in place these changes.
And so we really hope that they will continue to take proactive steps and continue to engage with us as the months go on through our implementation phases.
Katherine Apps KC:Can I ask you a question about what sometimes people call information asymmetry?
That the platforms and the tech companies developing the software often know far more about what it is and what it does than users and potentially regulators. What information do you actually have access to, and, if you're given, you know, lots and lots of code, what in reality can you really do with that?
Can you talk to me about how does Ofcom go about navigating that issue of information asymmetry both in the social media landscape and in relation to AI specifically?
Jessica Zucker:Information asymmetry is going to continue to be a problem, I think, for many industries that are regulated, where often the technology is advancing quite quickly and onus is really on the regulator to make sure that they have the right expertise in how to do the job. And I think that's something that Ofcom's acutely aware of and has taken real tangible steps to try to address.
So some of the things that we've done is we've had a real focus on making sure that we're hiring the people with the right expertise in the space.
So bringing in more people from tech like myself, but we also have people from Twitch and Google and Salesforce, making sure that we have a broad range of expertise, as well as people who have been in other regulated industries or have come from places like the FCA or the CMA or the ICO, who have a lot of regulatory experience and can help us build up our practice. We've also been investing in building out our own tech tools.
We launched a tech hub in Manchester, which we're hoping will really deepen our expertise and our ability to understand and test different technologies. It's also about showing up to the spaces that have traditionally been mostly people from tech.
So when I first joined Ofcom, I went to one of the first trust and safety conferences in San Francisco. Ofcom was the only regulator that was there. And this was a global conference.
And so being in the room for these kinds of conversations and hearing industry talk to each other is really important for us to learn.
And then again, our supervisory relationships are another place we can use to learn, and we do lots of teach-ins with the various companies that we're working with to better understand the technology.
So it's something that we're very aware of that is a challenge and one that we are taking really active steps to try to make sure that we're mitigating, to ensure that our approach to regulation is reflective of where industry is.
Katherine Apps KC:Can I ask you a question which we've been asking of all of our guests in this podcast series? I'm not asking you on behalf of Ofcom; I'm asking you, ideally, as yourself, if you're happy to answer it. It's a scale question.
On a scale of 0 to 10, from pessimistic to optimistic, how do you feel about AI and where it's going to go in the future? So zero would be the most pessimistic about its impact on society; nine or ten would be the most optimistic about it.
Are you happy to put yourself on that scale? And if not, are you able to tell me where you fit in terms of your feelings about the technology?
Jessica Zucker:With any kind of technological development,
I think it's a bit of a mixed bag, so I'm not sure that I can necessarily give you a number because it feels like there's so many opportunities as well as challenges on both sides. So maybe that puts me a little bit in the middle. But I think with AI there's been so many really interesting developments.
It's really truly democratized information.
You know, I find myself using it so much on a day to day basis to help me with refining my writing or helping me finesse or boil down news articles, to help me digest big reports, things like this.
And I also think that it's having a really positive impact in helping scale some of the online safety protections that we've talked about, whether that's through improving classifier accuracy or automating content moderation, things like that. But with every kind of major technological development, there's also the risk and the downside.
And we've already seen a lot of those risks, particularly with things like deep fakes and the way that those have been used to really harm people. So I definitely think that you can see both the optimism and the pessimism for this kind of technology.
But I will say that one thing that I am very optimistic about, which is quite different than I would have said with previous issues like social media first getting developed, is that we are alive to the challenges this time.
Everyone across industry, whether it's regulators, nonprofits or researchers, governments, the tech companies themselves, I think everyone is really alive to thinking about the importance of AI safety in a way that I never saw being done when social media was first developed, and that gives me some hope and a sense of optimism.
Katherine Apps KC:Thank you so much Jessica. You have been listening to Jessica Zucker, interviewed by me, Katherine Apps, on an episode of AI and the Law, a 39 Essex Chambers podcast series.
For further episodes, please go to 39essex.com or to all major streaming platforms and podcast providers. And if you have any further thoughts or ideas for the series, please use the contact details on the 39essex.com website.
Thanks again Jessica, and thank you for listening.