Last year was a defining year for technology regulation in the EU, with its Data Act, Apply AI Strategy, and digital omnibus proposal signaling a shift toward a more coherent regulatory framework. What does all this mean for 2026? Tune in for a breakdown of the top six trends in technology regulation from host Deborah Kirk and colleagues Jonathan Stephenson and Alistair Ho of Skadden’s IP and tech transactions team in London. “If there's one unifying message here, it's that 2026 is not about invention – it’s about integration,” Deborah observes. “The clients we see moving fastest are the ones embedding legal thinking into how their products are built, how contracts are structured, and how systems are governed.”
☑️ Alistair Ho | LinkedIn
☑️ Jonathan Stephenson | LinkedIn
☑️ Deborah Kirk | LinkedIn
☑️ Skadden | LinkedIn | X | Facebook
☑️ Subscribe: Apple Podcasts | Spotify | Amazon Music
“SkadBytes” is presented by Skadden, Arps, Slate, Meagher & Flom LLP and Affiliates. This podcast is provided for educational and informational purposes only and is not intended and should not be construed as legal advice. This podcast is considered advertising under applicable state laws.
Welcome to SkadBytes, a podcast from Skadden exploring the latest developments shaping today's rapidly evolving tech landscape. Join host Deborah Kirk and colleagues as they deliver concise insights on the pressing regulatory issues that matter to tech businesses, investors, and industry leaders worldwide.
Deborah Kirk: Hello there and welcome back to SkadBytes. I'm Debs Kirk, and I'm joined again by Jonathan Stephenson and Alistair Ho from our IP and tech transactions team here at Skadden in London. Now, 2025 was a defining year for technology regulation. On the 12th of September 2025, the Data Act came into force, really driving organizations to rethink how they collaborate and compete in a data-driven economy.
Then in October 2025, we saw the launch of the EU's Apply AI Strategy, which shifted the focus from the legislative framework of the EU AI Act, which had entered into force just over a year earlier, over to the practicalities of AI implementation. And in November 2025, the EU introduced the digital omnibus proposal, which signaled a shift toward a more coherent and manageable regulatory framework.
Meanwhile, in the UK, Ofcom transitioned into active enforcement of the Online Safety Act, beginning to hold online platforms accountable for compliance with new safety standards after a period of policy development. 2025 also witnessed a cascade of developments at the intersection of intellectual property, artificial intelligence, and digital infrastructure, with growing recognition of the need to align IP frameworks with technological advancements. And 2026 has already been active on the legislative front with further enforcement under the DSA and the OSA, and a noticeable shift in commercial priorities around infrastructure and IP. So with that backdrop, today we're looking at the tech trends we think will really shape 2026.
Now, if 2025 was the year of regulatory change, 2026 is about what actually lands in practice. So we're expecting to see regulatory regimes finalized and a movement towards implementation across contracts, compliance, infrastructure, product, and governance. And we're also going to see how some of the newer legislation is or isn't enforced in practice. So whether you're a platform scaling across the EU, a financial sponsor conducting diligence on an AI system, or a creative business managing brand risk, we think the following are the six key trends that need to be on your legal and compliance radar in 2026.
So Jonathan, what are we going to focus on today?
Jonathan Stephenson: Yeah, it's 2026, and as Deb says, we've identified six key trends. First, the growing operational weight of overlapping regulation. Second, the role of IP litigation as a stopgap for unresolved AI risks. Third, infrastructure as both a constraint and a differentiator. Fourth, governance as a trust layer. Fifth, leveraging data as a licensable asset. And finally, sixth, mature governance built on existing structures rather than reinventing the wheel. And yes, last year we had five trends, so you can see exactly where this is going.
Deborah Kirk: So 2027, presumably that'll be-
Jonathan Stephenson: Seven trends.
Deborah Kirk: Yeah.
Alistair Ho: Okay. So what's trend one then? The regulatory stack is both growing and, some would say, colliding. The real pressure point this year for organizations isn't implementing a single piece of legislation; it's the combined impact of multiple overlapping regimes. So legal, compliance, data, and product teams are all now contending with competing obligations: some that require transparency and disclosure, others that require restriction or protection, and all of them applying to overlapping systems.
Jonathan Stephenson: Yeah, exactly. Let's break that down into an example. Organizations deploying AI-driven connected devices have to navigate a patchwork of overlapping regulations for a single product. They need to comply with the Data Act's data-sharing rules, the GDPR's data protection requirements, the AI Act's Article 53 transparency and documentation obligations, and the Cyber Resilience Act's mandate for security by design across the product lifecycle, all at once. Then sector-specific laws like the OSA and the DSA may also apply, particularly where AI is used to generate or moderate user content.
If these regimes apply, the same organization could also be subject to mandated age gating, illegal-content filtering, and even senior manager accountability in some cases. There are very few, if any, carve-outs, and compliance with one regime does not exempt an organization from the others. This makes proactive governance and integrated compliance strategies really essential.
Deborah Kirk: Yeah, that's right. And the point is that none of these laws lives in a vacuum. A business may need to demonstrate lawful data disclosure under the Data Act while simultaneously ensuring that no sensitive data escapes under the Cyber Resilience Act, or it may be applying the DUAA and the AI Act to the same system, one focused on human oversight and the other on technical transparency. And I guess the challenge for compliance teams is navigating and harmonizing overlapping legal and regulatory obligations, particularly where product design or third-party dependencies don't map cleanly onto legal silos.
Alistair Ho: Yeah, exactly. So while this is a legal challenge of overlapping legislation, it's also really an operational challenge. We're seeing businesses forced to align DPIAs, vendor disclosures, procurement terms, and internal governance across regimes that arguably were never designed to interlock in this way. And I think that's something the digital omnibus proposal, published in November 2025, tries to address. It's really the Commission's attempt to rationalize these requirements, the reporting requirements and certification burdens across the EU digital rulebook, and to look at the overlaps across the AI Act, Data Act, GDPR, and CRA to see how those obligations can be rationalized.
Jonathan Stephenson: Yeah. And what's interesting is that the proposal really acknowledges the existing fragmentation, a theme that's been politically visible since Mario Draghi's 2024 competitiveness report, and that continues to reflect the lived experience of many businesses trying to deploy AI at scale, or even a single tool. But while it promises simplification, we've seen no real slowdown in incoming digital regulation from the EU, with the upcoming Digital Fairness Act, for example, continuing to signal additional detail in the existing regulatory landscape.
Deborah Kirk: Absolutely right. Certainly no slowdown. A useful framing came from the techUK Vision to Value Conference, which we attended, where a speaker described the "AI sandwich": foundation models at the bottom, enterprise integration in the middle, and user interfaces at the top. The middle layer, infrastructure, compliance, and integration, is really where these regulatory collisions are now most acute. And even with growing regulation at the user-facing layer, the enterprise core remains the most exposed, operationally and from a regulatory perspective.
Alistair Ho: So I guess that means a structural response to the trend, essentially. The response isn't more checklists; it's cross-functional governance that treats these legal obligations not just as legal obligations but as product constraints or requirements, building them into design and procurement from the outset.
Deborah Kirk: That's right. And for anyone looking for practical insights and strategies to achieve this, with a focus on navigating the latest UK and EU tech regulations, platform governance, and cross-border data challenges, a little plug for our March 11, 2026 conference, Protecting Your Business: Tech Regulation From Policy to Practice, which is being held at our London office here at Skadden. So please do join us. Plug over. So listen, trend two, let's move on. IP enforcement is certainly filling the regulatory gaps. We have a lot of unanswered questions from an IP point of view when it comes to AI, where regulation really hasn't been able to keep pace with generative AI, as with so many areas of tech. We're seeing rights holders and litigants turn to existing legal frameworks to fill the gap. Ali, give us an update.
Alistair Ho: Yes, so we're starting to see a variety of examples of rightsholders trying to apply existing concepts such as trademarks and copyright to AI in creative ways. For example, a few celebrities have trademarked pictures and even short clips of themselves to fight against AI clones, and there are ongoing high-profile cases relating to copyright infringement in the context of models trained on protected works. One interesting question those cases have given rise to: if the model itself does not directly store or reproduce the copyrighted training images, but the outputs look very similar, does an output that is statistically derived rather than directly copied infringe copyright itself?
Jonathan Stephenson: That's exactly right. And we're seeing similar trends even at a policy level. The UK IPO's consultation on AI and copyright received over 11,000 responses, with more than 88% of respondents supporting a statutory compulsory licensing regime for training data, effectively treating datasets in the same way we've seen sampled music treated. We're expecting a government report on the consultation in the first half of 2026, we'll see, and it's likely that this will shape the future of licensing frameworks, in the UK at least.
Alistair Ho: And so that's the testing of existing legal frameworks. There are also other legal strategies emerging from rights holders looking to take matters into their own hands and see what they can do to protect their own rights. So in late 2025, Matthew McConaughey registered eight US trademarks, not just for famous catchphrases like, and I'm going to butcher this, "Alright, alright, alright," but also-
Deborah Kirk: Amazing impression, Ali.
Alistair Ho: Thank you. Thank you. Probably concerning. But also for video marks capturing his image, cadence, and posture. The two video marks are him standing on a porch and standing in front of a Christmas tree, filed under classes 9 and 41 and essentially designed to act as a deterrent against AI impersonation. So using trademark rights alongside other traditional protections such as copyright, passing off, and personality rights in the US.
Deborah Kirk: Yeah. And we've also seen real commercial impact. Clients are asking for synthetic training exclusions and likeness carve-outs, especially in entertainment and voice contracts. These aren't always enforceable, but they're increasingly expected, or at least explored, and the absence of such clauses is now raising diligence questions.
Jonathan Stephenson: And we're also seeing the same trend of rights enforcement outside the IP context. In the US, for example, we've seen an HR-related class action accusing a company of unlawfully using its AI-powered recruitment tools, relying not on AI law but on traditional employment discrimination statutes. It's another example of using existing legal doctrines to test emerging technologies.
Alistair Ho: So pinning that down then, the trend is really tactical enforcement: a mix of litigation, licensing, and rights strategies all being used to fill doctrinal gaps with creative litigation and enforcement. Until statutory frameworks catch up, I think these stopgaps will continue to shape how companies manage their legal AI risk.
Jonathan Stephenson: Yeah. Moving on to trend three, a favorite of mine, and that's really around infrastructure shaping legal outcomes. Infrastructure may not be regulation, but in 2026 it's really become a key driver, if it wasn't already, of legal exposure and delivery risk. From chip supply to grid access, physical infrastructure now largely determines where AI systems are developed, deployed, and sold.
Deborah Kirk: That's right. And again, to mention the techUK Vision to Value Conference, one speaker put it pretty starkly: in essence, your model might be compliant, but your data center might not be. The cost of land and energy, planning delays, and regional power constraints are all shaping where and how models are hosted. We've seen clients shift GPU-intensive workloads from the UK to Ireland, the Nordics, or the US simply because of latency and capacity constraints.
Alistair Ho: Those constraints aren't abstract for legal teams, right? Infrastructure is now a contract variable. Clients are asking for emissions data and water usage disclosures, and beyond those, things like fallback rights if host infrastructure fails or regulatory requirements shift. One example we saw: some agreements now include regional retraining rights, allowing customers to retrain models in alternative locations if performance drops or sovereignty risks rise.
Jonathan Stephenson: And globally, this is reflected in a range of high-profile strategic partnerships and large-scale infrastructure investments aimed at advancing AI capabilities. In the UK, government initiatives such as the Isambard supercomputer and the sovereign AI fund show political intent, but grid and planning constraints really do remain significant.
Deborah Kirk: That's right. And one workaround is the AI Growth Zone initiative, a kind of regulatory and infrastructure sandbox. But scale remains the issue. As one industry leader put it, the UK can innovate, it just can't build fast enough. We're seeing investment flow to jurisdictions that offer both capability and speed. So what's the takeaway, Ali?
Alistair Ho: The takeaway here: infrastructure is not just a tech decision, it's also a legal decision, a legal risk allocation problem, if anything, touching contracts, performance guarantees, jurisdictional coverage, and of course long-term liability. So what's trend four? Governance as a trust layer. Governance has moved from theory to practice in 2026. It's about knowing where training data came from and how it's governed, and that's just the licensing baseline.
Jonathan Stephenson: Yeah. And to turn slightly legal for a second, under Article 53 of the AI Act, general-purpose AI providers, or GPAI providers to use the term of art, must publish summaries of their training data, disclose known limitations, and provide instructions for safe use. That obligation applies to models released after the 2nd of August 2025, last year, but enforcement for systemic-risk models won't begin until later this year, in 2026. In the meantime, parties are really settling their own thresholds for what counts as an acceptable disclosure.
Deborah Kirk: There's also an enforcement layer. In January, the European Commission opened a DSA investigation into a major platform following concerns about AI-generated explicit content. And separately, a well-known chatbot tool blocked under-18s from open chat and launched an AI safety lab after facing similar pressure. So provenance is now both an IP and a safety issue, and commercial counterparties are watching very closely.
Alistair Ho: And it's playing out in diligence in M&A. One UK publisher reportedly walked away from a model licensing deal because the supplier couldn't evidence filtering tools or blacklist logs. The GC's message there was clear: if you can't show me the logs, you can't use our content.
Jonathan Stephenson: Right. And we've seen a similar approach from the Dutch DPA, which also stepped in in late 2025, publishing research showing that several major chatbots provided politically biased advice ahead of an election; there's always a geopolitical layer to all of this. Those findings brought provenance, governance, and prompt filtering into the democratic governance conversation. They're not just the usual things we might consider from an IP or platform safety perspective, but wider impacts.
Deborah Kirk: That's right. So the trend is provenance as contract infrastructure. We're seeing dataset registries, input blacklists, opt-out logic, and filtering tools become core parts of supplier obligations, particularly where clients face regulatory or reputational risk for model misuse. Okay, trend five: structured data is the new IP. So listen, as you all know, data has always been valuable, and of late that value has really been understood. But in 2026, the way businesses structure, license, and contract for data has started to resemble IP management. This trend is no longer theoretical.
Jonathan Stephenson: Exactly right, Deb. In the AI transactions we've been involved in, we're seeing contributors and buyers treating training data, fine-tuning data, synthetic variants, and model outputs as really distinct assets, each with their own licensing terms, retraining rules, and jurisdictional carve-outs. In one recent deal, a fallback clause allowed a contributor to claw back enriched embeddings if a model was retrained outside of the agreed region.
Alistair Ho: And to push that further, this logic doesn't just stop there, right? We're seeing telemetry, clickstream, and zero-party data all being licensed in these layered models, each with their own restrictions: field-of-use limits, no-training clauses, and tiered access based on data enrichment. These resemble software patent licensing structures, and the value chain is becoming ever more complex as a result.
Deborah Kirk: And it's not just about new data either. As AI systems get layered onto legacy platforms, data once governed by a single privacy policy or customer agreement is now being split across internal use, training, third-party resale, and inference access. So each pathway may carry different legal risk and different commercial value.
Jonathan Stephenson: So the trend here really is codified data value. Legal teams and business teams are moving away from "is the data protected?" to "how is this data monetized, licensed, and governed?" And that shift really demands new thinking around IP clauses, ownership presumptions and, as Ali mentioned earlier, diligence priorities, especially in the cross-border deals and long-term model arrangements that we're all increasingly part of.
Deborah Kirk: Trend six.
Jonathan Stephenson: Trend six.
Deborah Kirk: Last but not least.
Jonathan Stephenson: Governance is getting embedded. And this final trend really gets to the point of governance maturity. We've moved from drafting AI policies, the triangle we all see all the time, to really operationalizing this topic: embedding those legal checkpoints in product, procurement, and vendor workflows.
Deborah Kirk: And again, to mention the techUK conference, not plugging it, it was just very useful. One speaker said governance is a workflow, not a white paper, and that line really stuck with us. Legal review is happening earlier now, and risk triggers are being built into product specs. So we really are seeing that shift from policy to architecture, right, Ali?
Alistair Ho: Yeah. And Ofcom's regulatory model is a good example of that trend. It's targeting supervision first and reserving enforcement for companies that refuse to engage. It applies internal economic models to assess risk and works directly with the platforms to co-design improvements, and that tone is influencing how businesses structure their own governance, particularly in regulated sectors.
Jonathan Stephenson: Yeah. And there's also regulatory expansion. As we mentioned earlier, the draft Digital Fairness Act would apply transparency and fairness duties to any digital system, including recommendation engines, pricing tools, and non-AI decision systems. It's not really just about frontier AI anymore. Governance now covers any system that could have a material effect on users.
Deborah Kirk: Yep. We've seen clients update their DPIAs to include AI functionality flags. Others are expanding ISO-based model governance across procurement and supplier onboarding. Some are creating internal toolkits for triggering legal review when AI functionality is layered onto legacy systems.
Alistair Ho: Yeah. So the takeaway then is essentially embedded governance. It's not about standing up new frameworks; it's about making existing processes AI-aware, regulator-ready, and legally defensible at scale.
Jonathan Stephenson: Another plug: if you're looking to gain insights into AI governance, we're hosting an AI conference on Wednesday, the 11th of February, at the One Hundred Shoreditch Hotel, on AI on the Horizon: Aligning Regulation and Innovation. You'll see the details to sign up in the description.
Deborah Kirk: Yeah. So two great events coming up. So those are our six trends: regulatory divergence through accumulation; IP enforcement filling legal gaps; infrastructure shaping deployment; provenance as a trust and gating mechanism; data structured like IP; and governance moving from policy to process. And if there's one unifying message here, it's that 2026 is not about invention, it's about integration. The clients we see moving fastest are the ones embedding legal thinking into how their products are built, how contracts are structured, and how systems are governed.
And if you'd like to hear how we're helping clients apply these trends to live deals, governance frameworks, and internal policies, please do join us at those two, we hope fantastic, events. And finally, thank you again for listening to SkadBytes. We'll see you next time.
Voiceover: Thank you for joining us for today's episode of SkadBytes. If you like what you're hearing, be sure to subscribe in your favorite podcast app so you don't miss any future conversations. Additional information about Skadden can be found at skadden.com.