As generative AI continues to scale, 2026 is shaping up to be a pivotal year for how intellectual property law governs this technology. While legislative frameworks such as the EU AI Act will begin to take effect, the most consequential developments for creators, rights-holders and businesses deploying AI are likely to emerge outside regulation.
The real battleground will be transparency around training data and IP use, and the most influential decisions will increasingly come from courts rather than lawmakers.
For businesses, artists, publishers, musicians and brand owners, the coming year will be defined less by new rules on paper and more by how existing IP laws are interpreted, enforced and tested in practice.
Transparency becomes contested rather than resolved
Transparency is often presented as a solution to many of the issues raised by AI tools, particularly around copyright and training data. In 2026, it is instead likely to become one of the most contested areas of AI regulation from an IP perspective.
As the EU AI Act’s disclosure obligations begin to apply, AI developers will be required to provide more information about the datasets used to train their models. However, these disclosures are expected to remain high-level, focusing on broad data categories rather than identifying specific works. From a regulatory standpoint, this represents progress. For creators and rights-holders, it will likely be seen as insufficient.
The main concern is not simply whether a work may have been used, but whether it can be traced, attributed and controlled. High-level transparency does little to answer questions about provenance, consent or compensation for rights-holders. Without visibility into which works were used and on what basis, enforcement remains difficult even where IP rights technically exist.
This gap between transparency and accountability is likely to widen this year. As businesses deploy AI systems more widely, they will face tougher questions not only from regulators, but also from clients, partners and the public about data ethics and IP governance. Transparency will no longer be treated as a narrow compliance exercise, but as a reputational issue and a legal risk.
This means pressure will continue to build for stricter legal or industry standards that go beyond broad disclosures. Creators are increasingly calling for meaningful opt-out mechanisms, attribution requirements and licensing models that offer real control, rather than relying solely on after-the-fact litigation.
Litigation expands beyond images and into brands, music and text
This year will see a marked increase in litigation testing how existing IP laws apply to AI. Last year’s legal battles focused heavily on image generation, but that focus is widening. Music, literature, branding and trademarks are becoming key flashpoints as rights-holders seek to understand how protected works and brand assets can be used in AI training and content generation.
These disputes highlight the difficulty of applying traditional copyright and trademark frameworks to global AI systems trained on vast volumes of material that generate outputs through statistical prediction rather than direct copying. Questions such as whether training itself constitutes infringement, how national or territorial copyright applies to globally trained models, and whether AI outputs can infringe trademarks or personality rights are now firmly before the courts.
Recent judgments have reflected this complexity. Courts have been reluctant to treat an AI model’s internal weights as infringing material in themselves, but they still recognise that outputs can violate trademark or other IP rights where distinctive elements are reproduced. For rights-holders, claims will increasingly hinge on demonstrating a clear link between protected works, training practices and observable model behaviour, as well as establishing where training occurred and which laws apply.
For AI developers, these cases underline the importance of proactive dataset governance. Documenting sources, implementing filtering mechanisms and excluding protected material where possible are no longer optional risk-management steps.
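To make the filtering point concrete, here is a minimal sketch of what a licence allow-list step in a training-data pipeline might look like. The metadata fields (`licence`, `source_url`) and licence labels are illustrative assumptions rather than a standard schema, and nothing here reflects any particular developer’s actual practice; real pipelines depend on far richer provenance records and legal review.

```python
# Hypothetical illustration: filter training records by licence metadata.
# Field names and licence labels are assumptions for this sketch,
# not an industry-standard schema.

ALLOWED_LICENCES = {"cc0", "cc-by", "public-domain"}

def filter_dataset(records):
    """Keep records whose licence is on the allow-list; log the rest
    so exclusions can be documented for audit purposes."""
    kept, excluded = [], []
    for record in records:
        licence = (record.get("licence") or "").lower()
        if licence in ALLOWED_LICENCES:
            kept.append(record)
        else:
            excluded.append(record.get("source_url", "unknown"))
    return kept, excluded

# Toy usage: two records, one permissively licensed, one not.
records = [
    {"text": "...", "licence": "CC-BY", "source_url": "https://example.org/a"},
    {"text": "...", "licence": "all-rights-reserved", "source_url": "https://example.org/b"},
]
kept, excluded = filter_dataset(records)
print(f"kept {len(kept)} record(s); excluded sources for audit: {excluded}")
```

The audit log is the documentation step described above: being able to show not only what was included in training data, but what was deliberately excluded and on what basis.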
Case law fills the gaps that legislation cannot
One of the defining features of 2026 will be the extent to which case law, rather than statutes, shapes AI governance in intellectual property. Legislative processes are slow, and even comprehensive frameworks struggle to keep pace with the rapid evolution of generative AI. Courts, by contrast, are responding to real disputes between creators, rights-holders and AI developers, in real time.
Judicial decisions will increasingly provide the most practical guidance on how AI systems should be built and deployed within existing IP boundaries. Businesses are therefore likely to look to emerging judgments for clarity on acceptable practices, rather than waiting for further regulatory reform. This is particularly evident in copyright, where rules developed for human creation, identifiable copies and territorial exploitation are being stretched by global AI training and automated generation.
At the same time, outcomes are unlikely to be consistent. Different jurisdictions are prioritising different factors, from consent and unauthorised use to outputs and market impact, creating uncertainty for all parties.
Jurisdictional fragmentation and enforcement challenges
As AI systems are developed and deployed globally, separating where a model is trained from where its outputs are consumed has become a critical enforcement challenge for rights-holders. Companies can train models in jurisdictions with more permissive IP rules while offering services around the world.
In 2026, this form of ‘forum shopping’ is likely to remain a feature of AI litigation. Until there is greater international alignment, creators and brands will continue to face the burden of protecting their rights across multiple legal systems, often at great cost.
This enforcement gap is as important as the legal rules themselves. Many disputes never reach court, not because rights don’t exist, but because the cost, complexity and evidential hurdles are too high. When misuse is identified too late, practical solutions may be limited. Without more effective monitoring tools and clearer opt-out mechanisms, confidence in the IP system risks being undermined.
Preparing for a more contested AI and IP landscape
Looking ahead, transparency alone will not resolve the tensions between AI development and intellectual property protection. High-level disclosures are a starting point, not an end point. Likewise, legislation will set the framework, but courts will increasingly define how IP rules are applied in practice.
For businesses deploying AI, this means paying close attention to emerging case law, not just regulatory announcements. How data is sourced, documented and governed will be scrutinised not only in courtrooms, but also in commercial relationships.
For creators and rights-holders, litigation remains one key tool, but it sits alongside licensing strategies, contractual clarity and participation in industry standards. Explicitly defining whether and how works can be used for AI training, generation or commercial deployment is becoming a necessity. At the same time, AI technology can create opportunities for creators, and we’re starting to see the first partnerships emerge between AI developers and major players in the creative industries.
Ultimately, 2026 is unlikely to deliver clear answers regarding AI-related IP enforcement. Instead, it will mark a shift towards a more contested, case-driven phase of AI and intellectual property governance, where transparency, accountability and legal interpretation collide.