AI and Copyright (Part 1): Legal Controversies and the Boundaries of Authorisation
- Anna Lau
- Apr 28
- 6 min read
The rapid development of AI-generated content (AIGC) technology is sparking a deep debate about intellectual property in global creative industries. From music and literature to the visual arts, generative AI models’ ability to learn creative patterns by analysing data centres’ worth of human-created content is challenging the boundaries of traditional copyright law.
Authors: Anna Lau, Partner, and Ashley Zhu, Paralegal
The pace at which generative AI has advanced in recent times has created a significant gap, not just a lag, between current legal frameworks for protecting human copyright and the new legal challenges the technology brings. Existing copyright laws originate from the eras before the internet boom and the rise of AI, a time when neither lawmakers nor the judiciary could have anticipated that AI would be able to “learn” by analysing vast amounts of human-made creative work.
The pace and balance of legal development vary across jurisdictions, leading to regulatory fragmentation. As highlighted in the UK’s “Copyright and AI” consultation paper, such legal uncertainty has prompted some major AI developers to avoid training their models in the UK and to shift to jurisdictions with clearer or more relaxed rules instead. In Hong Kong, there is still no dedicated statute governing the use of generative AI models. Other jurisdictions have seen lawyers and legislators attempt to navigate the issue, notably in a series of lawsuits in the United States (including Thomson Reuters v. Ross Intelligence and The New York Times v. OpenAI and Microsoft) in which the Courts are testing whether the traditional “fair use” doctrine can still be applied, yet criticism that authors remain insufficiently protected persists.
This article explores several key legal issues surrounding AIGC in two parts. Part 1, this article, examines the legal controversies arising from the unauthorised use of artistic works, with a particular focus on interpretations of the “fair use” exemption. Part 2, to be published next month, argues that intellectual property protection and AI innovation are not inherently incompatible; it considers emerging economic models between tech companies and creators, and ultimately proposes viable paths toward a sustainable and collaborative relationship between AI and human creativity.
The Intense Debate Over “Fair Use”
Ongoing disputes between generative AI developers and artists or media companies over “fair use” exemptions highlight a fundamental tension between innovation and protection. At the heart of the debate lies the interpretation of the fair use (in US law) or fair dealing (in English/Commonwealth law) doctrine. AI developers argue that training models using copyrighted content is a transformative use, analogous to human learning, and does not involve reproducing the original or making it publicly available. Therefore, they believe it should qualify as fair use. On the other hand, creators argue that ingesting entire works, then generating outputs that imitate them, goes far beyond the original intent of fair use, and amounts to unlicensed commercial exploitation.
In the United States and Japan, this disagreement has led to waves of lawsuits against tech giants, raising the broader question of whether the intellectual property rights of creative industries are being systematically undermined.
Under Hong Kong law, it is not always unlawful to use an artist’s work without their consent. There are statutory exceptions under the Copyright Ordinance (Cap. 528), such as fair dealing for the purposes of research, private study, criticism, review, or news reporting, or use in legal proceedings, provided that the use is fair and does not harm the market for the original work. These exceptions are limited in scope and generally do not apply to broad, commercial use.
Whether AI training falls within such exceptions remains highly controversial. It would be premature to exclude it definitively, yet large-scale, unauthorised copying of creative works for profit-driven model training raises serious legal and ethical questions. On the one hand, musicians’ concern that generative AI can “learn” their style, compositions, and identity, and potentially undermine their livelihoods, is understandable. On the other hand, these concerns can largely be attributed to the gap between our legal framework and the rapid evolution of AI tools. Consider this: just two years ago, users needed to study Midjourney guides and prompts, whereas now anyone can generate artwork through ChatGPT-4o’s image module with minimal instruction. Laws rarely evolve within such a short timeframe.
The central legal issue now is whether AI training genuinely qualifies as “transformative use”. While “transformative use” is not an independent legal defence, it is a key factor in determining fair use. If, as some developers argue, a model is simply learning abstract patterns such as pixel relations or word sequences, such use may not constitute copyright infringement. However, emerging evidence that models can memorise and reproduce near-identical content from their training sets may undermine the transformative argument.
Whether AI training genuinely qualifies as 'transformative use' remains highly controversial
If a similar argument is submitted to the Hong Kong Courts, the “fair dealing” defence may not be applicable. Under the current Copyright Ordinance (Cap. 528), the exceptions are limited and do not explicitly cover AI training or other novel AI techniques. Nor can the Hong Kong Courts readily be referred to precedents from other jurisdictions, where legal stances are diverging: the US leans on case law, the EU adopts a rights-holder-centred stance, Singapore is promoting multi-stakeholder dialogue, and the UK is considering an “opt-out” model. The law on this topic remains in flux, and the legal position in Hong Kong will only be settled once a particular approach has been analysed and upheld by the upper Courts (i.e. the High Court and the Court of Appeal).
A Discussion About “Authorisation”: The UK’s Opt-Out Proposal
Since assessing infringement ex post facto is often legally and technically challenging, one alternative is to legislate authorisation requirements upfront, reducing disputes before they occur. In this context, the UK government’s proposal to allow AI firms to train models on copyrighted material unless the copyright holders explicitly “opt out” has been presented by some as a middle path: a compromise between safeguarding creators’ rights and promoting AI development. However, this opt-out mechanism operates on a premise of implied consent: if the author does not object, AI companies may freely use the work.
In contrast, others argue for an opt-in model in which the creator’s explicit consent is required before any use, paying greater respect to the author’s rights. The difference between “opt-in” and “opt-out” lies in who bears the legal burden: whether the onus is on authors to object or on AI developers to seek permission. The opt-out model shifts the duty to authors, which disadvantages those who are unaware that their work has been used by AI firms.
In this respect, we must bear in mind that, generally speaking, copyright arises automatically once an original work is fixed in a tangible form. The creator need not take additional steps (e.g. registration) to obtain copyright protection. As such, creators may be unaware that their work has been used by AI firms until an AI model draws on the copyrighted work to produce similar work that is then brought to the author’s attention. Authors effectively lose the right to object if, being unaware of such use, they cannot submit an objection promptly. An opt-out regime may also impose disproportionate time and costs on AI developers, as it may not be technically feasible to “remove” a work from an AI model that has already been trained on it. The opt-in model therefore appears to be a fairer way to distribute the duties and risks between authors and AI developers.
These legal controversies show that traditional copyright frameworks are ill-equipped to address the scale and complexity of generative AI. As disputes continue to unfold, the next question is whether stricter IP enforcement hinders innovation, or whether there is a more balanced path forward.
More on this in Part 2, coming in next month’s newsletter.
Disclaimer: This publication is general in nature and is not intended to constitute legal advice. You should seek professional advice before taking any action in relation to the matters dealt with in this publication.
For specific advice about your situation, please contact:

Partner
+852 2388 3899