On December 8, 2025, the Department for Promotion of Industry and Internal Trade (DPIIT) published its Working Paper on Generative AI and Copyright, marking India’s first serious attempt to construct a regulatory framework for AI training on copyrighted works.
While the paper seeks to balance innovation with creator interests through a proposed hybrid licensing model, it leaves several foundational legal and policy questions insufficiently addressed. This article examines the gaps in the proposed hybrid model and offers recommendations to address them.
The hybrid model proposed in the working paper raises concerns about the exclusive rights granted to copyright owners under Section 14 of the Copyright Act, 1957. The balance between public access and creators' interests that animates statutory and compulsory licensing is inapplicable where licences are granted to AI companies, which operate for commercial gain. The proposed approach therefore departs from the rationale underlying statutory and compulsory licensing regimes, which are traditionally justified on limited public interest grounds.
AI systems do not own the content they train on; they merely mine data from sources across the internet. Copyright owners may exploit their works themselves or license others to exploit them for consideration, whether in the form of royalties or a lump-sum payment. Since neither the AI platforms nor the proposed hybrid model respects this scheme, both are in violation of the rights of the copyright owner.
A substantive lacuna exists in the current Indian copyright regime concerning royalty entitlements across work categories. Section 18 of the Copyright Act establishes royalty distribution mechanisms exclusively for authors of literary and musical works incorporated within cinematograph films and sound recordings, as well as for publishers of such incorporated works.
There exists no regime for royalty allocation to authors of artistic works, including photographs, portraits and paintings, or to authors of literary works other than those incorporated in cinematograph films and sound recordings, even though such works constitute most AI training data and output.
The proviso to Section 18 cannot be construed as a right in itself, as the legislative intent appears to merely replicate an existing framework rather than create a standalone entitlement. This approach is inherently flawed, particularly because the Act does not define “royalty,” thereby rendering any supposed entitlement vague. In the absence of a clearly defined right in the parent Act, the mere existence of tariff mechanisms cannot give rise to a legally enforceable right to royalty. While royalty is discussed as a component of exploitation, the statute fails to confer a corresponding substantive right to royalty per se.
The proposed regime clearly overlooks the potential of AI applications infringing the personality rights of celebrities. There are numerous instances of celebrities such as Kamal Hassan, Anil Kapoor, Amitabh Bachchan, Salman Khan, Aishwarya Rai and Shilpa Shetty moving court for injunctions against the misuse of their personality rights by AI platforms. The absence of any acknowledgement of, or mechanism to address, such infringements in the proposed model is likely to invite litigation.
Further, emerging developments in generative AI reveal a new frontier in personality rights violation: the creation of AI-generated personas trained, in whole or part, on unauthorised samples of real performers’ likenesses, voices and distinctive behavioral characteristics.
Particle6 Studios created Tilly Norwood, an “AI actress”, to star in films. Interestingly, Scottish actress Briony Monroe has alleged that her distinctive performance mannerisms, facial characteristics and voice were incorporated into Tilly Norwood’s training data without her consent or compensation. This scenario brings into focus unresolved legal issues: whether training AI personas on identifiable performers constitutes infringement; what remedies and enforceable rights performers may have; whether existing royalty frameworks recognise personality and performance data used in AI training; how compensation should be assessed and distributed when AI systems learn from human performance attributes; and whether control over performers’ economic and moral rights can lawfully vest in the AI developer or platform rather than the performer under prevailing legal regimes. The regime's failure to integrate safeguards against these AI-induced encroachments not only perpetuates judicial fragmentation, but also erodes performer protections in an era where synthetic replicas threaten livelihoods and dignity worldwide.
At the heart of copyright protection lies a foundational requirement: that a protectable “work” must emanate from an identifiable “author.” Under the existing law, authorship is a statutory threshold that determines both subsistence of copyright and the vesting of ownership. In the absence of authorship, the rights conferred as a result of copyright protection cease to exist. Consequently, any discourse on granting authorship or ownership to AI systems or to purely AI-generated outputs remains legally untenable within the current Indian framework.
The Copyright Act identifies the author with reference to “the person who causes the work to be created.” Until an entity satisfies the statutory meaning of “author,” no claim of ownership may arise. This exposes a critical gap when applied to AI-generated outputs. AI systems do not presently fall within the definition of an “author” under Indian law, nor does the working paper propose any provision that attributes authorship of AI-generated works to developers or deployers. Until authorship itself is statutorily encapsulated in the Act, the question of ownership over such AI-generated content does not meaningfully arise.
In order to align the draft AI Act with the existing copyright regime, it is essential that the concepts of “authorship” and “ownership” be expressly addressed in relation to AI-generated works.
If AI-generated outputs are excluded from the ambit of computer programmes under Section 2(ffc) of the Copyright Act, they cannot be accommodated within the existing statutory scheme governing authorship. In such circumstances, it becomes imperative for the draft to clearly define “authorship” and “ownership” with respect to AI-generated works. In the absence of such clarification, the proposed framework risks creating a regulatory vacuum, undermining both legal certainty and the effective protection of rights.
Ultimately, any future legislative intervention addressing AI and copyright must begin at the threshold: the definition of authorship itself. A new regime that seeks to regulate AI-generated works without first defining AI authorship would give rise to legal uncertainty. In the absence of such clarity, neither AI developers nor deployers can meaningfully assert rights, nor can courts consistently adjudicate disputes arising from AI-generated content.
A crucial issue is who should be liable for infringement of a copyrighted work: the prompt giver or the AI platform. Three parties could potentially be held liable:
(i) The author of the computer programme who builds the AI system, as covered under Section 2(d);
(ii) The AI platform itself, as a separate entity, for not only illegally training on copyrighted material but also regurgitating copyrighted content;
(iii) The user who types in a prompt, causing the AI system to generate infringing material.
Certain AI platforms auto-publish the content they generate, in which case the author of the computer programme should be held liable. However, if the user publishes the infringing content, the user should be held liable. Either way, storage of copyrighted material infringes a copyright owner’s exclusive right under Section 14(a)(i).
There are also unaddressed jurisdictional issues in the proposed model. Section 1(2) of the Copyright Act, 1957 extends the Act to the whole of India, but the working paper does not grapple with the practical problem of enforcing these rights against AI companies who are not incorporated within the territorial jurisdiction of India. In a situation where training, model hosting and primary decision‑making all occur outside India, it is unclear on what jurisdictional basis Indian courts will exercise authority over such entities, or how orders for disclosure, royalties or injunctive relief will be implemented in practice.
Opt-in licensing model with transparent consent mechanisms
Rather than a blanket mandatory license, the framework should adopt an opt-in mechanism that preserves creator agency while reducing transaction costs. This model aligns with technological consent standards (for example, web cookie consent).
Proposed mechanism:
(i) Copyright registry creation: The proposed Copyright Royalties Collective for AI Training (CRCAT) should maintain a digitised registry of works available for AI training, organised by:
– Work category (literary, artistic, musical, cinematographic, computer programme)
– Jurisdiction of copyright owner
– Copyright holder contact information
(ii) Creator participation: Copyright owners register works they permit for AI training, specifying:
– Permitted use modalities (language model training, image synthesis, voice modeling)
– Territorial scope
– Compensation modality (fixed royalty, revenue-share percentage, hybrid)
– Attribution and labeling requirements
(iii) Machine-readable metadata: Implement technical standards (Dublin Core, ONIX, or custom schema) enabling automated compliance verification. AI developers integrate API calls to check the CRCAT registry before ingesting training data.
(iv) Default position: Works not affirmatively registered remain inaccessible for AI training absent individual licensing agreements.
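The registry check described above can be sketched in code. This is a minimal illustration only: the CRCAT registry, its record fields and the `may_ingest()` helper are all hypothetical, since no such API exists yet.

```python
# Illustrative sketch of an opt-in registry lookup. All names and
# record fields are hypothetical assumptions, not a real CRCAT API.
from dataclasses import dataclass, field

@dataclass
class RegistryEntry:
    work_id: str
    category: str                 # literary, artistic, musical, ...
    jurisdiction: str
    permitted_modalities: set = field(default_factory=set)
    compensation: str = "fixed-royalty"

# Toy in-memory stand-in for the digitised CRCAT registry.
REGISTRY = {
    "IN-LIT-0001": RegistryEntry(
        work_id="IN-LIT-0001",
        category="literary",
        jurisdiction="IN",
        permitted_modalities={"language-model-training"},
    ),
}

def may_ingest(work_id: str, modality: str) -> bool:
    """Default position (step (iv) above): works not affirmatively
    registered are inaccessible for AI training."""
    entry = REGISTRY.get(work_id)
    if entry is None:
        return False              # unregistered -> no training use
    return modality in entry.permitted_modalities
```

In this sketch, an AI developer's ingestion pipeline would call `may_ingest()` for each candidate work and modality, so an unregistered work, or a registered work queried for a non-permitted modality, is simply never ingested.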
This preserves creator autonomy, incentivises participation through choice and reduces litigation by establishing clear-cut rules. A robust, opt‑in licensing scheme for AI training would also eliminate any question of infringement in respect of licensed uses, since every work incorporated into the training datasets would be used pursuant to express authorisation from the relevant rightsholders.
Adoption of international principles for AI governance
China’s Interim Measures for the Management of Generative Artificial Intelligence Services place the burden squarely on AI providers to ensure that training data comes from lawful sources and does not infringe intellectual property rights. This effectively discourages unlicensed scraping and pushes platforms toward licensing. Where training data involves personal information, explicit consent or another lawful basis is mandatory, which indirectly protects personality and performer rights. In addition, China’s Deep Synthesis regulations require clear labelling of AI-generated or AI-altered content, especially where text, voice, images or videos simulate real persons, thereby addressing risks of deception and impersonation.
The European Union (EU) AI Act adopts a parallel, but distinct framework grounded in transparency and copyright reservations. It mandates machine-readable labelling of AI-generated or manipulated content and disclosure obligations for deepfakes, subject to narrow exceptions for law enforcement and artistic works. On the training side, the EU preserves a text and data mining exception but conditions it on respect for machine-readable copyright opt-outs, requiring AI providers to actively comply with rightsholders’ reservations.
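One machine-readable convention for such opt-outs is the W3C community draft TDM Reservation Protocol, under which a rightsholder can signal a text-and-data-mining reservation via an HTTP response header. The sketch below shows how a compliant crawler might honour such a signal; the header name and values follow that draft, but this is an illustrative fragment, not a complete implementation.

```python
# Sketch of honouring a machine-readable TDM opt-out, loosely modelled
# on the W3C TDM Reservation Protocol, where a "tdm-reservation: 1"
# response header signals that the rightsholder reserves TDM rights.
# This is an assumption-laden illustration, not a full implementation.

def tdm_allowed(response_headers: dict) -> bool:
    """Return False when the rightsholder has reserved TDM rights."""
    reservation = response_headers.get("tdm-reservation", "0")
    return reservation.strip() != "1"

# A compliant crawler would skip pages whose headers reserve TDM rights:
# tdm_allowed({"tdm-reservation": "1"})  -> False (skip this document)
# tdm_allowed({})                        -> True  (no reservation asserted)
```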
Grievance redressal mechanism to reduce burden on judiciary
The proposed regime has not recommended any detailed grievance redressal mechanism. However, given the scale at which AI systems ingest and exploit copyrighted and personality‑bearing material, a structured, time‑bound framework is essential. A tiered mechanism, broadly modelled on the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, would significantly improve compliance and reduce the burden on courts.
Legislative amendments to the Copyright Act, 1957
It is recommended that the Copyright Act, 1957 be suitably amended to align with the evolving landscape of generative AI, ensuring that the legal framework, AI regulation and copyright law function in tandem rather than in isolation. Further, it is proposed that Section 18 of the Copyright Act, 1957 be amended to extend the right to receive royalties to all classes of authors, including those of literary works (other than those incorporated in cinematograph films and sound recordings) and artistic works, recognising the extensive use of such works in the training process of generative AI.
Statutory framework for personality rights protection
Establish a performer consent registry:
(i) Enable performers and public figures to affirmatively record non-consent regarding use of their persona such as face, voice, likeness and other personality attributes in AI training.
(ii) Require AI developers to cross-reference this registry before training on performance-bearing data.
(iii) Provide statutory damages for unauthorized personality use.
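Steps (i) and (ii) above amount to a filtering obligation on the training pipeline. A minimal sketch, assuming a hypothetical non-consent registry and illustrative sample fields, might look like this:

```python
# Illustrative filter for steps (i)-(ii) above: drop any training
# sample bearing attributes of a performer who has recorded
# non-consent. Registry contents and sample fields are hypothetical.

NON_CONSENT_REGISTRY = {"performer-001"}   # IDs of performers who opted out

def filter_training_samples(samples):
    """Keep only samples with no link to a non-consenting performer."""
    return [
        s for s in samples
        if not (set(s.get("performer_ids", [])) & NON_CONSENT_REGISTRY)
    ]

corpus = [
    {"clip": "a.wav", "performer_ids": ["performer-001"]},
    {"clip": "b.wav", "performer_ids": []},
]
# filter_training_samples(corpus) retains only b.wav
```

Unlike the opt-in CRCAT registry, this registry operates as an exclusion list: content is usable unless a performer has affirmatively recorded non-consent, so the filter removes rather than admits samples.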
Mandatory disclosure and labeling:
(i) AI-generated videos, audio and images featuring identifiable individuals must include metadata indicating artificial generation.
(ii) Platforms must implement disclosure mechanisms visible to end-users before content consumption.
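The disclosure metadata described above could take the form of a machine-readable record attached to each generated asset. The field names below are illustrative assumptions; a production scheme would more likely follow an established provenance standard such as C2PA.

```python
# Minimal sketch of a disclosure record for AI-generated media.
# Field names are illustrative, not drawn from any standard.
import json

def disclosure_record(model: str, depicts_real_person: bool) -> str:
    """Serialise a disclosure record to attach to generated media."""
    return json.dumps({
        "ai_generated": True,                       # mandatory flag
        "generator": model,
        "depicts_identifiable_person": depicts_real_person,
    })
```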
Attribution:
(i) Performers retain the right to be credited when their characteristics are identifiably incorporated in AI training.
(ii) Performers can publicly dissociate from AI-generated content bearing their characteristics.
AI has opened the floodgates to innovation. However, if left unchecked, it can shake the conventional foundation on which copyright law stands. A balanced approach can support this evolution without unduly harming the owners of copyrighted works. If the recommendations above are taken into consideration, the framework can bridge the gap between innovation and the rights of copyright owners.
Rajesh Kumar is Head of Legal and Akanksha Badika is Legal Manager at Bhansali Productions.
They were assisted by Himanshu MJ and Advit Shrivastav.