

The rapid proliferation of synthetic media, ranging from AI-enabled voice cloning to hyper-realistic deepfakes, has complicated the traditional presumptions of authenticity in the digital ecosystem by enabling content that can convincingly mimic real people, events, and records. This has raised regulatory concerns, particularly where such content is used to deceive, impersonate, or harm a person’s reputation.
Against this backdrop, the 2026 amendment (“2026 Amendment”) to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“Intermediary Rules”) marks a targeted intervention. The 2026 Amendment came into effect on February 20, 2026. Rather than regulating artificial intelligence as a technology, the amendment focuses on a regime to govern realistic synthetic content that may mislead users or distort factual realities.
India’s intermediary liability framework has largely been built around a balance reflected in Section 79 of the Information Technology Act, 2000 (“IT Act”). Section 79 grants safe harbour to intermediaries for third-party content, on the basis that they act as facilitators rather than publishers exercising editorial control. This position was clarified by the Supreme Court in Shreya Singhal v. Union of India, where the apex court held that an “intermediary is required to act only upon receiving actual knowledge through a court order or a government notification”. In effect, prior to the 2026 Amendment, platforms were not expected to independently determine whether content was unlawful.
The Intermediary Rules gave effect to this framework by prescribing due diligence requirements such as user notices, grievance redressal mechanisms, and timelines for takedown. However, the overall design remained reactive.
The 2026 Amendment introduces a more structured approach to regulating synthetic media, with the concept of ‘synthetically generated information’ (“SGI”) forming its core. The 2026 Amendment defines SGI under Rule 2(1)(wa) as audio, visual, or audio-visual content that is artificially created or altered using a computer resource in a way that makes it appear real and capable of being mistaken for an actual person or real-world event. The focus is not on all AI-generated content, but on content that creates a convincing sense of realism. Routine edits, formatting changes, or accessibility-related modifications that do not distort the original meaning are kept outside the scope of this definition.
Beyond defining SGI, the 2026 Amendment puts in place a set of compliance obligations that operate at multiple levels. To begin, intermediaries are now required under Rule 3(1)(c) to periodically inform users, at least once every 3 (three) months, about their obligations and the consequences of non-compliance. In addition, Rule 3(1)(ca) introduces specific warning requirements for platforms that enable the creation or dissemination of SGI. These warnings are not merely procedural; they are intended to make users aware that misuse of such tools can lead to legal consequences, account suspension, or disclosure of identity in appropriate cases.
The key feature is the introduction of a dedicated due diligence framework under Rule 3(3). Intermediaries that facilitate the creation, publication, or dissemination of SGI are required under Rule 3(3)(a) to take reasonable and appropriate technical measures, including automated tools, to prevent the circulation of unlawful synthetic content. The 2026 Amendment further identifies certain categories of high-risk content under Rule 3(3)(a)(i), such as non-consensual intimate imagery, child sexual abuse material, false electronic records, and deceptive impersonation, and directs platforms on the types of content that must be actively restricted. Under Rule 3(3)(a)(ii), where SGI is permissible, intermediaries are required to ensure that it is clearly and prominently labelled as synthetic. They must also embed metadata or similar technical markers, wherever feasible, to establish provenance and traceability. This reflects a broader shift: regulation is no longer limited to removing harmful content, but also extends to making lawful content more transparent.
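Rule 3(3)(a)(ii) leaves the technical form of these markers open. As a rough sketch only, assuming hypothetical field names and a sidecar-file approach (standards such as C2PA Content Credentials instead embed provenance data directly into the media file), a minimal labelling-and-provenance step might look like this in Python:

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def write_provenance_manifest(media_path: str, generator: str) -> Path:
    """Write a sidecar provenance manifest for a media file.

    The field names ("sgi_label", "content_sha256", and so on) are
    illustrative only; the 2026 Amendment does not prescribe a
    technical format for metadata markers.
    """
    media = Path(media_path)
    digest = hashlib.sha256(media.read_bytes()).hexdigest()
    manifest = {
        "sgi_label": "synthetically_generated",  # sits alongside the visible on-screen label
        "content_sha256": digest,                # ties the manifest to this exact file
        "generator": generator,                  # tool or model that produced the content
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
    sidecar = media.with_suffix(media.suffix + ".provenance.json")
    sidecar.write_text(json.dumps(manifest, indent=2))
    return sidecar

# Example: write_provenance_manifest("render.png", generator="example-image-model")
```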
Additional obligations apply to significant social media intermediaries. Under Rule 4(1A), such platforms must, prior to publication, obtain a declaration from users as to whether the content is SGI and deploy reasonable measures to verify that declaration. This introduces a layer of scrutiny at the upload stage, moving beyond the earlier reliance on post-publication enforcement.
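Rule 4(1A) does not prescribe how the declaration is to be collected or verified. The sketch below, in which a stubbed-out detector stands in for whatever automated check a platform actually deploys, illustrates one way a pre-publication workflow could combine the user’s declaration with that check; all names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Upload:
    content_id: str
    declared_sgi: bool  # the user's declaration under Rule 4(1A)

def looks_synthetic(content_id: str) -> bool:
    """Stub for an automated SGI detector; a real platform would call a
    trained classifier here. Hypothetical, for illustration only."""
    return False

def pre_publication_check(upload: Upload) -> str:
    detected = looks_synthetic(upload.content_id)
    if detected and not upload.declared_sgi:
        # Declaration and detection disagree; the Rules leave the
        # response to the platform, so escalate rather than guess.
        return "hold_for_review"
    if upload.declared_sgi or detected:
        return "publish_with_label"  # labelled as synthetic, per Rule 3(3)(a)(ii)
    return "publish"

# Example: pre_publication_check(Upload("vid_123", declared_sgi=True))
# -> "publish_with_label"
```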
Taken together, the amendment adopts a dual approach, combining restrictions and disclosures depending on the nature of the content. In doing so, it expands the role of intermediaries from passive conduits to more active participants in managing how digital content is created, shared, and understood.
By requiring intermediaries to deploy technical measures under Rule 3(3) and introducing pre-publication verification for significant social media intermediaries under Rule 4(1A), the framework moves closer to an ex-ante model of content governance. In practice, this is likely to increase reliance on automated detection tools, internal classification systems, and structured upload workflows.
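As a toy illustration of what such an internal classification system might reduce to, the mapping below mirrors the Rule 3(3)(a)(i) categories; the action names and the default-to-review fallback are assumptions, not anything the Rules prescribe.

```python
# Category names mirror Rule 3(3)(a)(i); the actions are illustrative.
ACTIONS = {
    "non_consensual_intimate_imagery": "block",
    "child_sexual_abuse_material": "block_and_report",
    "false_electronic_record": "block",
    "deceptive_impersonation": "block",
    "permissible_sgi": "label_and_publish",  # transparency duty under Rule 3(3)(a)(ii)
    "not_synthetic": "publish",
}

def route(category: str) -> str:
    # Defaulting unknown categories to human review is one way to avoid
    # silent over-blocking or under-blocking at the margins.
    return ACTIONS.get(category, "hold_for_review")
```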
For intermediaries, this shift translates into a significant expansion of compliance responsibilities. It is no longer sufficient for platforms to respond to complaints or government directions. Platforms must now actively identify, label, and in some cases prevent the dissemination of synthetic content. This requires investment in technical infrastructure, as well as continuous monitoring of user activity. The obligation to embed metadata and ensure traceability further adds to this burden, particularly where content is generated at scale. In the absence of clearly defined thresholds, this may lead to defensive moderation practices; in certain cases, platforms may lean towards removing or restricting content altogether to minimise potential liability exposure.
More importantly, the impact of this amendment is likely to be felt indirectly. Since most content is ultimately published through intermediary platforms, compliance requirements will effectively shape how content is created in the first place. Upload-stage declarations, labelling requirements, and verification checks are likely to become part of standard platform workflows. As a result, content creators may need to classify synthetic content more carefully, maintain basic provenance records, and align with platform-specific requirements even in the absence of a direct statutory obligation.
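For creators, the practical ask may be no more elaborate than a running log. A minimal sketch, assuming a column set a platform might plausibly request (none of these fields are statutory):

```python
import csv

# Columns are a guess at what platform workflows might ask creators to
# keep; the 2026 Amendment imposes no direct record-keeping duty on them.
FIELDS = ["asset", "is_synthetic", "tool_used", "source_material", "published_on"]

def log_asset(log_path: str, row: dict) -> None:
    with open(log_path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if f.tell() == 0:  # new file: write the header once
            writer.writeheader()
        writer.writerow(row)

# Example:
# log_asset("provenance_log.csv", {
#     "asset": "ad_voiceover_v3.wav",
#     "is_synthetic": True,
#     "tool_used": "example-voice-model",
#     "source_material": "script_v3.txt",
#     "published_on": "2026-03-01",
# })
```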
In that sense, the amendment does not merely regulate intermediaries; it reshapes the broader content ecosystem by pushing compliance expectations down the value chain.
While the 2026 Amendment addresses a clear regulatory gap, it also gives rise to practical and conceptual concerns. One immediate issue lies in the use of broad and open-ended terms such as “realistic,” “indistinguishable,” and “likely to deceive” within the definition of SGI under Rule 2(1)(wa). These terms are central to determining the scope of the framework, yet they introduce a degree of interpretational uncertainty, particularly in borderline cases where content may be satirical, stylised, or only partially synthetic. The amendment reflects a structural shift from a notice-and-takedown model to a more preventive, risk-based framework. This shift raises a broader question: are platforms being transformed from neutral conduits into de facto regulators of online speech?
A related concern is the risk of over-compliance. As intermediaries are required to take proactive measures under Rule 3(3) and face potential liability exposure, there is a risk that platforms may err on the side of caution and restrict content even where it does not clearly fall within prohibited categories. This could have a chilling effect on legitimate forms of expression, especially in areas such as advertising, parody, or creative experimentation, where the use of synthetic media is increasingly common.
This amendment also raises questions about the evolving role of intermediaries. By requiring platforms to deploy automated tools, verify user declarations under Rule 4(1A), and determine whether content falls within prohibited categories, the Rules effectively position intermediaries as primary decision-makers in content governance. This shift towards private enforcement, often without direct judicial oversight, may have implications for consistency, transparency, and user rights.
Going forward, the effectiveness of the framework will depend on how these challenges are addressed. There is a need for greater clarity through regulatory guidance or industry standards, particularly in relation to labelling formats, metadata requirements, and detection benchmarks. Ultimately, the success of the amendment will lie in its ability to balance harm prevention with the preservation of legitimate uses of synthetic media, while safeguarding individuals’ freedom of speech and expression.
About the authors: Varun Vaish is a Partner and Kashish Khattar is a Senior Associate at Luthra and Luthra Law Offices India.
Disclaimer: The opinions expressed in this article are those of the author(s). The opinions presented do not necessarily reflect the views of Bar & Bench.