

While the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 ("IT Rules 2026") apply across a wide range of platforms, they bear on social media platforms with far greater force and sharper obligations, owing both to the personal nature of the content they process and to the manner of that processing (from passive hosting to amplification systems).
The intermediary liability framework can be traced back to Section 79 of the IT Act, 2000, which shielded intermediary platforms from liability for third-party content to the extent the platform played no creative or editorial role. This provision was operationalised through the IT (Intermediary Guidelines) Rules, 2011, which imposed a 36-hour takedown requirement along with obligations to publish a privacy policy, terms of use and the like.
The IT Rules, 2021 introduced a tiered compliance architecture, imposing additional obligations on significant social media intermediaries (SSMIs). They also introduced the controversial traceability obligation requiring SSMIs to identify the "first originator" of content upon a government order, which was challenged before multiple High Courts, alongside new rules for OTT platforms and digital news publishers under a three-tier regulatory structure.
Subsequent amendments in 2023, 2024 and 2025 merely set the stage for the 2026 Rules, in terms of the scope of prohibited content and the grievance redressal mechanism.
The gap in the previous rules and amendments was foundational: they presumed that harm on digital platforms originated from human acts or prompts, was distributed by platforms as passive hosts, and could be remedied by takedown notices. What was left unaddressed was the liability a platform faces when it enables the generation of harmful content, as opposed to merely reacting to it by taking it down. SSMIs earlier acted as hosts under an abstract surrounding intermediary liability framework. Post the IT Rules, 2026, sharp obligations hound SSMIs with respect to uploads, reposts, filters, AI features, formats, algorithmic recommendation and monetisation. The aim of these rules is largely to tighten due diligence around AI-generated synthetic content that is deceptively similar to real images. To enable this, they prescribe faster takedown norms, labelling of synthetic content, automated detection tools and the like.
The transition from a reactive complaint model to a proactive governance-by-design model alters the risk allocation between platforms, creators, enterprises and the State. For instance, for a social media app, if a user uploads a face-swapped reel, cloned voice clip, AI-generated political speech, or synthetically altered “candid” video, the platform cannot remain satisfied with a passive notice-and-takedown mindset. It is now expected to seek declarations as to whether the content is synthetic, verify that declaration through appropriate technical measures, and attach visible labelling where needed. That means liability pressure moves upstream, from post-publication complaint handling to pre-publication design.
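To make that upstream shift concrete, what follows is a minimal, purely illustrative sketch in Python of where such a pre-publication gate might sit. Every name in it (Upload, detect_synthetic, check_upload, the 0.8 threshold) is a hypothetical assumption for illustration; the Rules prescribe outcomes such as declaration, verification and labelling, not any particular implementation.

```python
# Hypothetical sketch of a pre-publication compliance gate.
# All names and thresholds are illustrative assumptions; the
# Rules mandate outcomes (declaration, verification, labelling),
# not this design.

from dataclasses import dataclass

@dataclass
class Upload:
    media: bytes
    declared_synthetic: bool  # the user's declaration at upload time

def detect_synthetic(media: bytes) -> float:
    """Placeholder for the platform's synthetic-media classifier.

    In practice this would return a model-derived probability that
    the content is synthetic; here it returns a fixed dummy score.
    """
    return 0.0

def check_upload(upload: Upload, threshold: float = 0.8) -> dict:
    """Decide labelling and review-flagging before publication."""
    score = detect_synthetic(upload.media)
    likely_synthetic = score >= threshold
    if upload.declared_synthetic or likely_synthetic:
        return {
            "publish": True,
            "label": "AI-generated / synthetic content",
            # Mismatch between detection and declaration is escalated
            "flag_for_review": likely_synthetic and not upload.declared_synthetic,
        }
    return {"publish": True, "label": None, "flag_for_review": False}

# Example: a user declares a face-swapped reel as synthetic
print(check_upload(Upload(media=b"<media bytes>", declared_synthetic=True)))
```

The point of the sketch is not the logic, which any platform would implement differently, but where the check sits: at upload, before publication, rather than in a complaint queue after the fact.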
Beyond the upload-flow chokepoint, algorithmic recommendation and boosting also come under legal scrutiny through these Rules. It is rather trite that social media apps are not merely platforms storing personal and commercial information; they rely on recommending and boosting content. For social apps, recommendation engines are the business model. The more a platform curates reach, the harder it becomes to claim passivity in relation to content that its systems helped scale. The scrutiny of the Rules does not end here: it extends to monitoring creator AI tools such as beautification, face-swapping and face-distortion filters.
The 2026 Rules do not identify the AI model itself as a regulated category. A model developer that licenses its technology to third parties without operating a consumer-facing platform may fall entirely outside the regulatory perimeter. This is a structural lacuna that the 2026 framework inherits from the IT Act itself, which was never designed to regulate software at the model layer.
Another aspect left unaddressed in the pre-2026 rules was traceability. Whether liability for AI-generated harm attaches to the intermediary platform, the underlying AI model or the prompt engineer remains legally unresolved. Yet it is a tension that the 2026 Rules, for the first time, consciously confront rather than ignore: they gesture toward accountability through mandatory metadata embedding and unique identifiers, even if the question of origin attribution in AI-generated content is not conclusively settled.
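As an illustration of what "metadata embedding and unique identifiers" could mean in engineering terms, here is a minimal sketch assuming a simple JSON provenance record bound to the file by a content hash. The schema and every field name are assumptions made for illustration; the Rules do not prescribe a specific format.

```python
# Illustrative provenance stamp for synthetic media.
# The record schema (identifier, generator, etc.) is assumed
# for illustration; no specific format is prescribed by the Rules.

import hashlib
import json
import uuid
from datetime import datetime, timezone

def make_provenance_record(media: bytes, generator: str) -> dict:
    return {
        "identifier": str(uuid.uuid4()),              # unique per asset
        "sha256": hashlib.sha256(media).hexdigest(),  # binds record to this file
        "generator": generator,                       # tool that produced it
        "created_at": datetime.now(timezone.utc).isoformat(),
        "synthetic": True,
    }

record = make_provenance_record(b"<media bytes>", generator="example-gen-tool")
print(json.dumps(record, indent=2))
```

Even this simple scheme shows the limits of the approach: a hash-bound record attests only to the exact file it was computed over, and re-encoding or editing the media breaks the binding, which is one practical reason origin attribution remains unresolved even with mandatory identifiers.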
With this apparent lacuna, and the deflection of liability towards a known, identifiable evil, what this proactive intermediary compliance model does bring is the industry's usual jitters: excessive over-removal, unreliable synthetic-content detection, compelled labelling, higher operational costs, and a growing argument over whether this begins to dilute the traditional safe-harbour posture by pushing intermediaries toward quasi-adjudicatory content decisions.
These standards appear more onerous than those in other jurisdictions, and the practicality of enforcement is something we will have to wait and watch. Additionally, whether a foundation model developer bears any liability for downstream harms generated through its technology, absent any consumer-facing relationship with the end user, is a question the current framework and agencies are wholly unprepared to answer.
About the author: Mitakshara Goyal is the Founder of Svarniti Law Offices.
Disclaimer: The opinions expressed in this article are those of the author(s). The opinions presented do not necessarily reflect the views of Bar & Bench.