
Twin amendments to India’s IT Rules: A dual threat to free speech and intermediary neutrality

Both the Sahyog amendment and the draft deepfake Rules are couched in the language of accountability, yet both entrench a secrecy that masquerades as reform.

Shivam Jadaun

On October 22, 2025, the Government of India announced two sets of changes to its digital governance framework, each representing a radical shift in how India regulates speech on the internet.

The first amends Rule 3(1)(d) of the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, revising the due-diligence framework that flows from Section 79(3)(b) of the IT Act. In practice, it is linked to the government’s Sahyog Portal, through which agencies send content-removal requests to intermediaries. The amendment came into force on November 15, 2025.

The second is a draft amendment to the IT Rules, 2021, released as the IT (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025, which introduces a new definition of “synthetically generated information” and adds a set of new obligations to those already in place, purportedly in response to the emergence of deepfakes. The draft was initially open for public consultation until November 6, 2025, a deadline later extended to November 13, 2025.

Although the two developments are seemingly different, with one focused on state-led control of content and the other on platform-led compliance, they complement each other to expand the architecture of online control. Both invoke the rhetoric of “transparency” and “accountability”, yet both further obscure how online speech in India is monitored, censored and removed.

The Sahyog amendment: Administrative censorship as an institutionalised process

The Sahyog Portal has existed quietly for some time as an internal government tool through which authorised officers could direct intermediaries to remove online content considered unlawful. Until now, the practice was conducted through office memorandums of the Ministry of Electronics and Information Technology (MeitY) and the Ministry of Home Affairs (MHA), without specific legislative backing.

As per the Sahyog amendment, Joint Secretary–rank officials and DIG-level police officers are authorised to issue takedown orders to intermediaries. Intermediaries must remove the specified URL within 36 hours, failing which they risk losing safe harbour under Section 79 of the IT Act. Every order must state the precise legal provision and the URL(s) targeted. The government frames this as “streamlining” enforcement by limiting issuance to higher-ranking officials.

At first sight, restricting issuance to senior officials may appear to raise the threshold for intervention. Examined closely, however, the change poses a serious threat by multiplying the authorities that can order content removal. Telangana and Kerala, for instance, each have multiple police ranges headed by a DIG; replicated across every state, the amendment shifts content control from one centralised body (MeitY) to potentially hundreds of authorities across India.

In addition, the Sahyog amendment does not require each government takedown order to be published, meaning many directions may remain non-public despite internal review mechanisms. The reasoning behind an order, however “proportionate” or “reasonable” it may appear on its face, is never disclosed for public scrutiny. Neither the user whose post is removed nor the general public is told why the content was censored.

In effect, the amendment creates a parallel track that, critics argue, bypasses Section 69A’s stronger safeguards (a hearing for the originator, independent review) in favour of a weaker, largely internal process offering no meaningful notice, hearing or appeal. This runs counter to the basic tenets of natural justice embedded in Article 14 of the Constitution.

Erosion of safeguards and the Shreya Singhal standard

This clandestine system is in open conflict with the Supreme Court’s decision in Shreya Singhal v. Union of India (2015), which upheld Section 69A of the IT Act, 2000 (authorising the government to block content) only because of the procedural safeguards provided by the Blocking Rules, 2009.

Those safeguards included:

  • A written order stating the reasons for the takedown;

  • An opportunity of hearing to the originator or intermediary;

  • A review committee to assess the propriety of the order.

The operation of the Sahyog Portal, by contrast, is devoid of all three safeguards. The monthly review of orders by the Secretary of the requesting department lacks independent oversight and compromises the fairness of the procedure. Recently, in X Corp v. Union of India, the Karnataka High Court characterised the Sahyog Portal as a facilitation mechanism rather than an instrument of censorship, thereby emboldening its formalisation. The Court made no effort to resolve the objection that Rule 3(1)(d) of the 2021 Rules permits removal of content with weaker protections than Section 69A of the IT Act, leaving the intersection of the two provisions unsettled.

The result is a system in which administrative convenience takes the place of due process, defeating the very constitutional reasoning that allowed content-blocking powers to survive judicial scrutiny in the first place. The change thus turns India’s censorship regime from one that was centralised and legally reviewable into one that is fragmented, opaque and self-validating.

Draft deepfake rules: Normalising overreach?

On the same day, MeitY released for public consultation the draft IT (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025, amending the IT Rules, 2021. The proposed rules would require intermediaries to identify and label all “AI-generated” or “synthetically generated” content.

The cause - fighting deepfakes - is admittedly just. The proliferation of hyperrealistic fake videos, frequently created to spread misinformation or harass individuals, is a real threat to society. Yet the wording of the proposed amendment is distressingly broad.

The draft defines “synthetically generated information” very broadly as any computer-created or altered content that appears authentic, capturing not only harmful deepfakes but also benign edited photos, AI-generated art and AI-assisted text. Because AI-detection tools remain unreliable at scale, such a wide definition increases the risk of mislabelling and over-censorship. The proposed rules further mandate visible or audible labelling covering at least 10% of the content, permanent metadata watermarking and a compulsory user declaration at the time of upload.

Existing Rule 3(1)(b) already shields intermediaries from losing safe harbour for voluntary removal, encouraging them to censor more to avoid liability. The draft Rules push this further by requiring user declarations on AI use and platform labelling of detected AI content, effectively forcing proactive monitoring. This amounts to a general surveillance duty, contrary to the principle of “no general monitoring” that informs safe harbour under Section 79 of the IT Act, 2000, as recognised by the Supreme Court in Shreya Singhal.

In Shreya Singhal, the Supreme Court held that intermediaries acquire “actual knowledge” - the trigger for liability - only through a court order or a government notification. The draft Rules undermine this standard by effectively imposing “constructive knowledge”, requiring platforms to proactively detect and tag AI-generated content before upload and creating strong incentives for over-censorship.

In contrast, the European Union’s AI Act (2024), specifically Article 50, places disclosure obligations on users, exempts artistic content and preserves safe harbour by avoiding proactive monitoring duties.

Expanding the perimeter of control

Taken together, the Sahyog amendment and the draft deepfake Rules widen the perimeter of speech regulation by enabling the State to secure faster and broader takedowns and by compelling intermediaries to tighten their own content controls.

Their combined impact can be distilled into three interlinked consequences:

  • Opacity in enforcement: Both mechanisms operate without public disclosure, leaving users unaware of how their content is assessed or removed.

  • Erosion of safe harbour: Fear of liability pushes intermediaries to over-remove content, shifting the burden of censorship to users.

  • Administrative overreach without accountability: Takedowns are issued and reviewed internally with no external scrutiny or independent oversight.

The overall effect is a two-tiered system of censorship - one bureaucratic, one algorithmic - operating with minimal transparency and maximum discretion.

Constitutional and policy implications

Constitutionally, these developments raise deep concerns under Article 19(1)(a) of the Constitution, which guarantees the right to freedom of speech and expression. Though the State may impose reasonable restrictions under Article 19(2), such restrictions must be lawful, proportionate and accompanied by due process.

In Anuradha Bhasin v. Union of India (2020), the Supreme Court held that any restriction on speech and access must be “published, justified and reviewable”. The Sahyog framework runs directly against this requirement by keeping takedown orders secret.

Furthermore, in PUCL v. Union of India (1996), the Court mandated procedural safeguards for telephone tapping, emphasising that secret orders without scrutiny invite abuse. By the same logic, censorship carried out without transparency functions as a form of surveillance of speech.

These rules also raise compliance and innovation concerns. For instance:

  • Smaller intermediaries may lack the tools to detect deepfakes or run AI filters, putting compliance practically out of their reach.

  • Legitimate edited or “synthetic” material from whistleblowers, journalists and researchers may be wrongly taken down.

  • The absence of a user-notification mechanism undermines trust and chills legitimate discourse.

Finally, the amendments conflate control with accountability. Real accountability in content regulation requires transparency, notice and recourse, not silent, unilateral action by the State or by platforms.

A pattern of procedural evasion

The rollout of these amendments reflects a deliberate pattern. On October 22, 2025, the draft deepfake Rules were released with significant public attention, while the Sahyog amendment was quietly notified with minimal visibility. The deepfake consultation window was also unusually short, limiting meaningful participation. This sequencing pushed headlines toward “AI labelling” and “accountability,” allowing the more consequential expansion of state censorship powers to slip under the radar. It exemplifies India’s growing trend of digital regulation by stealth.

The government is effectively pre-empting meaningful consultation by controlling the narrative and timing, bypassing the spirit of the Pre-Legislative Consultation Policy, 2014, which requires placing draft regulatory changes in the public domain for a minimum of 30 days and engaging in transparent stakeholder feedback before such measures are adopted.

Restoring transparency to the digital public sphere

Considered as a whole, the two amendments amount to a sweeping reconfiguration of India’s online regulatory framework. Both the Sahyog amendment and the draft deepfake Rules are couched in the language of accountability, yet both entrench a secrecy that masquerades as reform.

They expand discretion without safeguards, impose compliance without consultation and entrench control without transparency. In doing so, they undermine the constitutional foundations that make regulation legitimate.

In a democracy that prides itself on open discourse, secrecy is not the solution to online misinformation or illegal speech. The answer lies not in hidden dashboards or algorithmic filters, but in processes that are transparent, verifiable and challengeable; processes that respect the citizen’s right to know, not the State’s right to remain silent.

Shivam Jadaun is a Delhi-based lawyer and tech consultant specialising in technology policy. 

"Taking court too lightly": Supreme Court raps Centre, States for failing to install CCTVs in police stations

Certificate Course on Contract Drafting & Negotiation by Bettering Results: Register Now!

CCPA fines Reliance JioMart ₹1 lakh for misleading ads to sell uncertified walkie-talkies

Wife not barred from seeking maintenance from husband merely because she has capability to earn: Kerala High Court

Kerala High Court orders action against illegal vehicle modification, filming videos in drivers' cabin

SCROLL FOR NEXT