Do you know what the theme for the 2026 International Women’s Day (IWD) is?
It is “Rights. Justice. Action. For ALL Women and Girls.”
That’s where the inspiration for this article comes from. Since AI is now an integral part of everyone’s life, it is imperative that AI embrace this theme as well. Women need not have explicit rights and remedies carved out exclusively for them; that would risk further entrenching the gender divide we are already seeing. The real subject is greater transparency and accountability in everything we do, whatever the gender - if these are built in at the design stage, women will ultimately benefit.
A classic example of how even gender-neutral laws can go a long way in securing rights and justice for women is the recently introduced “Synthetic Content Rules”.
On February 20, 2026, the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 came into force. Notified by the Ministry of Electronics and Information Technology (MeitY) on February 10, these rules introduce, for the first time, perhaps even globally, a statutory definition of “synthetically generated information” (SGI) and impose specific due diligence obligations on intermediaries that enable the creation, modification, or dissemination of such content.
The rules mandate prominent labelling, permanent metadata embedding, user declarations, and drastically shortened takedown timelines. For platforms designated as Significant Social Media Intermediaries (SSMIs) - a category that covers most popular platforms, including Facebook, Instagram, YouTube, Twitter, and WhatsApp - the obligations are particularly onerous. They must deploy automated technical measures to verify user declarations and ensure that AI-generated content is identified. The message is loud and clear: don’t merely endeavour to deploy, deploy. Best effort is no longer good enough.
The timing may well be coincidental or maybe not. IWD 2026 carries this theme set by UN Women that is unmistakably legal in its framing: “Rights. Justice. Action. For ALL Women and Girls.” The UN has noted that women currently hold only 64 per cent of the legal rights that men hold worldwide. The Commission on the Status of Women’s 70th session (CSW70), running from March 9-19, 2026, will negotiate conclusions on the theme of ensuring access to justice for all women and girls, including by eliminating discriminatory laws and addressing structural barriers.
Against this backdrop, India’s new synthetic content rules deserve scrutiny not merely as a technology regulation exercise but as a critical intervention in women’s safety and rights. The question is whether these rules go far enough and whether they are designed with the gendered reality of synthetic content abuse firmly in view.
The data is stark. Deepfake statistics from 2023 to 2025 show that approximately 96 to 98 per cent of deepfake videos circulating online are non-consensual pornography, and nearly all of the victims are women. The volume of such content is growing exponentially: the number of deepfake pornographic videos produced in 2023 was reported to be 464 per cent higher than in 2022. Yet if a survey were conducted across the globe, probably only a small percentage of women would report having faced personal victimisation from deepfake pornography - a figure that would clearly understate the true prevalence, given stigma and underreporting.
Closer home, in India, the National Cybercrime Reporting Portal (NCRP) reported a 118.4 per cent rise in online crimes against women between 2020 and 2024 across categories, including online sexual abuse and sexually explicit content. A 2025 report by the RATI Foundation, a Mumbai-based NGO working on gender-based violence and online safety, found that 10 per cent of all calls to its online abuse helpline involved deepfakes or AI-manipulated sexual images.
India’s vulnerability is acute. With 86.3 per cent of households now having internet access, according to the Government’s 2025 Comprehensive Modular Survey, the surface area for this kind of abuse is vast. The technology to superimpose a woman’s face onto explicit content is cheap, widely accessible, and requires no high-level technical sophistication. The resulting harm, however, is deeply personal and profoundly gendered. As one commentator writing in the Sunday Guardian on the new rules observed, the social punishment inflicted by a synthetic intimate image is meted out as if the image were real by colleagues, neighbours, and relatives who encounter it on group chats or social feeds.
The IT Amendment Rules 2026 provide a statutory definition of SGI, focusing on audio-visual media formats and introducing carve-outs for routine editing and educational uses. Intermediaries must ensure SGI is labelled and embedded with permanent metadata. SSMIs have additional obligations to verify user declarations and display synthetic content with appropriate labels. Takedown timelines have been reduced and SGI that includes child or sexual abuse material or non-consensual intimate imagery is prohibited.
From a women’s rights perspective, there is much to welcome. The explicit prohibition on non-consensual intimate synthetic imagery, the mandatory labelling regime, and the compressed takedown timelines are all responsive to the gendered harms of deepfake technology. The shift from a “best-effort” framework to strict compliance for SSMIs, replacing “endeavour to deploy technology-based measures” with “deploy appropriate technical measures”, is a meaningful escalation of platform accountability.
However, several gaps remain. The Internet Freedom Foundation (IFF), in its analysis of the rules, has raised a concern that the definition of SGI remains broad and subjective in some respects, hinging on whether content appears “authentic” to a reasonable person. This fails to create a clear risk-based distinction between different categories of AI use. IFF also flagged that the rules overestimate the reliability of current watermarking, provenance standards, and AI detection tools, noting that empirical research shows such tools perform poorly in real-world and multilingual contexts. For a country as linguistically diverse as India, this is not a theoretical concern.
More fundamentally, the rules focus on intermediary obligations, not on the individual perpetrators who create and weaponise synthetic intimate imagery. This is a stark contrast with the EU AI Act (discussed further in the next paragraph), which places obligations across the AI value chain - on providers, deployers, importers, and distributors alike. The existing criminal law framework - Section 67 of the IT Act (obscene content), Section 353 of the Bharatiya Nyaya Sanhita (public mischief), and Section 356 (defamation) - was not designed for the specific dynamics of AI-generated abuse. Until it is updated, India will continue to address a 2026 problem with legal tools from an earlier era.
India’s approach contrasts with the EU AI Act, which is risk-based and distinguishes between high-risk and lower-risk AI uses. The EU framework is considering a harmonized taxonomy and a common disclosure icon, while India’s rules require prominent labelling without a standardized format. Despite these differences, India’s rules took effect earlier, addressing the urgent need for protection against synthetic image abuse.
If these rules are to genuinely serve as a women’s rights intervention, several additional steps are necessary.
First, criminalising the creation and distribution of non-consensual synthetic intimate imagery is overdue. The rules address intermediary responsibility, but the individual who downloads a woman’s photograph from social media and runs it through a nudification application to produce explicit synthetic content must face specific, proportionate criminal liability. Australia’s Criminal Code Amendment (Deepfake Sexual Material) Bill 2024 and the United Kingdom’s criminalisation of explicit deepfakes under its Online Safety Act framework both offer models worth studying.
Second, the enforcement capacity of Indian law enforcement agencies needs urgent investment. The three-hour takedown window in the new rules is meaningless if victims cannot effectively report synthetic content to platforms, or if law enforcement cannot secure electronic evidence and pursue perpetrators.
Finally, India’s emerging AI governance architecture, including the India AI Governance Guidelines published by MeitY in November 2025 and the proposed Digital India Act, must build gender impact analysis into regulatory design from the outset, rather than relying on a separate gender-specific law. The seven “Sutras” of the governance guidelines - Trust, People First, Innovation over Restraint, Fairness and Equity, Accountability, Safety, and Inclusivity - already contain the conceptual vocabulary. What is needed is the institutional commitment to translate that vocabulary into enforceable protections that account for the reality that AI-generated harm is not gender-neutral.
The IT Amendment Rules 2026 represent a genuine and welcome regulatory advance for women. India now has, for the first time, a statutory framework that names synthetic content, mandates its labelling, and imposes enforceable obligations on platforms. As these rules come into force close to International Women’s Day, they arrive at a moment when the global conversation about women’s rights and AI governance is converging.
Of course, regulation and justice aren’t the same things. The women and girls targeted by deepfake abuse need more than labelling mandates and takedown timelines. They need criminal laws that recognise synthetic intimate image abuse as a specific offence. They need law enforcement that can act on the compressed timelines the rules now require.
What does the IWD 2026 theme ask for? Rights. Justice. Action. India’s new rules are a meaningful piece of the action - the AI Action. Their implementation and enforcement, however, still have considerable ground to cover.
About the author: Shalini Sinha is the Global General Counsel (Media and Marketing) of The Magnum Ice Cream Company.
Disclaimer: The opinions expressed in this article are solely the personal views of the author, and do not necessarily reflect any views of The Magnum Ice Cream Company.
The opinions presented do not necessarily reflect the views of Bar & Bench.