AI chatbot Grok has reportedly been used to facilitate the generation of sexual images of children and non-consensual deepfake nude images. The ease with which safety guardrails in AI models can be overcome to sexualise and 'nudify' real images is causing serious public concern. The Internet Watch Foundation reported that AI-generated child sexual abuse imagery had doubled in the past year, with material becoming increasingly extreme.
Every time a new technology enters the public sphere, the limits of free speech are re-tested. Technological shifts have compelled courts and regulators to reconsider how constitutional protections apply to new media and evolving means of expression. When audio-visual broadcasting media first emerged, the concern was whether the traditional limits applicable to print media could be extended to an audio-visual medium capable of provoking a far more immediate and visceral response from its audience.
This concern eventually led to the acceptance of pre-censorship of films in India. In KA Abbas v. Union of India, the Supreme Court justified a different and more stringent treatment of motion pictures when compared to books or newspapers on the ground that films could "stir up emotions more deeply than any other product of art".
A similar constitutional reckoning occurred with the advent of the internet. Digital technologies revolutionised access to information and communication, breaking down geographical barriers and enabling instantaneous dissemination of content to global audiences. In Shreya Singhal v. Union of India, the Supreme Court was confronted with the question of whether the internet could be treated in the same manner as older modes of communication. The Court accepted that the internet was sufficiently different from other modes of communication to justify the creation of separate offences for online speech, but held that this difference offered no justification for relaxing judicial scrutiny of restrictions on the content of that speech. While striking down Section 66A of the Information Technology Act, 2000 for vagueness and overbreadth, the Court nonetheless acknowledged that the internet's scale, speed and virality posed novel regulatory challenges. The ability of online speech to reach a vast audience within seconds justified a more careful calibration of restrictions, even as constitutional protections remained intact.
AI looms larger than life today and presents the latest and perhaps most complex challenge to regulators. Unlike earlier technologies, AI does not merely transmit or amplify speech; it actively generates content. This shift upsets traditional boundaries between hitherto watertight categories such as ‘speaker,’ ‘publisher’ and ‘intermediary.’
X’s in-house chatbot, Grok, is a vivid illustration of these blurred boundaries. Its ‘Spicy Mode’ is said to allow the generation of sexualised content, including nude or semi-nude images of both fictional and real individuals. The system is deliberately positioned as a less filtered, more irreverent alternative to competing AI models, encouraging users to push the limits. Grok Imagine enables users to generate images or clips using textual prompts, with relatively few guardrails beyond restrictions on extremely explicit content. The potential harm is enormous. The tool can reportedly generate realistic images of women - real or imagined - being stripped or sexualised without consent. Such capabilities open the floodgates to harassment, exploitation of the vulnerable - particularly women and children - and revenge pornography.
Elon Musk has defended the system as a tool for creativity, arguing that responsibility lies with the user rather than the technology. He is quoted as saying, "A pen doesn't decide what gets written. The person holding it does. Grok works the same way."
This argument becomes increasingly unsustainable when the tool is deliberately designed to encourage certain categories of expression and to lower the barriers to producing harmful content at scale. When a platform offers services that are structured to promote or facilitate specific activities, such as generating nude or sexually explicit images of real people, impersonating individuals, or distorting their likeness in degrading ways, can it still claim to be a passive intermediary that merely hosts third-party content? Or does it become an active participant, if not an aider and abettor, in the creation of unlawful or harmful material?
AI is fundamentally different from earlier communication technologies. Its defining feature is not transmission, but creation through artificial and engineered processes that overcome human limitations. AI dramatically reduces not only the cost, but also the effort and skill required to generate content, and simultaneously enables mass replication and dissemination. This combination heightens the risk of violations of privacy and dignity. A single prompt can generate innumerable realistic images that may circulate online indefinitely, causing irreparable loss of reputation, privacy and dignity.
The legal classification of AI platforms, therefore, becomes critical. If such platforms are treated as mere intermediaries under Indian law, they may seek protection under the safe harbour provisions of Section 79 of the Information Technology Act and the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
An 'intermediary' is defined under Section 2(w) of the Information Technology Act, 2000 as "any person who on behalf of another person receives, stores or transmits that record or provides any service with respect to that record and includes telecom service providers, network service providers, internet service providers, web-hosting service providers, search engines, online payment sites, online-auction sites, online-market places and cyber cafes." The safe harbour provision under Section 79, which grants immunity from liability for third-party conduct, is subject to the conditions in sub-sections (2) and (3) of Section 79. Sub-section (2) covers cases where the activity undertaken by the intermediary is of a technical, automatic or passive nature and where the intermediary has no knowledge of, or control over, the information transmitted or stored.
The safe harbour protections were thus crafted on the assumption that intermediaries are largely passive entities that host or transmit user-generated content without initiating or modifying it. The question is whether this assumption holds when the platform itself designs, trains and deploys systems that generate offensive content in response to user prompts.
Should AI systems that generate images be regarded as publishers, or at least as entities exercising editorial or creative control? Or can they continue to claim the immunities available to intermediaries, despite playing an indispensable role in producing the very content at issue? This question has profound implications for platform liability and governance.
A second, related issue concerns whether AI ought to be subjected to more stringent restrictions by virtue of its nature and capacity for harm. The creation of distorted or fabricated images of individuals, whether celebrities or private persons, raises serious concerns under the rights to privacy, personality and publicity. The risks associated with deepfakes, non-consensual sexual imagery and obscene or offensive content are very real. Women, in particular, are disproportionately vulnerable to abuse through such technologies - a reality that demands urgent attention.
Even apart from gendered harms, the broader principle is that no individual should have the right to distort or manipulate another person’s image without consent. The fundamental right to freedom of speech and expression under Article 19(1)(a) of the Constitution is subject to specific restrictions under Article 19(2) which include restrictions on the ground of defamation, or in the interests of decency and morality. Further, free speech and creativity must be balanced with the right to life and personal liberty under Article 21, which the Supreme Court has consistently interpreted to include dignity, privacy, autonomy and informational self-determination.
The right to privacy recognised in KS Puttaswamy v. Union of India encompasses control over one’s identity and personal representation. Closely linked are personality and publicity rights, which protect an individual’s interest in how their image, likeness and persona are used in the public domain. Together, these rights imply that individuals have the authority to decide how they present themselves and how they are perceived by others. The unauthorised alteration, mutilation or sexualisation of a person’s image through AI-generated content directly undermines this constitutional guarantee.
From this perspective, the claim that AI-generated expression should enjoy the same level of constitutional protection as human speech becomes debatable. The challenge, therefore, lies at a complex intersection of platform governance, free speech boundaries, intellectual property and the rights to privacy, personality and publicity. The boundaries of existing legal frameworks must be pushed to address new challenges posed by AI's generative capabilities. As with cinema - and the internet after it - AI compels us to confront questions about where freedom ends and responsibility begins. When technology itself becomes a co-creator and participant, the constitutional balance must be re-calibrated accordingly.
Madhavi Goradia Divan is a Senior Advocate and the author of "Facets of Media Law".
The author acknowledges the contributions made by Advocates Aandrita Deb and Atharva Kotwal, and law students Priyank Dhaduk and Kshitij Chauhan.