Regulating children’s social media use in India: Why a ban alone will not protect our young

A blanket ban would ignore the reality that for many, digital platforms are gateways to education, visibility and social mobility.

The renewed calls to ban social media for children under sixteen come from a place of genuine concern. In February 2026, three minor sisters in Ghaziabad lost their lives, with preliminary reports pointing to suicide, addiction and parental conflict. When such events occur, society and lawmakers understandably look for rapid solutions and, increasingly, that solution has been framed as prohibition.

Australia’s decision to impose a minimum age of sixteen for access to major social media platforms has further accelerated this debate in India. Courts, lawmakers and state governments are now actively considering whether India should follow a similar path. It is understandable that the government wants to act quickly. But policies made in a hurry may ignore India’s realities. The real question is not whether children should be protected, but how they should be protected.

India is also a signatory to the UNCRC, which recognises children as rights-holders with evolving capacities (Article 5) and guarantees rights to information, participation and education (Articles 12, 13, 17, and 28). Yet while the impulse to ban is understandable, prohibition alone cannot address the structural causes of digital harm and may create new risks in a country as socially and digitally unequal as India.

Evidence of harm and the global turn to age restrictions

Global research increasingly links heavy social media use among adolescents with anxiety, depression, self-harm tendencies, body image dissatisfaction and declining attention spans. However, much of this research is drawn from Western populations, while comparable large-scale longitudinal data on Indian children remains limited.

Recently, India’s Economic Survey 2025–26 formally acknowledged rising digital addiction among youth and called for platforms to enforce age verification and age-appropriate defaults, including limits on autoplay features, gambling applications and targeted advertising. India today has over 490 million social media users, with adolescents forming a rapidly growing segment of this population.

A recent nationwide survey found that 49% of urban Indian parents of children aged 9–17 report their children spending three hours or more daily on social media, videos/OTT and online gaming.

Against this backdrop, Australia enacted the Online Safety Amendment (Social Media Minimum Age) Act, 2024, which came into force in December 2025. The law requires social media platforms to take reasonable steps to prevent under-16 users from holding accounts, backed by significant financial penalties for non-compliance. Importantly, the framework includes privacy safeguards and restricts platforms from demanding government identification, leaving implementation details to delegated legislation.

Countries such as Spain, France, the UK, Italy, Greece and Germany are also deliberating comparable measures.

India’s legal momentum

India has already begun moving in this direction. In December 2025, the Madurai Bench of the Madras High Court suggested that the Union government could explore an Australia-like framework. Notably, the case concerned children’s access to explicit online content more broadly, not social media alone.

These discussions also intersect with India’s existing digital governance framework, particularly Rule 3 and Rule 4 of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, which impose due diligence obligations on intermediaries, including requirements to remove unlawful content, maintain grievance redressal mechanisms and implement additional compliance measures for large social media platforms.

India’s data protection framework already addresses some of these concerns. Section 9 of the Digital Personal Data Protection Act, 2023 requires platforms to obtain verifiable parental consent before processing children’s data, restricts behavioural monitoring and targeted advertising aimed at minors and requires companies to prioritise the child’s well-being.

A private member’s bill in parliament proposes prohibiting under-16 users from maintaining social media accounts and places obligations on platforms to adopt highly effective age-verification mechanisms. Several states have initiated parallel efforts: Maharashtra is forming a task force; Goa is examining feasibility; Kerala has invited public inputs; and Andhra Pradesh is considering restrictions on social media use for children under 13, with a decision expected after consultations and possible rollout within 90 days.

Most recently, Karnataka has moved toward a stricter regulatory approach. In its 2026–27 State Budget, the State government announced plans to introduce a ban on social media use for children under the age of sixteen. If implemented, Karnataka could become the first Indian state to impose such a statutory restriction on children’s access to social media platforms.

India’s digital reality

Yet, India is not Australia. In much of the developed world, smartphones and social media are framed as lifestyle choices. In India, they are increasingly tools of necessity.

Nearly half the population lives in rural or semi-urban areas, where digital access enables online schooling, language learning, exposure to career pathways and basic digital literacy. For first-generation learners, a mobile phone may be the only bridge to information beyond their immediate surroundings. For many families, connectivity is not entertainment; it is an opportunity.

Social media also serves as a lifeline for rural adolescents, urban slum dwellers, marginalised youth and differently-abled children seeking peer support and community. Removing that access entirely risks isolating precisely those young people who already sit at the margins.

A blanket ban does not distinguish between harmful engagement and constructive use. It treats every child as equally vulnerable and every interaction as equally dangerous, ignoring the reality that for many, digital platforms are gateways to education, visibility and social mobility.

In a country where digital literacy itself remains uneven, exclusion risks becoming permanent.

Enforcement, privacy and unintended consequences

There are also serious practical concerns. Age verification is notoriously difficult to enforce. Adolescents routinely bypass age gates using fake birthdays or VPNs. Digital rights experts warn that such bans often push young users away from regulated platforms into encrypted and unmoderated corners of the internet, where grooming and extremist content flourish unchecked.

In India, where accounts are frequently created through shared devices or family members, assumptions of individual ownership collapse further. Enforcing age thresholds would likely require verifying nearly every internet user, introducing significant privacy risks. Any such system would also have to satisfy the constitutional privacy framework established in Justice KS Puttaswamy v. Union of India (2017), which requires state measures affecting personal data to meet standards of legality, necessity and proportionality. Such measures also implicate informational privacy protections recognised under Article 21.

While Australia’s law restricts the collection of government IDs, any large-scale age assurance system risks evolving into a surveillance infrastructure.

These concerns also raise constitutional questions. In Anuradha Bhasin v. Union of India (2020), the Supreme Court recognised that access to the internet is integral to the exercise of freedom of speech and expression under Article 19(1)(a) of the Constitution of India. While the case concerned internet shutdowns rather than social media specifically, it affirmed that restrictions on internet access must satisfy the test of proportionality.

There is also the gendered impact. As per the GSMA Mobile Gender Gap Report 2025, women remain 33% less likely than men to use mobile internet. In patriarchal households, age-based mandates are unlikely to produce nuanced compliance. Instead, families may simply confiscate devices from girls altogether, deepening existing digital inequality and cutting off pathways to education and social mobility.

The overlooked dimension: Artificial intelligence

While outrage focuses narrowly on social media, children are increasingly engaging with generative AI tools for emotional and mental health advice. Early research links intensive AI use to cognitive decline, while recent reporting highlights sexualised interactions with minors and alleged links to self-harm in conversational AI systems.

Unlike social media platforms, many generative AI systems currently operate with far fewer child-safety safeguards, despite increasingly serving as informal sources of advice and emotional support for young users.

If child safety is truly the objective, regulation cannot stop at social platforms. It must extend consistently across emerging technologies.

Towards a balanced Indian framework

India does not need a symbolic ban; it needs structural reform. The following suggestions could help build a more balanced regulatory framework:

  • Platforms must be placed under enforceable duties of care, requiring algorithmic risk assessments, limits on addictive design, age-appropriate defaults and restrictions on targeted advertising to minors. These obligations should be overseen by an independent expert regulator, not absorbed into bureaucratic structures vulnerable to political influence.

  • Much of the risk associated with social media arises not merely from access but from engagement-driven algorithms that amplify sensational or emotionally charged content in order to maximise user attention.

  • Competition regulation in digital markets is essential to address engagement-driven business models that monetise children’s attention.

  • Age verification should be proportionate and privacy-protective, not surveillance-heavy.

  • Parents and schools need better tools to guide children’s digital use.

  • Child safety rules should apply to AI systems as well as social media platforms.

An alternative worth exploring is a graduated access model. Instead of a hard ban, children under 13 could be limited to educational tools with strict defaults and parental oversight, expanding to supervised, time-limited access between the ages of 13 and 16, before full autonomy at 16. This approach recognises children as developing citizens rather than passive subjects, balancing autonomy with protection.

For instance:

  • Under-13 accounts: restricted to a verified educational layer (YouTube Kids-style, no algorithmic feed, strict time caps).

  • Ages 13–16: supervised public profiles with daily usage limits enforced at the device or platform level.

  • Age 16+: full autonomy.

Countries such as South Korea and China have implemented time-restricted measures for young users. South Korea’s former “shutdown” (or “Cinderella”) rule (2011–2021) barred late-night online gaming for minors, while China’s 2021 rules limit under-18s to roughly three hours of gaming per week. The European Union’s Digital Services Act has instead introduced platform-level child-safety obligations.

Australia’s experience, from age restrictions to its News Media Bargaining Code, shows that sustained regulatory pressure can reshape platform behaviour. While no single country can regulate the internet alone, coordinated approaches can still force global change.

Conclusion

A social media ban offers the comforting illusion of control. It allows leaders to show they “did something” after tragedy. But the cost of such simplicity may be borne by the very children it seeks to protect, especially girls and those from disadvantaged communities for whom digital access is not leisure but lifeline.

India’s task is harder than prohibition. It requires confronting platform incentives, strengthening accountability, protecting privacy, investing in research and building digital literacy in a deeply unequal society. It also requires recognising that children’s right to dignity, education and participation extends into digital spaces.

For India, protecting children cannot mean removing them from the digital world altogether. It must mean building safer digital environments in which young people can learn, participate, and grow.

Shivam Jadaun is a Delhi-based lawyer and tech consultant specialising in technology law, AI, and tech policy.

Bar and Bench - Indian Legal news
www.barandbench.com