Between Brussels and Washington, can India script its own AI destiny?

India needs to harmonise its domestic standards with international norms, while preserving the flexibility to address uniquely Indian challenges.
Law Library AI

As the world races to define the rules for artificial intelligence (AI), two global superpowers are pulling the future in opposite directions.

In Europe, the regulatory hammer is coming down hard, with the EU AI Act set to transform how AI is developed, deployed and governed, especially with its provisions on General Purpose AI (GPAI) becoming enforceable by August 2025. Meanwhile, across the Atlantic, the United States is marching to the beat of a different drum. President Donald Trump’s newly unveiled AI Action Plan sends a powerful message: unleash innovation, cut the red tape and fund only those who don’t slow down AI with “burdensome” regulation.

In this tug of war between cautious regulation and unbridled innovation, where does India, an emerging tech superpower, stand? And more importantly, how should it move forward?

The Indian context

India is at a defining moment in its digital journey. Home to one of the world’s youngest and most tech-savvy populations, a thriving startup ecosystem and a vast pool of engineering talent, India has all the ingredients to become an AI powerhouse. Estimates suggest AI could contribute up to 20% of India’s GDP by 2025. Yet, paradoxically, India currently lacks a dedicated AI law. The absence of a concrete regulatory framework has earned it the tag of a “soft-touch jurisdiction” - one that attracts global companies eager to develop and deploy AI models without facing the friction of compliance-heavy regimes like the EU.

But that status is a double-edged sword. Without clear guardrails, India risks becoming a playground for experimental AI models that can’t pass muster in more tightly regulated regions. There’s also the danger of homegrown systems infringing on privacy, deepening societal biases, or creating opaque automated decision-making processes with no recourse for the common citizen. On the flip side, jumping head-first into overregulation could snuff out the very innovation India wants to lead.

This is where India must make a bold yet balanced choice. It doesn’t need to choose between Brussels and Washington; it needs to learn from both and build something distinctly Indian.

Take the EU AI Act. It offers a robust, rights-based framework rooted in fundamental principles: transparency, human oversight and risk-based classification. High-risk AI applications like biometric surveillance or predictive policing face stringent compliance obligations, while lower-risk systems face fewer requirements. For GPAI models like large language models and image generators, the Act mandates technical documentation, risk mitigation plans and even summaries of the training datasets used. This ensures that AI isn’t just efficient; it’s safe, fair and explainable.

Now contrast this with Trump’s AI vision. His administration proposes stripping back regulations, pushing to pre-empt state-level AI laws that create friction, and tying federal funding to the absence of “burdensome” regulation. The emphasis is clear: keep the US globally dominant in AI by cutting the “bureaucratic fat” and betting on the private sector’s ability to self-regulate. In a world where speed often beats perfection, this model might well supercharge short-term innovation. But it also opens the floodgates to ethical blind spots, opaque decision-making and unchecked surveillance.

How to take a balanced approach

India’s response must be nuanced. The goal should not be to copy either model, but to create a regulatory framework that protects citizens without paralysing innovation. Here’s how India can do that:

First, a risk-based approach makes practical sense. Like the EU, India can identify high-risk use cases and regulate them tightly. AI systems used in policing, financial credit scoring, or public welfare schemes should undergo independent audits and require human oversight. But simpler applications, like AI for chatbots or translation, should enjoy regulatory breathing space.

Second, India should reward responsible innovation. Instead of creating a minefield of approvals, it can introduce regulatory sandboxes where startups and developers can test their AI systems in a controlled environment with light-touch oversight. AI models built around fairness, explainability and data privacy should be incentivised through tax breaks, public procurement preferences, or fast-track certification.

Third, India doesn’t need a giant standalone AI law just yet. It can embed AI accountability mechanisms into existing laws like the forthcoming Digital India Act or amendments to the Information Technology Act. These can introduce basic standards around algorithmic transparency, data use disclosures and redressal mechanisms for automated decisions.

However, this also demands that India gets serious about data governance. The Digital Personal Data Protection (DPDP) Act, 2023 (not yet enforced) provides a decent foundation, but AI involves more than personal data. There’s the challenge of non-personal data, inferred data, synthetic datasets and the need for real-time consent frameworks. AI models trained on massive public datasets, sometimes scraped without authorisation, pose serious legal questions. Addressing these will be vital to ensure that innovation doesn’t come at the cost of consent.

Beyond rules, India needs to build regulatory muscle. Who will audit these models? Who will check for algorithmic bias or harm? Who will even understand how these systems function under the hood? India needs to invest in institutional capacity, from AI officers within existing regulators like the Ministry of Electronics and Information Technology (MeitY) or the Telecom Regulatory Authority of India (TRAI), to perhaps a future AI Governance Authority with the power to investigate, audit and enforce compliance.

Then there’s the global angle. As Indian companies begin exporting AI solutions or integrating with global platforms, they’ll inevitably come under the scanner of EU AI Act obligations, US state AI laws and the requirements of whichever countries their AI is deployed in. If India wants to position itself as a global hub for ethical AI, it needs to harmonise its domestic standards with international norms, while preserving the flexibility to address uniquely Indian challenges.

So what’s the way forward?

India must walk the tightrope between innovation and protection. Over-regulation at this stage could handicap its startups and stifle experimentation. But a hands-off approach risks unleashing AI systems with real-world harms - discriminatory algorithms in job hiring, opaque systems in welfare distribution or biased policing tools.

The path ahead should focus on four key principles: (1) Regulate what’s risky, not what’s new; (2) Build agile, sector-specific rules rather than one-size-fits-all laws; (3) Empower institutions to understand and govern AI meaningfully; and (4) Align globally but adapt locally.

The EU and US may be setting the tone, but India has the chance to set the example. A model where rights aren’t sacrificed at the altar of progress, and innovation isn’t slowed by fear. A model that protects the people, powers the economy and proves that smart regulation is not the enemy of scale, but its greatest ally.

The next 24 months will be pivotal. With the EU AI Act taking effect and global AI policies intensifying, India must move quickly yet thoughtfully. If it gets this right, it won’t just be catching up to the AI giants; it could define how the rest of the world approaches this extraordinary technology.

Kartikeya Rawal is the Director - Legal at Swiggy Limited.

Bar and Bench - Indian Legal news
www.barandbench.com