When AI gets it wrong, who pays?

The liability vacuum Indian founders cannot afford to ignore.
Rashi Jyotishi, Pranay Sawant

A Belgian man followed the advice of an AI chatbot and died by suicide. After six weeks of intensive exchanges with a chatbot named “Eliza” on the application Chai, the man, a 30-year-old father of two who had grown increasingly anxious about climate change, took his own life. His widow later told the Belgian newspaper La Libre: “Without these conversations with the chatbot, my husband would still be here.” Belgium’s then Secretary of State for Digitisation described it as “a serious precedent that must be taken very seriously.”

In the United States, a lawyer filed a court brief citing legal precedents, every one of which had been fabricated by an AI. In Fletcher v. Experian Information Solutions, Inc., No. 25-20086 (5th Cir. Feb. 18, 2026), the court fined appellate counsel Heather Hersh USD 2,500 for submitting a brief filled with fabricated content, finding 16 fake quotations and 5 serious misstatements of law and fact. Chief Judge Jennifer Elrod held this conduct to be “unbecoming of a member of the bar” under Federal Rule of Appellate Procedure 46(c). The court made it clear that relying on AI is not a valid excuse for such errors.

These are not cautionary tales from the distant future. They are today’s headlines.

Yet, when we speak to Indian startup founders using AI across healthcare, legal tech, fintech, or e-commerce, a clear gap appears. Most focus on building and scaling their product, but very few consider what happens when their AI gets it wrong. Until recently, there was little regulatory clarity. But since February 2026, that silence is being replaced by a wave of high-impact, though still fragmented, regulation.

The international reckoning

Across major jurisdictions, a core legal question is emerging: when AI causes harm, who is responsible?

The European Union moved first and most ambitiously. The EU AI Act, effective from August 2024 and being implemented in phases through 2026, introduces a risk-based framework. High-risk AI systems, especially in healthcare, employment, education, and critical infrastructure, must meet strict compliance standards, including documentation, testing, and human oversight. Non-compliance can lead to penalties of up to €35 million or 7 per cent of global turnover.

Alongside the AI Act, the EU revised its Product Liability Directive in October 2024 to cover software and AI systems. This means that if an AI system causes harm, liability can arise without proving negligence. Responsibility may extend to developers, importers, and deployers. With the proposed AI Liability Directive withdrawn in February 2025, this revised framework now plays the central role in consumer protection.

In the United States, the approach is more fragmented. There is no dedicated federal AI liability law; instead, existing legal principles are applied, such as product liability, platform immunity under Section 230, and professional negligence. Courts are actively addressing these issues. In Fletcher v. Experian (2026), the court confirmed that current legal rules are sufficient to address misuse of AI. Researcher Damien Charlotin’s publicly available database now tracks over 1,353 judicial decisions on AI-hallucinated content worldwide, with new cases added at a rate of five to six per day as of April 2026.

India’s patchwork, now rapidly hardening

India’s response is no longer just a patchwork of statutes designed for a pre-AI era. As of early 2026, it is a rapidly hardening, if still fragmented, compliance regime.

The IT (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, effective from 20 February 2026, are a major development. They formally recognise Synthetically Generated Information (SGI) under Rule 2(1)(wa), which covers AI-generated or AI-altered audio-visual content that appears real. This means startups can no longer treat AI disclosures as optional: if synthetic content is not properly labelled, there may be a compliance issue even if the content itself is not harmful.

Transparency is now a legal requirement. Visual synthetic content may need clear watermarks, while audio content may need disclaimers. The Rules also introduce strict takedown timelines, including urgent action for government-flagged unlawful content and non-consensual intimate imagery. This pushes startups to build compliance into the product from the beginning, instead of reacting only after a problem arises.
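
To make this concrete, a founder shipping image-generation features might stamp a visible disclosure onto every output. The sketch below uses the Pillow imaging library; the label wording, size, and placement are our illustrative assumptions, since the Rules prescribe prominence rather than an exact format.

```python
# A minimal sketch, assuming the Pillow imaging library; the label text,
# size, and placement are illustrative, not a format prescribed by the Rules.
from PIL import Image, ImageDraw

def label_synthetic_image(path_in: str, path_out: str) -> None:
    """Stamp a visible 'AI-generated' disclosure onto an image."""
    img = Image.open(path_in).convert("RGB")
    draw = ImageDraw.Draw(img)
    width, height = img.size
    # Solid strip along the bottom edge so the label stays legible
    draw.rectangle([(0, height - 28), (width, height)], fill=(0, 0, 0))
    draw.text((8, height - 22), "AI-GENERATED CONTENT", fill=(255, 255, 255))
    img.save(path_out)
```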

For founders, the new Rule 2(1B) gives some protection to platforms that act responsibly. Earlier, founders feared that proactive scanning or removal of AI-generated misinformation could affect their safe harbour protection under Section 79 of the IT Act, 2000. The amended rule clarifies that automated removal of unlawful SGI, including through AI tools, will not by itself be used against the platform. This encourages responsible moderation.

The foundational problem with Section 79 itself remains unresolved. The IT Act’s definition of “intermediary” was conceived in an era of passive platforms, and as the Supreme Court recognised in Shreya Singhal v. Union of India, (2015) 5 SCC 1, safe harbour protections are conditioned on a platform not initiating or selecting the impugned content. This creates uncertainty for AI products. India’s AI Governance Guidelines also recognise that safe harbour may not clearly apply where AI systems generate or modify content. As a result, developer and deployer liability remains unclear.

The Digital Personal Data Protection Act, 2023 improves India’s data protection framework through consent, data minimisation, and breach reporting duties. But it is still mainly a data law. It does not clearly decide liability where AI causes physical, financial, or reputational harm. If an AI tool creates a false credit report, gives an incorrect medical suggestion, or produces dangerous legal advice, the DPDP Act may not fully address the harm because it regulates data handling, not the consequences of AI outputs.

The ‘deficiency in service’ trap

While the IT Rules deal mainly with content, the Consumer Protection Act, 2019 may become the key law for AI-related harm in India. If an AI tool gives incorrect healthcare, financial, or legal guidance, users may argue that the platform provided a “deficient service” under Section 2(11) of the Act.

This is especially relevant for high-risk sectors. For example, if an AI symptom-checker gives an incorrect medical suggestion, the platform may not be able to rely only on a “beta version” disclaimer. Regulators and consumer forums are likely to ask whether the company had proper safeguards, including a Human-in-the-Loop review mechanism.

The level of human oversight should depend on the risk. A chatbot suggesting a movie may not need human review. But a chatbot suggesting a medical dosage, a legal strategy, or a financial action needs a much stronger safety layer. Many founders have not yet built this distinction into their product design, and that is where the real liability risk begins.
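
As a rough illustration of that distinction, the routing logic might look like the Python sketch below. The domain list, risk tiers, and review queue are hypothetical design choices, not requirements drawn from any statute.

```python
# A minimal sketch of risk-tiered human-in-the-loop (HITL) routing. The
# domain list, tiers, and review queue are hypothetical design choices.
from dataclasses import dataclass
from enum import Enum

class Risk(Enum):
    LOW = "low"    # e.g. a movie recommendation
    HIGH = "high"  # e.g. medical dosage, legal strategy, credit action

HIGH_RISK_DOMAINS = {"medical", "legal", "financial"}

@dataclass
class AIOutput:
    domain: str
    text: str

def classify(output: AIOutput) -> Risk:
    return Risk.HIGH if output.domain in HIGH_RISK_DOMAINS else Risk.LOW

def dispatch(output: AIOutput, review_queue: list) -> str | None:
    """Release low-risk outputs immediately; hold high-risk outputs
    until a human reviewer approves them."""
    if classify(output) is Risk.HIGH:
        review_queue.append(output)  # human sign-off required before release
        return None
    return output.text
```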

The practical stakes for Indian founders

For Indian startups using AI in consumer-facing products, the risk is immediate, not theoretical. A wrong medical suggestion, faulty legal advice, or biased credit decision can cause real harm, but India’s current framework still does not clearly define who is liable and to what extent.

In this gap, contracts are carrying most of the burden. Terms of use, disclaimers, liability caps, and indemnities are being used as the practical AI risk framework. But these protections have limits. If a user suffers measurable harm and the startup had no reasonable safeguards, a disclaimer alone may not protect the company.

The expected Digital India Act may shift India from broad safe harbour protection to a graded responsibility model. This would make traceability, audit trails, human oversight, and clear allocation of responsibility across the AI value chain critical. Founders should build these controls now, so they can identify whether a failure occurred at the model level, application level, or user-deployment level.
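
As one illustration, a minimal audit-trail record could attribute every AI output to a model, an application version, and a human reviewer. The field names and JSON-lines store below are assumptions for the sketch, not any prescribed format.

```python
# A minimal sketch of a per-output audit record; field names and the
# JSON-lines store are assumptions, not a statutory format.
import datetime
import hashlib
import json

def audit_record(model_id: str, app_version: str,
                 prompt: str, output: str,
                 reviewer: str | None) -> dict:
    """One traceable record per AI output, attributing it to the model,
    the application version, and (optionally) a human reviewer."""
    return {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_id": model_id,        # model-level attribution
        "app_version": app_version,  # application-level attribution
        "prompt_hash": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_hash": hashlib.sha256(output.encode()).hexdigest(),
        "human_reviewer": reviewer,  # None means no HITL sign-off
    }

def append_audit(log_path: str, record: dict) -> None:
    """Append the record to an append-only JSON-lines log."""
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
```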

The awareness gap is also significant. AI founders in the US, UK, EU, Singapore, UAE, and Canada are already considering risk tiers, compliance timelines, insurance, and liability allocation. Many Indian founders building equally sophisticated products are still treating these issues as secondary.

India is moving toward stricter AI governance. The 2026 IT Rules amendments are an early signal. Startups that wait for the law to fully settle may face serious legal exposure, loss of user trust, and investor concern.

The way forward

India needs clarity, not necessarily a comprehensive AI Act, but at minimum a defined liability framework. Three things would move the needle: formal adoption of the value chain model from the 2025 Governance Guidelines; explicit safe harbours under the Consumer Protection framework for AI meeting certified safety standards; and sectoral mandates from RBI, IRDAI, and SEBI, because risk is highest precisely where the law is most silent.

Until then, founders should be conducting AI-specific risk assessments, building HITL mechanisms for high-stakes outputs, and reviewing their contractual liability architecture, not because the law compels it today, but because it soon will.

The question is not whether AI will cause harm in India. It already is. The question is whether, when it does, Indian law will know whose door to knock on.

About the author: Rashi Jyotishi is Chief Compliance Officer and Pranay Sawant is an Associate at Outsource 360 Business Solution.
