
What the India AI Impact Summit means for Indian legal practice

What the profession still lacks is structured, enforceable guidance on how lawyers must interact with AI tools before a matter reaches the bench at all.

Manoj Rahul

On February 17, 2026, while Bharat Mandapam hosted what commentators described as the largest AI summit in history, the Supreme Court of India delivered a warning of an entirely different kind.

A Bench of Chief Justice Surya Kant and Justices BV Nagarathna and Joymalya Bagchi flagged what the Chief Justice called an "alarming" trend of lawyers filing petitions drafted with AI tools that cite non-existent judgments. Justice Nagarathna pointed to a petition that had placed before the Court a case titled "Mercy vs Mankind" as a binding authority. No such judgment exists. CJI Kant added that in Justice Dipankar Datta's court, "not one but a series of such judgments were cited", all fabricated. Justice Bagchi separately lamented that even where real judgments were cited, the quoted passages were sometimes invented, creating what Justice Nagarathna described as "an additional burden on the part of the judges", who must now independently verify basic citations before engaging with a petition's merits.

Outside the same courtroom building, 88 countries were endorsing the New Delhi Declaration at the India AI Impact Summit, pledging inclusive, trusted and human-centric AI for all of humanity. The collision of these two events on the same day is not ironic; it is instructive. India is simultaneously announcing its ambition to lead global AI governance and confronting, in its own apex court, the first serious governance failure that agentic AI has produced in the legal profession. Both realities must be understood together.

A worrying pattern

The Supreme Court's February observations were not a spontaneous reaction. They arrived as the third in a sequence of escalating judicial rebukes over AI-generated fabrications in legal proceedings. In December 2025, a Bench of Justices Dipankar Datta and AG Masih expressed severe displeasure after AI-generated fictitious case laws surfaced in a commercial dispute, describing the error as "grave" and "terrible". The Bombay High Court had gone further still and imposed costs on a litigant after AI-generated fake citations were discovered in submissions, underscoring that professional negligence cannot be excused simply because a technology produced the error.

The Artificial Intelligence Advisory Board, finalising its report in 2025, had identified 125 AI tools globally capable of improving judicial efficiency. It framed guidelines permitting AI use for case listing, docket analytics, legal research and translation. These are legitimate and valuable applications. What the guidelines did not anticipate was the scenario now before the courts: advocates using generative AI not to assist research, but to substitute for it, producing petitions in which citations, quotations and even entire judgments are products of machine hallucination rather than judicial record.

The Chief Justice's response, urging the Bar to verify its research, was a pointed lesson. The problem is that a judicial rebuke, however firm, is not a governance framework. Former Chief Justice BR Gavai had observed that justice involves "ethical considerations, empathy and contextual understanding" that algorithms cannot replicate. What the profession still lacks is structured, enforceable guidance on how lawyers must interact with AI tools before a matter reaches the bench at all.

Three fault lines the summit exposed

I. Professional responsibility and the duty to verify

The Bar Council of India (BCI) Standards of Professional Conduct and Etiquette, framed long before large language models existed, are entirely silent on AI delegation. The duty of competence has always required an advocate to verify every authority placed before a court. That obligation has not changed. What has changed is that an AI system can now generate a citation, a quotation and even a plausible-sounding case name with the same fluency and apparent confidence with which it states verified facts, making the error invisible to a lawyer who does not independently check.

Comparable jurisdictions have moved to address this. The American Bar Association's Formal Opinion 512, issued in July 2024, held that lawyers using AI tools must fully discharge their obligations of competence, confidentiality and supervision, regardless of how the tool operates or what it claims to produce. The Bar Council of England and Wales updated its AI guidance in November 2025, warning explicitly of the dangers of AI misuse in advocacy and stressing that supervisory duties apply whether the tool is a general-purpose chatbot or a specialist legal research platform. The BCI has issued no comparable guidance. The courts are filling that vacuum through admonition, which is neither systematic nor sufficient.

II. Data sovereignty and confidentiality

Agentic AI systems require data to function. A system tasked with drafting a petition or conducting legal research must ingest the client's materials, and the infrastructure on which that ingestion occurs frequently sits on servers outside India. The Digital Personal Data Protection Act, 2023 is still maturing, and its interaction with attorney-client privilege in AI processing contexts is entirely unaddressed in existing bar rules. The Delhi Summit announced significant investment in domestic AI infrastructure, including a commitment of USD 100 billion for data centres on Indian soil. Until that infrastructure is operational and legally certified for professional use, practitioners transmitting privileged client materials to foreign-hosted AI systems do so without any clear professional guidance on whether they have met or breached their confidentiality obligations under Indian law.

III. Access to justice and harm at scale

The summit's central theme of AI for all carries genuine resonance in a country where legal services remain deeply inaccessible to most citizens and where more than 5.4 crore cases were pending across courts as of February 2026. AI tools that extend affordable legal assistance to individuals and small enterprises serve a legitimate public interest and address a real structural gap in India's justice system. The Supreme Court's February observations, however, make plain that the same tools, deployed without verification disciplines, cause systemic harm. A fictitious citation in a single Supreme Court petition misleads judges and wastes judicial time. The same error, replicated through an agentic access-to-justice tool across thousands of district court filings, would represent a compounding institutional crisis, one that the Legal Services Authorities Act was designed to prevent rather than enable.

What must follow

The New Delhi Declaration is aspirational and non-binding. The Frontier AI Commitments adopted at the summit are voluntary instruments crafted for governments, not for bar councils or courts. The legal profession cannot look to multilateral declarations to answer questions that the Supreme Court has placed squarely before it in the language of professional duty. Structural responses are needed from within the profession itself and they are needed now.

The BCI may constitute an expert committee, drawing in senior practitioners, legal technologists, data protection specialists and consumer advocates, to issue enforceable guidance covering four things at minimum.

First, the duty of competence as applied specifically to AI-assisted legal drafting, with a clear requirement that every authority sourced through an AI tool be independently verified against an authorised database before filing.

Second, mandatory disclosure obligations whenever AI has played a material role in preparing a submission.

Third, confidentiality obligations governing the transmission of privileged client material to AI systems.

Fourth, the consequences, in conduct and costs, that may follow from filing AI-generated fabrications before a court.

The BCI's silence is difficult to justify when three successive Supreme Court benches have now described what is happening in filings as alarming, grave and absolutely uncalled for.

High Courts may consider practice directions requiring disclosure of AI use in filings, modelled on emerging practice in Pennsylvania and New York, where courts now mandate explicit disclosure and impose annual AI competency obligations on practitioners. The Law Commission may take up AI-generated work product and AI-assisted evidence as a discrete area of law reform.

Finally, the organised Bar should engage with the IndiaAI Safety Institute, which the summit positioned as India's institutional home for AI benchmarking, to develop legal-specific accuracy standards covering citation reliability, hallucination rates and explainability requirements for AI tools used in legal practice.

Manoj Rahul is a final year B.A. LL.B. student at Damodaram Sanjivayya National Law University (DSNLU), Visakhapatnam.
