

In the Phaedrus, Socrates warns that writing, far from aiding understanding, may offer only the semblance of wisdom. He recounts the myth of Theuth, the inventor of writing, who presents his art to the Egyptian king Thamus and promises that it will improve memory. The king replies that it will instead create forgetfulness, for learners will rely on external marks rather than their own recollection.
Socrates’ concern was not with writing as a tool, but with its effect on thought. By substituting reading for dialogue, he feared, people would mistake the familiarity of words for genuine knowledge.
In our own time, this ancient anxiety has returned with new force. Artificial intelligence now generates arguments, opinions and even simulated oral advocacy that appear reasoned but are in fact products of linguistic prediction rather than understanding. Just as writing could record the appearance of knowledge without reproducing the act of knowing, AI can mimic legal reasoning without engaging in judgment or responsibility.
This concern is no longer abstract. In July 2025, Adam Unikowsky, a prominent US appellate lawyer, conducted a public experiment using Claude 4.0 Opus, a large language model developed by Anthropic. He supplied it with the complete record of Williams v. Reed, including briefs, opinions and hypothetical questions from oral argument, and asked it to respond as counsel would. With minimal prompting, the model generated oral advocacy that Unikowsky described as “outstanding.” Rendered through text-to-speech software such as ElevenLabs, the responses even acquired a human-like cadence and strategic rhythm. At one point, when asked a question about the Twenty-First Amendment, the system produced three coherent and distinct answers, a feat Unikowsky admitted he could not have achieved himself in six hours. He concluded that AI may soon extend beyond research and drafting to the domain of oral advocacy itself.
What makes his account striking is its restraint. It is neither promotional nor speculative, but a factual report of an experiment in real conditions. Precisely because it lacks exaggeration, its implications are difficult to dismiss. In jurisdictions like India, where oral argument remains central to appellate and constitutional litigation, the idea that a machine might replicate not only the logic but also the tone, timing and responsiveness of argument raises a profound question: What remains uniquely human in the role of the advocate? If machines can now “speak” legal language fluently but without consciousness or accountability, what becomes of those who speak for the law?
This article examines three foundational issues raised by AI in legal practice: (i) the nature of reasoning and rhetorical adaptability, (ii) the fragility of confidentiality and privilege, and (iii) the problem of responsibility and accountability. Through these inquiries, we explore whether machines can truly engage in legal argument, or only simulate it.
At the appellate level, advocacy is not merely about finding answers but about reasoning through uncertainty. Lawyers operate in spaces where the law is unsettled, precedents conflict and judges’ interventions reshape the argumentative landscape. This process is inherently recursive: advocates return to earlier points, reformulate positions and calibrate their tone to preserve credibility. Every such choice reflects memory, strategy and intuition - capacities that arise from human judgment, not computation.
Current AI systems, particularly large language models like GPT or Claude, do not reason through uncertainty in this way. They generate responses by predicting statistically probable word sequences based on patterns in large-scale training data. Fluency can, therefore, mask superficiality: what sounds reasoned may simply be statistically consistent. These systems cannot distinguish between rules that are binding and those that remain contested, nor can they appreciate the moral or institutional consequences of choosing one interpretation over another. Most importantly, they cannot intend a strategy. When a human advocate revises an argument, it signals adaptation - an awareness of context and risk. When a model revises its answer, it merely varies its word patterns. There is no internal self that chooses one course over another. If two AI systems were to debate, they could continue indefinitely, lacking any recognition of closure, emphasis, or concession.
Real advocacy, however, depends as much on restraint as on eloquence. A pause, a moment of silence, or an admission of weakness can strengthen credibility. Such judgment arises from lived experience and self-awareness, qualities that machines lack. AI can reproduce the surface rhythm of reasoning but not its inner movement; it cannot feel uncertainty or recognise meaning in hesitation. What it performs is linguistic coherence, not reflective thought.
Legal systems rest on the presumption that speech is attributable. Every statement in court - whether a filing, a representation, or an argument - carries a name and a signature. This link between words and person underlies professional discipline, malpractice liability and the moral authority of the advocate. That link begins to blur when the words are generated by a system that lacks understanding, intent or agency.
At present, courts and regulators implicitly treat AI tools as extensions of the user, similar to grammar checkers or research databases. Under this approach, responsibility lies entirely with the human operator. This makes sense when AI is used for mechanical assistance, but it becomes problematic when the system generates substantive reasoning. When a model constructs arguments, identifies precedents, or simulates oral advocacy, it no longer merely assists thought; it substitutes for it.
This creates a new kind of accountability gap. A human lawyer can justify their reasoning, cite their sources and correct their errors. A language model cannot. Its outputs are generated not from knowledge but from probabilities. This opacity gives rise to the phenomenon of “hallucination,” in which models confidently produce incorrect or even fictitious cases, as in Mata v. Avianca, Inc., where attorneys were sanctioned for citing AI-fabricated precedents. Such errors are not moral lapses by the machine; they are structural consequences of its design.
Existing liability doctrines are ill-suited to this scenario. In professional negligence, the standard is what a “reasonable practitioner” would do. But how can that standard apply when a practitioner uses a system whose internal logic is inaccessible and whose accuracy depends on how a prompt is phrased? Similarly, in copyright law, authorship presupposes creative control. Yet when users cannot predict or explain what an AI will produce, control becomes largely nominal.
Artificial intelligence has begun to imitate the form of legal reasoning, but imitation is not understanding. What it produces may sound like argument, but it lacks the inward deliberation that gives advocacy its moral and intellectual weight. Legal practice depends not only on linguistic coherence but on the capacity to weigh consequences, to anticipate effects and to stand by one’s words. These are acts of conscience, not computation.
When machines generate speech without awareness or accountability, the foundational link between language and responsibility weakens. The authority of advocacy does not come from eloquence alone, but from the human act of reasoning in public and accepting the consequences of that reasoning. To separate argument from responsibility is to hollow out the ethical structure that makes law possible as a shared enterprise of judgment.
The question, therefore, is not whether AI can assist lawyers (it already does), but whether it can ever reason. Until a system can understand what it means to argue, to choose and to answer for its choices, it will remain a tool rather than a participant in law’s dialogue. The challenge before the legal community is not to resist innovation, but to ensure that technological progress does not erode the moral foundation of legal speech. Machines may refine the process of argument, but they cannot yet share in its purpose.
Harshvardhan Mudgal is a final-year student at MNLU, Mumbai.
Kirti Goel is a judicial clerk at the Supreme Court.