
What AI is taking from junior lawyers

The deeper question - whether a lawyer formed in the age of AI develops the same judgment as one formed without it - remains open.

Yajur Mittal

Every lawyer who has survived their first few years will recognise this, probably with a wince.

You draft the same type of indemnity clause for the tenth time that month. You spend 3 days reading through 400 pages of documents to produce a 1-page summary your senior glances at and sets aside. You draft a plaint and it is returned more red than black. And then, after many days that roll into nights and nights that roll back into days, somewhere around the end of the second year, if you are paying attention, something changes.

You begin to develop instincts you were never explicitly taught. You have read enough documents to sense, almost immediately, when something is missing and you know where to look. You have drafted enough pleadings badly enough to understand where the narrative breaks, where the story falls short, where the case is papering over weak facts.

Nobody gave you that. The work gave it to you: doing it from scratch, sometimes badly, repeatedly, until you started doing it less badly. Legal judgment was built through consistent engagement and practice. That was the training.

That process is now under serious, and largely unacknowledged, threat.

AI has arrived at the door, and it does the work most junior lawyers do.

Used well, it does something genuinely useful. It removes the cognitive drag of mechanical work and rote tasks and forces engagement at a higher level earlier on. A junior lawyer is no longer drafting clause 14.3 from scratch but reviewing whether the AI's clause 14.3 is fit for purpose for that one client, that particular deal and its associated risks. It is more sophisticated, in theory.

But here is the question worth asking: does that different kind of work develop the same instincts?

Earlier, the mechanism was friction: the productive discomfort of not knowing, of sitting with a problem before the answer was available, of sifting through research and commentaries and building the answer from scratch. That rigorous struggle is what developed the tolerance for uncertainty, the feel for where a case is thin, the ability to sense what is missing, and the confidence that comes from having done something badly, understood why and improved. That struggle is how critical legal skills and judgment were built.

A junior lawyer who reviews AI outputs rather than producing the underlying work from scratch is doing something cognitively different. Whether that different activity develops the same depth of instinct over time, and whether interrogating a machine's reasoning builds legal judgment the way constructing that reasoning yourself once did, is genuinely unknown.

The tools are too new, the lawyers being formed by them are still in their first years and the results will not be visible for a decade. It is possible that the higher-level engagement AI enables produces a richer kind of development. It is equally possible that something essential is lost when the friction disappears. Nobody knows. And that is precisely the problem: the profession is not treating it as a question worth asking.

This concern is not speculative. A 2025 MIT Media Lab study found measurably weaker brain connectivity in regions associated with reasoning and memory among participants who relied on AI for analytical tasks, compared to those who worked through the same problems by themselves first.

Consider what happened to chess. It was among the first fields to be genuinely and completely outpaced by AI. Engines surpassed the best human players in the late 1990s and have only grown stronger since. The settled answer in chess training is: use the engines. Study with them. Build your openings with them. Analyse your games through them. Almost every elite player today has grown up with engines as a central part of their development.

And then there is Gukesh Dommaraju, the current world champion. He was 18 years old when he claimed the title in 2024, the youngest undisputed champion in history. His trainer, Vishnu Prasanna, made an unusual call early in his development: keep the engines away, and let Gukesh calculate his own moves and sit with positions before introducing AI into his training.

Draw from that what you will. Chess and law are different disciplines. But it is worth sitting with the discomfort of the question. The reigning world champion is someone who, at a critical period in his formation, was kept away from the machine and who credits that absence for the very quality that made him exceptional.

The legal profession is moving fast toward integration. That is probably right. But whether integration during formation produces the same depth of judgment as integration after struggle is a question that has not been answered. Not even in chess.

The argument here is not against using AI, nor is it a plea to preserve the misery for its own sake. A lawyer who treats non-adoption as a principled stance will simply be slower, less competitive and eventually priced out of the work. The argument is about when and how, and it is firmly against letting AI do the thinking and the reasoning.

The Gujarat High Court, in a policy released this month, drew exactly this line for judges. The policy prohibits AI from being used in judicial reasoning, decision-making, order drafting, or any substantive adjudicatory process. But it expressly permits AI for legal research, retrieval of judgments, identification of precedents and preparatory intellectual work. The Court put it plainly: the substantive legal analysis and reasoning must remain entirely human.

This principle cuts just as sharply for lawyers in their formative years. AI can retrieve, summarise and identify. The moment a junior lawyer outsources the thinking to the machine, without first formulating an independent view and without interrogating the AI's output, they are not using a tool. They are engaging in cognitive offloading - the erosion of analytical capacity that happens when the mental work is consistently delegated rather than done. In other words, they are skipping the only part of the work that might still build a lawyer.

A word of fairness: AI in its current form is not infallible. It hallucinates, it cites cases that do not exist, it misreads context and produces confident nonsense with impressive fluency. For now, someone still needs to check the output, and at the first level, that someone is usually the most junior person in the room. There is, in that sense, a temporary reprieve built into the technology's current limitations. The accuracy gap is, however, closing faster than most people in the legal profession are willing to acknowledge.

What comes next requires deliberate effort. New models of training must be built consciously to include structured mentorship. The answer is not to put the tools down; it is to use them with enough deliberateness that the thinking still happens. That means drafting the analytical skeleton before the AI sees the problem. It means interrogating the reasoning in the output and stress-testing the arguments, not just accepting them. None of this will happen on its own.

The deeper question - whether a lawyer formed in the age of AI develops the same judgment as one formed without it - remains open. By the time the answer becomes clear, the generation it concerns will already have been made.

Yajur Mittal is a dual-qualified disputes lawyer and Partner at Strata Law.
