
Gujarat High Court bans use of AI by judges, court staff in judicial work

The policy prohibits the use of AI tools in the judicial decision-making process and fastens personal liability on concerned judges or court officers for AI-assisted outputs.

Debayan Roy

The Gujarat High Court has released a policy barring the use of artificial intelligence tools by judges and court staff in the State for drafting orders, judgment preparation or any other form of judicial decision-making.

The Policy on Use of Artificial Intelligence in Judicial and Court Administration also fastens personal liability on concerned judges or court officers for AI-assisted outputs.

The policy states that artificial intelligence "shall never be employed for any form of decision-making, judicial reasoning, order drafting, judgment preparation, bail/sentencing considerations, or any substantive adjudicatory process."

The policy has been framed under Articles 225 and 227 of the Constitution and is anchored in the right to fair hearing under Article 21.

It applies to all judicial officers, court staff, legal assistants, interns and para-legal volunteers engaged with the High Court of Gujarat and the district judiciary.

Under the policy, using AI to arrive at, determine or substantially influence any finding of fact, finding of law, or operative order in any judicial proceeding is expressly prohibited, even if such output is subsequently reviewed by a judge.

The policy also bars the use of AI for sorting or classifying evidence, organising evidentiary material, assessing credibility, filtering relevance, summarising depositions or testimony, or any task involving the evaluation or categorisation of proof.

The policy outlines a few permissible areas where AI can be used for assistive work, namely:

Legal research: Artificial intelligence may be employed for legal research, retrieval or analysis of judgments, extraction of ratio decidendi, identification of precedents, statutory interpretation, or any preparatory intellectual work supporting adjudication, subject to full human oversight and verification by the user. Such research must be confirmed by comparing the AI-generated information against approved law journals reporting the case law.

Administrative tasks: Code generation and automation of IT department tasks, creating templates for internal training purposes, drafting and improving circulars, notices, etc.

Drafting assistance: To improve the language, structure and clarity of draft orders, judgments and opinions, provided that the substantive legal analysis and reasoning remain entirely those of the judge.

However, the responsibility of ensuring there are no AI-generated errors during such supportive tasks lies with the user.

Any AI-generated output, once signed or authenticated by a user, becomes the sole responsibility of that user. The policy states that the signatory shall be held liable for any inaccuracies, errors or omissions contained within the authenticated material.

The policy mandates that AI-generated citations, case references, or statutory provisions cannot be used without independent verification from authoritative primary sources such as AIJEL, SCC Online, AIR, the Supreme Court website or official government gazettes.

A judge's responsibility for every order, judgment, and observation issued in their name cannot be delegated, shared with, or diminished by the use of any AI tool, the policy adds.

Further, every court officer is personally responsible for the accuracy and appropriateness of any AI-generated content used in the performance of their official duties.

"The use of AI does not constitute a defence to a finding of error, misconduct, or professional negligence. Users cannot disclaim responsibility by attributing errors to an AI tool," the policy makes it clear.

Legal assistants, research associates, and judicial assistants who use AI tools to assist a judge are required to ensure that the concerned judge is informed of such use and of any AI-assisted output.

Where AI tools have been used in the preparation of any research note, bench memo or legal brief, the responsible officer is required to document this fact in the record.

The policy further bars the entry of personal and sensitive information into public AI tools. The categories of such information include names, addresses or identifying information of parties, witnesses or advocates; details of pending proceedings or unreported orders; privileged communications or confidential legal strategies; sensitive personal data, including health, financial, biometric or caste-related information; and evidence or documents filed in a case.

The policy notes that AI systems may encode or perpetuate biases related to gender, religion, caste, ethnicity, or socio-economic status. Hence, users should not rely on AI outputs in a manner that promotes systemic bias in the justice system, it warns.

Violations of any provision of the policy shall be treated as misconduct and attract departmental or disciplinary proceedings under applicable service rules.

The policy further states that these consequences are in addition to any civil or criminal liability under applicable law, including the Information Technology Act, 2000 or the Bharatiya Nyaya Sanhita, 2023.

The policy is subject to any direction or circular issued by the Supreme Court of India or its e-Committee regarding AI use in courts, or any policy issued by the legislature.
