

Chief Justice of India (CJI) BR Gavai on Monday remarked that judges are well aware of the misuse of Artificial Intelligence and other digital tools against the judiciary.
The Court was hearing a public interest litigation (PIL) petition seeking directions to the Union government to frame guidelines or a policy for regulating the use of Generative AI (GenAI) in the Indian judiciary.
"Yes, yes, we have seen our morphed pictures too," the CJI remarked.
"You want it to be dismissed now or see after two weeks," he further asked the counsel for the petitioner.
The Bench eventually adjourned the matter to be taken up after two weeks.
The petition, filed by lawyer Kartikeya Rawal, has sought the enactment of an appropriate law or the constitution of a comprehensive legislative or policy framework for the regulated and uniform use of GenAI in judicial and quasi-judicial bodies in India.
As per the plea, there is a difference between AI and GenAI, with the latter being capable of creating ambiguity in the legal system by generating new data and non-existent case laws.
"The characteristic of GenAI being a black box and having opaqueness has the possibility of creating an ambiguity in the legal system followed in India. In other words, the skill of Gen AI to leverage advanced neural networks and unsupervised learning to generate new data, uncover hidden patterns, and automate complex processes can lead to ‘hallucinations’, resulting in fake case laws, AI bias, and lengthy observations. This process of hallucinations would mean that the GenAI would not be based on precedents but on a law that might not even exist. Such arbitrariness is a clear violation of Article 14," the plea has claimed.
According to the petitioner, GenAI can create realistic images, generate content such as graphics and text, answer questions, explain complex concepts, and convert language into code.
The quality of the underlying data directly shapes GenAI's output, including the extent of bias it reflects, the petition states.
Further, GenAI algorithms may replicate, perpetuate, and even aggravate pre-existing biases, discrimination, and stereotypical practices, thereby presenting profound ethical and legal challenges, the petitioner says.
Prejudiced practices of the real world often lead to discrimination against marginalised persons or communities, which feeds into the data used by the GenAI system, the plea has pointed out.
Thus, any AI integrated into the judiciary and judicial functions should be trained on data free from bias, and data ownership should be transparent enough to establish the liability of stakeholders, the petitioner has contended.