Since the launch of ChatGPT in November 2022, generative AI tools have permeated all walks of life, the legal field being no exception. Over time, these tools have gained acceptance, with several instances of AI going head-to-head with lawyers, including the CaseCrunch lawyer challenge and ChatGPT going to (National) Law School.
With a growing market of large language models targeting the legal field, the debate around adopting generative AI tools is no longer a question of "if" but of "how."
While such tools may not replace lawyers altogether, one cannot ignore the glaring reality that humans with AI may soon replace humans without AI. Undoubtedly, generative AI tools help lawyers complete labour-intensive, mundane and repetitive tasks, freeing up much-needed time and energy to focus on critical tasks requiring legal acumen and analysis.
Slowly but steadily, these tools are gaining recognition in India as well, with Harvey projecting that India's legal services market will reach $45.2 billion in revenue in 2024 and $67.4 billion by 2030. In fact, not only are legal databases such as SCC Online and Manupatra partnering with Harvey and Legora, but law firms are also adopting such tools.
No doubt, these tools make legal services more accessible and cost-effective. However, they also create risks impacting confidentiality, attorney-client privilege and data security. For instance, while AI tools help deliver more cost- and time-efficient representation, such efficiency may come at the cost of confidentiality: these tools run on cloud-based infrastructure and require users to upload client data, raising concerns about the storage, processing and transmission of confidential client information.
Another concern is that weak security measures on a platform may severely compromise client data. One study shows that cyberattacks against UK law firms surged by 77% in 2024, while 25% of US law firms experienced cyberattacks in 2023.
In this piece, we discuss whether India is ready to tackle such concerns, given the ever-evolving nature of generative AI tools.
USA
As early as 2019, the American Bar Association (ABA) took note of the growing concerns relating to AI tools and passed a resolution urging courts and lawyers to address issues of bias, explainability and transparency in automated decisions made by AI, the ethical and beneficial usage of AI, and controls and oversight over vendors providing AI tools.
Thereafter, on July 29, 2024, the ABA released Formal Opinion 512 of its Standing Committee on Ethics and Professional Responsibility. The Committee highlighted that “lawyers using generative artificial intelligence tools must fully consider their applicable ethical obligations, including their duties to provide competent legal representation, to protect client information, to communicate with clients, to supervise their employees and agents, to advance only meritorious claims and contentions, to ensure candor toward the tribunal and to charge reasonable fees.”
The Committee provided guidance on how these duties, which are encapsulated in the ABA’s Model Rules of Professional Conduct, have evolved with the introduction of AI tools.
For instance, when addressing the duty of ‘competence’ contained in Rule 1.1, the Committee emphasised that independent verification of AI-generated outputs, to mitigate the risk of inaccuracies, is essential to competently advise clients about their legal rights. Similarly, on the duty under Rule 1.6 to preserve confidentiality, the Committee highlighted that a client’s informed consent must be obtained before inputting confidential information into an AI tool. The client should be apprised of: (a) why the tool is being used; (b) the extent of the risks, including the kind of information to be disclosed and how others may use that information; and (c) the benefit of such tools for the representation.
Thus, the USA, while recognising that AI is here to stay, cautions lawyers to use such tools responsibly and to be transparent with their clients, apprising them of all risks and benefits while seeking their consent for the use of AI tools.
United Kingdom
The Bar Council of the UK has updated its ethics guidance on the use of ChatGPT and generative AI software based on large language models, emphasising the need to maintain control and integrity in the use of such tools. However, it clarified that these considerations are prepared in good faith to assist barristers and chambers and that “neither the BSB nor bodies regulating information security, nor the Legal Ombudsman is bound by any views or advice expressed in it”. While highlighting the risks of AI tools, including hallucinations, bias in training data and cybersecurity vulnerabilities, the Council called upon lawyers to understand and responsibly use such tools, ensuring compliance with applicable laws, rules and professional codes of conduct.
Other developments regulating the use of AI tools include the government’s white paper, A pro-innovation approach to AI regulation, and the Information Commissioner’s publication, Generative AI: eight questions that developers and users need to ask.
The risks of unverified AI use were starkly illustrated in R (Ayinde) v. London Borough of Haringey [2025] and MS v. Secretary of State for the Home Department Bangladesh [2025], in which fictitious case law generated by AI was cited.
Given these recent instances, Barbara Mills KC, Chair of the Bar Council, emphasised that the output of AI tools must be treated like the work of a trainee solicitor or pupil barrister, which, though produced under supervision and training, cannot be signed off without independent review.
India
India has not remained immune to the risks of AI tools, with instances of lawyers relying on precedents that do not exist, including in KMG Wires v. NFAC Delhi and Greenopolis Welfare Association v. Narender Singh and Ors.
From an Indian perspective, the Advocates Act, 1961 and the Bar Council of India Rules set out an advocate’s duties, including the duties to not mislead the court, to act in the client’s best interests, to maintain confidentiality and to give competent legal advice. However, there is a concerning lacuna regarding the use of AI tools. Consequently, in recent times, various courts have issued guidelines to address this critical issue.
For instance, in July 2025, the Kerala High Court introduced a Policy regarding use of Artificial Intelligence Tools in District Judiciary. The policy lays down guidelines for members and employees of the district judiciary, requiring them to exercise extreme caution when using AI tools and to meticulously verify AI-generated results. It cautions against using AI tools to arrive at any finding and emphasises that responsibility for the content and integrity of a judicial order rests fully with the judge. The policy also warns against the use of cloud-based services other than approved tools, given confidentiality concerns, and requires courts to maintain a detailed audit of all instances of AI use, along with the human verification process adopted. Finally, it provides that any violation will invite disciplinary action under the applicable rules of disciplinary proceedings.
The Bombay Bar Association has issued similar guidelines on the use of AI tools.
Thereafter, in November 2025, the Supreme Court released a white paper on AI and the Judiciary, highlighting the role of AI in improving case management, legal research and transparency. It lists indigenous tools already adopted, including SUPACE for analysing case records, SUVAS for translating judgments into 19 languages and TERES for real-time transcription. While flagging the risks of AI integration, it emphasised mandatory human verification, confidentiality and disclosure requirements, among other safeguards.
Thus, in comparison to its global counterparts, India has so far adopted a more ad hoc approach, with individual courts issuing guidelines to regulate the use of AI tools.
These efforts are not per se inadequate and may even be seen as favouring flexibility over straitjacketed pan-India regulation. However, the absence of a clear framework for India as a whole remains a significant gap, given the ad hoc and fragmented manner in which these guidelines are being issued.
Against this backdrop, the recent introduction of the Artificial Intelligence (Ethics and Accountability) Bill, 2025 is a significant step towards regulating the use of AI in India. The Bill seeks to establish an ethics and accountability framework for the use of AI in decision-making, surveillance and algorithmic systems. While this attempt is a step in the right direction, it may still not be the answer to regulating the use of AI tools by lawyers. Consequently, the need for a pan-India framework, similar to those in the USA and the United Kingdom, persists.
Urvashi Misra is a Partner and Shailja Rawal is an Associate at AZB & Partners.