Artificial Intelligence (AI) is a branch of computer science focused on creating machines capable of performing tasks that typically require human intelligence. AI encompasses a wide range of techniques, algorithms, and methodologies, including machine learning, natural language processing, computer vision, expert systems, and robotics. As AI technologies continue to advance and permeate numerous industries, governments and policymakers around the world face the complex task of establishing regulatory frameworks to govern their development and deployment. Striking a balance between encouraging innovation and speed on the one hand, and addressing ethical considerations on the other, is crucial.
India's AI industry is expanding across sectors like healthcare, finance, agriculture, education, and e-commerce. Startups and companies are developing AI applications such as chatbots, virtual assistants, and predictive analytics. Major multinational companies like Wipro, Microsoft Research India, and Philips Innovation Centre have established research centers and AI-focused initiatives. However, challenges remain, including limited investment, access to quality datasets, ethical considerations, privacy concerns, and skill gaps in emerging AI technologies. With continued investment and collaboration, India has the potential to become a global leader in AI research and development.
In India, the government views AI as a "kinetic enabler" for governance, and the Ministry of Electronics and Information Technology (MeitY) has stated that AI can deliver customized services through digital platforms.
The Government of India has launched various programs and initiatives to support research and development in AI, promote entrepreneurship, and encourage investments in the sector. For instance, (i) the National AI Strategy aims to position India as a global leader in AI research, development, and deployment; (ii) the National AI Portal has been established jointly by the National e-Governance Division of MeitY and NASSCOM; and (iii) the Centre for Artificial Intelligence and Robotics (CAIR) focuses on R&D in cutting-edge technologies like artificial intelligence, robotics, and information and communication security.
The State Government of Telangana has also launched INAI, an applied AI research center, in collaboration with IIT Hyderabad and the Public Health Foundation of India.
AI regulation in India is currently governed by laws protecting intellectual property, privacy, and cyber security. The Indian government has launched several initiatives to support the growth of the nation's AI ecosystem, such as NITI Aayog's "National Strategy for Artificial Intelligence" in 2018, which identified healthcare, education, agriculture, smart cities, and mobility as priority areas for AI deployment. Other regulators have also engaged with AI technologies: the Securities and Exchange Board of India (SEBI) has pushed for the use of AI in pattern recognition and data analytics, and the Reserve Bank of India (RBI) has hired professionals with advanced analytical, machine learning, and AI skills. India also participates actively in international partnerships, such as the Global Partnership on AI (GPAI), to harmonize its AI regulations with international norms and advance the ethical development and application of AI.
India is leveraging AI to enhance public service delivery and governance, aiming to improve service quality. The National AI Portal and Centre of Excellence for AI/ML are initiatives promoting AI research, innovation, and application among government agencies. The government is also considering AI in sectors like smart cities, agriculture, and public safety. Regulations are being developed to promote transparency, accountability, and fairness in AI-powered decision-making processes. India aims to align its AI laws with global trends to benefit from AI while addressing ethical, legal, and societal issues.
Ethical principles and norms are being developed by governments and organizations worldwide to support the moral development and application of AI. These guidelines emphasize transparency, equity, responsibility, and respect for human rights, and aim to address the potential societal effects of AI by providing a roadmap for designers and users. Data privacy and protection are also crucial concerns, as AI systems rely heavily on personal information. Many nations are enacting or upgrading data protection laws to safeguard data, give individuals more control, and establish clear guidelines for managing AI applications.
Regulation of AI must address algorithmic responsibility, which is becoming more prominent as AI algorithms impact decision-making in areas like hiring, criminal justice, and credit scoring. Regulatory measures aim to reduce potential dangers related to AI decision-making by addressing algorithmic biases and offering oversight methods. Industry-specific legislation is being created to address the specific problems AI has brought about in different industries, such as healthcare, autonomous vehicles, and banking. Regulatory frameworks are being developed to define rules for AI technology creation, implementation, and application.
Countries increasingly recognize the global nature of AI and the importance of international cooperation. International initiatives and alliances have evolved to promote collaboration, harmonize rules, and address cross-border issues. By working together, nations can handle the complexity of AI governance and ensure a coordinated global response to its societal impacts.
With AI, this transition would move beyond the IT sector and affect areas such as education, health, agriculture, finance, data processing, public administration, infrastructure assessment, implementation of government welfare schemes, and weather forecasting, all of which require the underlying skill sets. The demand for new-age jobs in India is accelerating, and can be attributed to three major factors: (i) increased adoption of technology; (ii) a shift in market demographics; and (iii) the deceleration of globalization.
Low adoption of AI technologies in India is particularly troubling, given the country's prominence in the global IT industry, which could have given it a natural first mover's advantage in AI. AI adoption in India is slow due to:
(i) a lack of adequate talent to build and deploy AI systems;
(ii) low awareness of AI;
(iii) the high cost and low availability of infrastructure for the development, training, and deployment of AI-based services; and
(iv) difficulty in accessing the industry-specific data required to build customised platforms and solutions, which is currently concentrated in the hands of a few major players, making it difficult for new entrants to deliver tailor-made services that can compete with data-rich incumbents such as Facebook or Google.
The current vision for AI regulation places a strong emphasis on adaptable, policy-based methods designed to encourage the growing use of AI in an open, ethical, and responsible way. According to the many programs and policy papers on AI that the government has embraced, the guiding principles of this framework include safety, non-discrimination, transparency, and accountability. Significant questions remain, however, about the protection that should be given to AI inventions, how AI fits into corporate governance, and the liabilities associated with AI-based decision-making. In light of this, it is essential that the evolving regulatory framework for AI focus on resolving these ambiguities while encouraging greater AI adoption.
The use of AI in decision-making and the "black box effect" have raised a number of concerns among governments and legal systems around the world, notably with regard to ensuring justice and accountability and protecting privacy rights. AI regulation is therefore a dynamic, ongoing process that requires constant adjustment to new technological advancements and changing societal demands. Global trends in AI regulation, including ethical standards, data protection, algorithmic accountability, industry-specific rules, international collaboration, and public involvement, show a shared commitment to advancing responsible AI use. By embracing these trends and putting effective regulations in place, policymakers can strike the right balance between encouraging innovation and ensuring that AI technologies are developed and used in a way that upholds ethical principles, protects individual rights, and benefits society as a whole.
P Raviprasad is the Co-founder of Tempus Law Associates. Kamesh Vedula is a Senior Associate at the Firm.