Technology giants Sundar Pichai, Elon Musk and Brad Smith have backed the regulation of artificial intelligence.
Google CEO Pichai, while writing for the Financial Times, warned against the dangers of keeping artificial intelligence unregulated. Pichai said issues such as “deep fakes” and the “nefarious use of facial recognition technology” show the possible negative impact of artificial intelligence on public safety. He added that artificial intelligence needs to be regulated to protect privacy, ensure public safety, and prevent bias from influencing technology.
At the United States National Governors' Association summer meeting in Providence, Rhode Island, Tesla CEO Elon Musk said that “artificial intelligence is the biggest risk to human civilisation.” Musk made his stance clear by suggesting that the Governors in the United States must proactively regulate artificial intelligence to avoid the dangers of industries becoming completely autonomous in the future and posing a serious threat to national security.
Microsoft President Brad Smith, while speaking at the World Economic Forum in Davos, Switzerland, stressed the importance of being proactive in regulating artificial intelligence. Smith said that it is the right time to regulate artificial intelligence and that the world should start putting in place the necessary ethics, principles and even rules to govern artificial intelligence, rather than wait for the technology to mature.
Artificial intelligence has changed the world and our daily lives. In the coming years, experts predict a meteoric growth in its use. Given the even greater role artificial intelligence is likely to play in our lives in the future, it is important for us to deliberate and discuss some important questions. This is in the best interest of governments, society, and individuals.
There are several unanswered questions that arise when it comes to regulating artificial intelligence. To begin with, should artificial intelligence be regulated at all? If yes, who should regulate artificial intelligence? Should industries using artificial intelligence be allowed to regulate themselves or should governments devise regulatory frameworks to regulate artificial intelligence? What should those regulations look like?
These are challenging questions, especially for a technology still in its development stages. We have two choices: either wait for the technology to mature further, or act proactively.
Should Artificial Intelligence be regulated?
Some technology experts, like Azamat Abdoullaev, suggest that artificial intelligence should not be regulated because it is a fundamental technology, and regulating such technologies in the early stages of development might hamper their growth. Furthermore, even if we intend to regulate artificial intelligence, nobody knows how to do it at this point, he opines. Even more worrisome is the possibility that we might end up entrusting the regulation of artificial intelligence to people who lack sufficient insight into the technology. This could have disastrous consequences, to say the least. In this view, rather than regulating artificial intelligence itself, we should regulate its applications in areas such as cybersecurity, autonomous driving, and the military.
Contrary to the above view, some experts, including Stephen Hawking and Bill Gates, have taken a more cautious and proactive stance on regulating artificial intelligence. They believe artificial intelligence should be regulated before it is too late, because unchecked development by companies racing to outpace one another could pose an existential threat to mankind, for instance through powerful autonomous weapons. These experts believe there is enough cause for concern about the potential harms of artificial intelligence and that regulatory measures are a must.
Heavy-handed State regulation versus self-regulation
We have two choices when it comes to regulation: either we allow governments to regulate artificial intelligence, or we allow market participants to regulate themselves. On the one hand, immediate and heavy-handed State regulation seems like a plausible solution to our problems. However, this route may have the unintended consequence of stifling innovation and hindering the growth of artificial intelligence. Every country in the world has a vested interest in becoming a world leader in artificial intelligence. Without a global consensus on imposing regulations, some countries would be left far behind in the race to be at the forefront of the next revolution and to reap its benefits.
On the other hand, we could take a laissez-faire, hands-off approach and allow market participants to regulate themselves. The problem with self-regulation, however, is that while some companies might devise and practise ethical standards and develop “safe and sustainable artificial intelligence”, others might simply ignore ethical principles in the rush to be the first to develop cutting-edge artificial intelligence and become a market leader. A completely hands-off approach is therefore undesirable. At the very least, we require a common minimum set of ethical standards that every company working with artificial intelligence would be compelled to follow.
Where does the Indian government stand on the regulation of Artificial Intelligence?
The Central government’s think tank NITI Aayog released a policy paper titled ‘National Strategy for Artificial Intelligence’ in June 2018, which, among other things, discussed the benefits of artificial intelligence and the weaknesses of self-regulation of the technology. More recently, in its draft 'Working Document: Enforcement Mechanisms for Responsible #AIforAll', released in November 2020, NITI Aayog proposed an oversight body to manage artificial intelligence policy.
The oversight body is expected to be instrumental in devising guidelines for responsible behaviour and in framing sectoral guidelines. It is proposed that the oversight body would have experts from several fields, including law, humanities, and the social sciences, and would adopt a ‘flexible risk-based approach’ to artificial intelligence, the report suggested. Furthermore, the oversight body is expected to play an enabling role in addressing the research, technical, legal, and societal issues emerging from artificial intelligence.
Prof GS Bajpai, a criminal law professor and Vice-Chancellor of RGNUL, Patiala, notes in his June 2019 article that despite rapid technological advancement, the Indian Parliament has not formulated comprehensive legislation to regulate the growing industry.
Tuhin Batra, a Delhi-based TMT lawyer, says in his December 2020 article that there are lacunae in the legal and regulatory framework governing companies working with artificial intelligence in India. According to Batra, self-audit and record-keeping by companies are a must for the orderly and structured growth of the industry.
To sum up, there is a growing consensus on the accelerated growth of artificial intelligence and its substantial impact on our everyday lives and the world. Rather than deliberating upon the impact of regulating artificial intelligence, we have to take a step back and lay down foundational principles on which regulations could be built in the future. Moreover, we need to make the work of our policy makers easier by creating awareness about the potential fallout of artificial intelligence.
World leaders and their governments have to work collectively towards building consensus and developing a comprehensive set of global principles on artificial intelligence. Regulation of artificial intelligence is inevitable; the only questions are when it will be regulated, who will regulate it, and what the regulations will look like.
Shireen Moti is an Assistant Professor of Law at OP Jindal Global University, where she teaches courses on Constitutional Law.