[The Viewpoint] Artificial Intelligence-infused medical devices and the question of liability: Part II

The second part of this two-part article continues from where the first left off.
Prashanth Shivadass and Anita Bharathi

In continuation of the first part of this article, titled ‘Artificial Intelligence-infused medical devices and the question of liability: Part I’, this part addresses the legal lacunae in determining liability in cases of ‘machine errors’. Machine errors occurring in the medical industry are usually treated as medical negligence cases. However, determining which person is liable creates ambiguity.

Since many parties are involved with an AIMD, such as the clinician handling the machine, the builder of the medical device, the doctor who directed the patient to undergo the procedure and, finally, the hospital as employer, different formulas must be used to determine liability. The liability can be either vicarious or strict, depending on the approach taken. These formulas are essentially legal doctrines applied to cases of medical malpractice.

a) “Respondeat Superior” theory:

The respondeat superior (“let the master answer”) theory imposes an obligation on the employer for the actions of the clinician [Jorstad, Kyle T., “Intersection of artificial intelligence and medicine: tort liability in the technological age”, Journal of Medical Artificial Intelligence [Online], 3 (2020), accessed 27 Feb. 2021]. The employer would be either the hospital or the natural person who permits the use of the AIMD in the organization. The shortcomings of both situations are briefly analyzed below.

i. Liability on the hospital:

When the hospital is held liable for machine errors, the possible consequences are payment of damages, cancellation of the AIMD’s license and cessation of its future use. This imposes a heavy duty of care on hospitals, which must deploy advanced machines with high success rates to handle such situations. Error of judgment plays a crucial role here: since spotting a machine error is often not possible, the doctor’s liability is correspondingly reduced.

ii. Liability on the Principal:

Principal herein refers to the medical practitioner who permitted the use of the AIMD by a clinician. It is the combination of vicarious liability and knowledge of the field that justifies the liability, even though machine errors are not truly under the control of the operators. In the classic setting, master and servant are both humans, whereas in the case of an AIMD only the principal is a human being. The master-servant relationship, which demands consent for legal validity, therefore cannot be applied to an AIMD that is incapable of giving consent. Application of this theory thus does not resolve the ambiguity, as the relationship lacks its main ingredients: consent and a legally competent human as the servant.

b) Personhood:

The personhood argument emanates from tort law, which categorizes medical malpractice claims under assault and battery and allows punitive damages. Tort law largely relies on the “reasonable person” standard to determine liability, and the personhood argument builds on that standard. Just as corporate entities are recognised as legal persons, machines themselves would be held liable for their actions and given the status of ‘legal person’, an approach that shades into ‘product liability’. However, even corporate entities are, in practice, not proceeded against alone; a natural person responsible for the entity is brought to court, which undercuts the possibility of AI adopting this narrative to determine liability. The recent case of Thaler v. Hirshfeld, No. 1:20-cv-903 (E.D. Va. 2021), wherein personhood for AI was rejected, is a classic example of the drawbacks of this argument. Granting individual legal status to AI medical devices will therefore achieve little beyond a ban on the machine when it malfunctions.

c) Intellectual Property laws approach:

In the medical field, most AI products require human assistance to work and produce results. Most of these medical devices are patented and used under licence. “In India, for patenting an AI-backed technology, one needs to follow the Computer-related Inventions (CRIs) guidelines, which exclude a computer program or algorithm from being patented.” Though algorithms and machine learning technology are excluded from patent protection under the Patents Act, 1970, two patents have nonetheless been granted: one for a proactive user interface for a computational device (IN239319) and another for a system that facilitates displaying objects (IN228347). India has taken the UK’s approach to granting IP protection for artificial intelligence, vesting the rights not with the user but with the builder.

This creates complications when the machine commits a data breach or a similar failure through machine learning (the continuous ‘feeding’ of data), which is beyond the control of any human. A failure of the machine that lies within the scope of black-box algorithms has no active connection to the medical practitioner or the copyright owner, neither of whom could have sensed the failure before the incident. In such a situation, the copyright owner would be held liable even though their participation in the failure is not established. This poses serious risks with medical devices, especially while deciding liability for the failure of a medical procedure, such as a scan that has produced wrong results or miscalculations.

Conclusion

The uncertainty in determining liability affects not only patients and the health care system, but also the development of technology. It impedes technological advancement because liability remains an open question, and in a field like health care, which involves people’s lives, such a question cannot be left open. The approaches and formulas discussed above, however, do not give a convincing answer, since health care systems are still adapting to AI even though it has rapidly occupied the system. Given that, the only viable interim solution is to inform patients about the AI medical devices in detail before use.

Obtaining informed consent from the patient is not a new regime for hospitals, since it is already mandatory for surgeries and childbirth. Further, the GDPR treats the “right to explanation” as fundamental when it comes to data collection, and a similar ‘right to information’ is included in the Digital Personal Data Protection Bill, 2022. A few hospitals have adopted software that can detect errors in AI results and thereby avoid mishaps based on those results; Google has achieved this through deep learning mechanisms that allow efficient detection. However, installing such software costs a considerable amount owing to its limited availability. Hospitals and labs can therefore rely on informed consent until such software is accessible to all; informed consent is not a permanent fix, but a precautionary approach.

As we move towards a smarter society, technology is inevitable and adapting to it is essential. To that end, competent regulation is a must, since there is a lack of awareness and research not only in law but also in health care, resulting in accidents. In India, the courts are yet to address the issue, which creates ambiguity. The existing Medical Device Rules, 2020, by imposing various conditions on the use of an AIMD, are on the right path, though improvement is needed to deal with high-end AIMDs like Corti. To deal with situations arising from these technological advancements, competent legislation and understanding by the courts is the way forward.

Prashanth Shivadass is an Advocate and the Founder of Shivadass & Shivadass (Law Chambers), and Anita Bharathi is an Associate at the firm.
