The Ministry of Electronics and IT (MeitY) recently took a proactive approach to tackle the growing concerns surrounding deepfake content on social media platforms. In response to the surge in misleading videos, notably one featuring actor Rashmika Mandanna's face superimposed on another woman's body, MeitY issued advisories, urging the swift removal of such content within a 24-hour timeframe.
This article delves into the profound implications of deepfakes, which have the potential not only to spread fake news but also to influence elections and fabricate convincing fake evidence.
It also questions the effectiveness of current regulations in addressing the multi-faceted challenges posed by deepfake technology. Proposed amendments discussed here include reinforcing privacy and data protection laws, imposing limitations on freedom of expression and establishing proactive rules to govern the distribution and use of deepfake technologies.
As deepfakes increasingly blur the boundaries between reality and fiction, the article emphasizes the critical importance of these regulatory considerations in safeguarding society, preserving democracy and upholding the rule of law.
Deepfake: Meaning and interpretations
The term "deepfake" itself emerged in 2017 on Reddit, where users began superimposing celebrities' faces onto different individuals, particularly in adult content. The article adopts a broad definition of deepfake, encompassing various manipulations in alignment with popular understanding. A deepfake, as conceived in this context, typically involves the creation of a video using advanced technical means to portray an individual saying or doing something they did not.
Detecting such manipulations proves challenging, with peripheral applications extending to realistic-looking videos generated without high-tech means, high-quality videos featuring non-existent individuals, fake audio or text fragments and manipulated satellite signals. This inclusive perspective is deemed crucial for legal and policy considerations, emphasizing outcomes over specific technical methods.
The gravity of the deepfake issue was underscored in 2019, when fraudsters used AI-generated audio to impersonate a CEO's voice over the phone, deceiving an executive into making an unauthorized bank transfer of $243,000. This incident prompted heightened vigilance and precautionary measures within financial institutions, even as attackers continued to refine their techniques.
In 2021, criminals executed a sophisticated scheme exploiting knowledge about an impending company acquisition. A bank manager was deceived into transferring a staggering $35 million to a fraudulent account by strategically timing the attack with the company's expected wire transfer for the acquisition. This instance underscores the evolving threat landscape and emphasizes the pressing need for enhanced cybersecurity measures to counter deepfake attacks in the financial sector.
Laws against deepfakes in India and the world
In India, the legal framework invoked against deepfakes rests chiefly on Sections 66E and 66D of the Information Technology Act, 2000. Section 66E addresses violations of privacy arising from the capture, publication or transmission of a person's images, and can extend to images manipulated through deepfake means. The offence is punishable with imprisonment of up to three years, a fine of up to ₹2 lakh, or both. Section 66D punishes cheating by personation using a communication device or computer resource, and carries a penalty of up to three years' imprisonment along with a fine of up to ₹1 lakh.
Additionally, the Indian Copyright Act of 1957, particularly Section 51, provides copyright protection against unauthorized use of works, allowing copyright owners to take legal action.
Despite lacking specific deepfake legislation, the Ministry of Information and Broadcasting issued an advisory on January 9, 2023, urging media organizations to label manipulated content and exercise caution.
In the global context, concerns about AI manipulation led to the Bletchley Declaration, signed by 28 nations. Approaches to AI regulation vary worldwide, with the US, for instance, opting for stricter oversight. President Joe Biden's recent executive order mandates companies to share AI safety test results with the US government, emphasizing extensive testing before public release.
India has also outlined potential regulatory frameworks, suggesting a risk matrix and proposing a statutory authority. Tech giants like Alphabet, Meta, and OpenAI are taking steps such as watermarking to combat deepfakes. India, a pivotal player in AI's global development, must contribute to shaping the regulatory landscape while balancing innovation with regulatory concerns.
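Watermarking schemes differ from vendor to vendor and their details are not described in the source. As a purely illustrative sketch of the general idea, not any company's actual method, a least-significant-bit watermark can hide an identifier inside pixel data so that tools can later check whether an image was machine-generated:

```python
def embed_watermark(pixels, message):
    """Hide each bit of `message` in the least significant bit of a pixel value."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for message")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # changing the LSB is visually imperceptible
    return out

def extract_watermark(pixels, length):
    """Recover `length` bytes from the least significant bits of the pixel data."""
    data = bytearray()
    for i in range(length):
        byte = 0
        for j in range(8):
            byte = (byte << 1) | (pixels[i * 8 + j] & 1)
        data.append(byte)
    return bytes(data)

pixels = [128, 64, 200, 33] * 16          # toy 8x8 grayscale image
marked = embed_watermark(pixels, b"AI")   # tag the image as AI-generated
print(extract_watermark(marked, 2))       # b'AI'
```

Real systems use far more robust schemes that survive compression and cropping; this sketch only illustrates why a watermark can be machine-readable yet invisible to viewers.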
In its December 2019 publication, the World Intellectual Property Organization (WIPO) navigates the intricate landscape of deepfake content. The document, centered on intellectual property rights, poses two pivotal questions:
(i) Where a deepfake is created from copyrightable data, to whom should the copyright in the deepfake belong?
(ii) Should there be a system of equitable remuneration for individuals whose likenesses and "performances" are employed in a deepfake?
WIPO acknowledges that deepfakes raise complexities beyond conventional copyright infringement, encompassing violations of human rights, privacy and personal data protection. This prompts a critical examination of whether copyright should be granted to deepfake imagery at all. WIPO suggests that if deepfake content significantly contradicts the subject's life, it should not receive copyright protection.
In response to these questions, WIPO proposes that where copyright does subsist in a deepfake, it should belong to the creator rather than the person depicted. The rationale is that the depicted person neither intervenes in nor consents to the creation process, while the creator supplies the creative input that copyright is meant to recognize.
WIPO asserts that copyright, in itself, is not an optimal tool against deepfakes, since victims typically hold no copyright interest in the offending content. Instead, victims are encouraged to turn to the right of personal data protection. Citing Article 5(1)(d) of the EU General Data Protection Regulation (GDPR), which requires personal data to be accurate and kept up to date, WIPO recommends the prompt erasure or rectification of irrelevant, inaccurate or false deepfake content.
Moreover, even if deepfake content is accurate, victims can leverage the "right to be forgotten" under Article 17 of GDPR, allowing the erasure of personal data without undue delay. This dual approach involving personal data protection rights is positioned as a more effective strategy in combating the multi-faceted challenges posed by deepfake content. WIPO thus emphasizes the need for a comprehensive approach that goes beyond traditional copyright frameworks to safeguard individuals from the adverse impacts of deepfake technology.
A question of evidence
Deepfake technology poses significant challenges in legal proceedings, particularly in criminal cases, where it can have lasting repercussions on individuals' personal and professional lives. Most legal systems lack mechanisms to authenticate evidence, leaving the defendant or opposing party to contest manipulation and effectively privatizing a pervasive problem. One proposed rule would mandate authentication of evidence, possibly through entities such as the Directorate of Forensic Science Services, before it is admitted in court, albeit at an economic cost.
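The source does not describe how such forensic authentication would work in practice; as one hedged illustration of a building block it might rest on, a custody registry could record a cryptographic fingerprint of a video file when it is seized, so that any later alteration becomes detectable:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 digest that uniquely identifies this exact file content."""
    return hashlib.sha256(data).hexdigest()

# At seizure: record the digest in the chain-of-custody log.
original = b"...raw bytes of the seized video file..."
logged_digest = fingerprint(original)

# Before admission in court: recompute the digest and compare.
presented = original  # the file tendered as evidence
assert fingerprint(presented) == logged_digest, "file altered since seizure"
```

Note the limits of this sketch: hashing proves only that the file is unchanged since it was logged, not that the footage was genuine when captured, which is why forensic deepfake analysis would still be required alongside it.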
In India, existing laws offer some recourse against deepfake issues, but the lack of a clear legal definition hampers targeted prosecution. The evolving nature of deepfake technology compounds the challenges for automated detection systems, leading to increased difficulty, particularly in the face of contextual complexities. This poses a significant threat to legal proceedings, potentially prolonging trials and heightening the risk of false assumptions.
Beyond the immediate legal ramifications, deepfakes exacerbate issues such as slut-shaming and revenge porn, presenting serious consequences for individuals' reputations and self-image. The intricate challenges demand comprehensive legal frameworks to address evolving threats and safeguard individuals from potential harm.
The increasing introduction of deepfakes as fake evidence in courts raises significant concerns for the rule of law, manifesting in several ways:
a) Prolonged trials: Trials are likely to extend as parties can claim evidence fabrication.
b) Risk of accepting forged evidence: Deepfakes heighten the risk of courts mistakenly accepting forged evidence as authentic.
c) Public assertion of innocence: Individuals wrongly convicted can publicly assert their innocence, attributing their conviction to the court's acceptance of fake evidence.
d) Irreversible damage: Certain offences, even if disproven later, can irreversibly damage a person's life and career.
Notably, groups may persist with claims based on the initial fake message, affecting public perception. Sensational fake news initially garners more attention than subsequent debunking, leaving individuals with lingering doubts. As deepfakes become more prevalent, the potential ramifications for the legal system and societal trust underscore the urgent need for robust measures to authenticate evidence and address the evolving challenges.
Future course of action
As deepfake technology gains popularity, the potential negative impacts on personal interests, social institutions, and democracy become increasingly evident. While existing legal frameworks prohibit harmful deepfakes, challenges in enforcement persist. The debate surrounding ex ante regulations, such as banning consumer market deepfake technology or implementing mandatory legitimacy tests before publication, underscores the complexity of addressing this issue.
Concerns arise regarding the enforceability of such rules, given that citizens can access deepfake technology globally. The desirability of outright bans also raises questions about societal trust and freedom of expression, and their validity is challenged by the evolving nature of social and ethical codes, with norms likely to develop gradually. Moreover, ex ante rules may not eliminate underlying problems such as online misogyny and fake news, which persist independently; similar hesitations apply to potential regulations restricting online expression.
This prompts an examination of additional protective measures for statements about public figures and addressing the spread of misinformation in political contexts. Striking a balance between regulation and preserving fundamental freedoms is deemed crucial in navigating this intricate landscape.
Harshvardhan Mudgal is a third-year student at MNLU Mumbai.
He can be reached at email@example.com.