By Trishee Goyal
One of the bigger achievements of this winter session of Parliament has been the tabling of the Joint Parliamentary Committee report (JPC report) on the data protection law. This has brought clarity on many issues while leading to fierce contestation on others. A relatively less discussed but interesting facet of the JPC report and its draft Bill is the position it takes on social media.
Social media is built on the economics of personal data. Its business model rests on engaging users so as to maximise the time they spend on the platform. This in turn increases the generation of personal data on the platform, which is sold to third parties so that they can target their advertisements better. Quite simply, the more they know about you, the more they can sell.
Studies have concluded that social media applications are the largest processors of personal data. Therefore, it is not surprising that social media entities form an important subset of regulated entities for data protection laws. In fact, these are a set of entities that are infamous for pioneering pernicious practices for personal data processing, requiring data protection regulations to often play catch up.
In India, the Justice Srikrishna Committee Report (JSK Report), and the 2018 draft of the Personal Data Protection Bill (PDP Bill) it proposed, did not recognise social media entities as a separate subset of regulated entities (data fiduciaries). As per the 2018 draft of the PDP Bill, social media entities were regulated in two ways. First, they were regulated as “data fiduciaries”, making them subject to various data protection obligations. Second, based on certain considerations, these data fiduciaries could also be categorised as “significant data fiduciaries”.
These considerations included the volume and sensitivity of personal data processed, the risk of harms arising from such processing and the technology used for the same. The underlying rationale was that the greater the risk a data fiduciary posed to the data protection and privacy of an individual, the greater should be its obligation to minimise the risk. Therefore, “significant data fiduciaries” had to comply with additional obligations over and above those of data fiduciaries simpliciter.
This approach was well received. Defining “social media” as a separate category was not required, since from a data protection and privacy perspective, the approach of categorising data fiduciaries depending on the risk they posed to data protection and privacy, instead of a functional categorisation, was adequate.
Introduction in the 2019 draft of the Bill
However, there was a shift in approach in the 2019 draft of the Bill, put forward by the government. First, the preamble identified one of the Bill’s objectives as “laying down norms for social media intermediary”; second, it defined the term “social media intermediary” (SMI); and third, it provided a set of criteria different from those in the 2018 draft to designate an SMI as a significant data fiduciary (SDF). The new criteria were based on the cumulative conditions of the number of users of an SMI and whether its “actions have, or are likely to have a significant impact on electoral democracy, security of the state, public order or the sovereignty and integrity of India”.
It was expected that with the notification of the Information Technology (Intermediary) Rules, 2021, which provide for an identical provision in Rule 4(7), the JPC would drop provisions relating to content regulation on social media. However, this was not the case. The JPC, in fact, while endorsing the government’s stand on the same, discusses this issue at some length. That discussion is important to understand the thinking behind both the inclusion of such a provision in a data protection legislation and the design of this regulation.
Approach in the JPC
The JPC touches upon the issue of content regulation online in a number of ways. Foremost, it identifies “a provision relating to ‘social media intermediary’ … and to empower the Central government, in consultation with the Authority, to notify the said SMI as an SDF” as one of the twelve salient features of the 2019 Bill. That it singles out this sub-provision as a salient feature, rather than the parent provision that designates “significant data fiduciaries” and covers many other entities, is indicative of the primacy being given to it.
Further, while identifying themes that pertain to a data protection legislation, it identifies the issues of “proliferation of bots and fake accounts” and “data security” (by which it presumably means disinformation, since data security otherwise has a different meaning). In relation to the first, it comments upon the increased prevalence of fake accounts, which can “push a certain agenda or person, carry malicious campaigns, promote digital scams and even conduct organised phishing and blackmailing”. It adopts a rather expansive understanding of fake accounts, extending the term to mean “accounts operated by humans in the name of other people, or fake names, multiple accounts by the same person and … bots”.
Second, in reference to “data security”, the report discusses the use of social media for engaging in information warfare, especially by way of disinformation. Moreover, some of the key areas of concern that the JPC identifies in relation to SMIs, while considering their regulation, include “anonymous publication of content on such platforms”, “criteria adopted … for removal of content”, “categorisation of such platforms as intermediaries”, etc.
Clearly, these concerns that the JPC identifies in relation to SMIs go beyond those that stem from a data protection perspective. It is arguable that, in so far as SMIs process personal data, for example, to tailor users’ feeds to maximise engagement, or abuse that data for other ends, there are overlaps between content regulation and data protection issues. However, the remedies available in conventional data protection provisions are in the nature of purpose limitation, data minimisation, notice and consent. What the JPC recommends goes beyond this realm.
To its credit, it is conscious of this overreach: it points out that the existing law, in terms of the IT Act, “has not been able to keep pace with the changing nature of the social media ecosystem”, and that “there is an immediate need to regulate social media intermediaries”. It is instructive to assess the recommendations of the JPC in this regard to understand the dissonance between them and a data protection law.
Recommendations of the JPC and espousal of PDP Bill, 2019
As mentioned above, the JPC Report endorses the provisions relating to SMIs in the PDP Bill, 2019. First, it retains the designation of SMIs as a class separate from data fiduciaries in general, as well as the criteria for designating them as “significant data fiduciaries”.
In the PDP Bill, 2019, this was provided for under section 26(4), while under the Data Protection Bill, 2021 (attached with the JPC Report), as section 26(1)(f). This provision raises concerns at two levels. First, the repositioning by the JPC of this provision from section 26(4) to section 26(1)(f) raises some interpretational concerns. Second, with respect to both drafts of the Bill, there are some conceptual concerns about the grounds on the basis of which SMIs are to be so designated.
Criteria for designation of SMIs as SDFs
Section 26(1) of the PDP Bill, 2019 provides a list of criteria on the basis of which the Data Protection Authority (DPA) could notify a data fiduciary or a class of data fiduciary as SDF. As stated above, these criteria included the volume and sensitivity of personal data processed, the risk of harms arising from such processing and technology used for the same.
Section 26(4) provided that, notwithstanding anything provided in section 26, the Central Government could notify, in consultation with the DPA, an SMI as a significant data fiduciary. The factors to be taken into account for this were, first, whether the number of users was above the threshold notified by the Central government and second, whether the actions of the SMI “have or are likely to have a significant impact on electoral democracy, security of the state, public order or the sovereignty and integrity of India”.
The Data Protection Bill, 2021 omits section 26(4) and adds it as section 26(1)(f). This effectively provides that the DPA shall, based on “any (emphasis added) of the following factors, notify any data fiduciary or class of data fiduciary as significant data fiduciary, namely … any social media platform”, along with the two criteria for SMIs.
This leads to two issues. First, in the PDP Bill, 2019, section 26(4) is fashioned in a non-obstante manner. It can be read to mean that while the DPA has the power to designate data fiduciaries (including SMIs) as SDFs based on the criteria mentioned in sections 26(1)(a) to 26(1)(f) of the PDP Bill, the Central government need not follow these criteria when it comes to social media intermediaries. It can designate them so on the basis of the number of users and the impact of the actions of the SMI.
This is altered in the reading of section 26(1)(f) of the Data Protection Bill, 2021 in two ways. Section 26(1)(f) reads that the DPA shall, “having regard to any of the following factors, notify any data fiduciary or class of data fiduciary as significant data fiduciary, namely … any social media platform” with such number of users as notified, and whose actions can have certain kinds of impact.
First, this redrafting changes the agency that can notify SMIs as significant data fiduciaries, based on the criteria of number of users and impact of actions, from the Central Government to the DPA. It allows the Central government to notify the threshold number of users (although this does not come through in the Data Protection Bill, 2021 due to typographical errors in section 26(1)(f)(i), it is clarified when read with section 94(2)(f)), while the overall designation is to be done by the DPA.
While the JPC acknowledges that it received recommendations to change the notifying agency from the Central government to the DPA, it does not go further to discuss whether it agrees with such a change. This makes it unclear whether this was a deliberate change or an unintended consequence of the redrafting exercise. If it is the former, it needs further consideration.
On the one hand, it allows for this decision to be taken at arm’s length from the government; on the other, it is doubtful how well the DPA will be equipped to take decisions on matters relating to “sovereignty and integrity of India, security of state, public order etc.” An ideal way forward would be for the government to apply to the DPA when it is of the opinion that such conditions exist, with the DPA then deciding on the designation of SMIs based on such an application.
Second, the PDP Bill, 2019 allowed for the designation of SMIs as SDFs based on the factors in section 26(1), while the Central government was allowed the leeway to designate them so on the two additional criteria of number of users and potential impact of actions. However, as redrafted under the Data Protection Bill, 2021, section 26 now reads in a manner that limits this designation to only these two criteria. This is not desirable.
The fundamental reason for designating a data fiduciary as an SDF is to impose additional data protection obligations on it. Factors such as the volume and sensitivity of personal data processed, the risk of harms arising from such processing and the technology used for the same, as well as the turnover of the data fiduciary, are highly relevant for imposing enhanced obligations in the context of data protection. In fact, these are the kind of criteria that data protection legislations in other jurisdictions use to designate data fiduciaries as SDFs.
It is not clear whether the exclusion of these other criteria, in evaluating whether an SMI should be subjected to additional obligations, is deliberate or an unintended consequence of the redrafting. In any case, this must be clarified.
This brings one to the larger conceptual issue of the relevance of these criteria to data protection in the first place. In a data protection context, the criterion of “number of users” is a weaker measure of vulnerability than the volume and sensitivity of the data collected from such users. Further, the link between data protection and the criterion of the potential impact the actions of the SMI have on “sovereignty and integrity of India, electoral democracy, security of the State or public order” is tenuous.
While this may have been included to prevent Cambridge Analytica-type situations with respect to Indian data subjects, provisions of purpose limitation, data minimisation, notice and consent, along with the criteria mentioned in section 26(1), are arguably sufficient to tackle such abuse of personal data. Further, as pointed out above, the concerns that the JPC raises with respect to SMIs are primarily in relation to the content hosted by them.
Therefore, it is not a stretch to assume that this criterion could be applied based on the content that SMIs host. While most argue that the government may use this criterion to unduly encumber SMIs with additional obligations, which would impact freedom of speech and expression, this piece argues for reconsideration of this criterion on a different basis. Parliament has the legislative competence and the power to regulate online content based on the reasonable restrictions provided under article 19(2).
However, the efficacy of regulating online content under the Data Protection Bill, 2021 needs to be considered. To do so, one must consider the effect of being designated an SDF, which is generally the imposition of additional data protection obligations on the SDF.
The Data Protection Bill, 2021 provides that SDFs, in general, would be required to undertake four additional obligations. First, they would have to conduct a data protection impact assessment (DPIA) if they process data that carries a risk of significant harm to data principals. Second, certain record-keeping obligations are imposed, including those relating to operations in the data life cycle, reviews of security safeguards, DPIAs and any other matter which the DPA may specify. Third, SDFs are required to appoint an independent auditor to conduct an annual audit of their processing of personal data, and fourth, they are required to appoint a data protection officer.
These obligations pertain to increased safeguards and compliance for personal data processing, and do not have a direct nexus with regulating content online. Therefore, designating SMIs as SDFs under the data protection law, based on the content they host, would do little in particular to curb the spread of disinformation.
Voluntary verification mechanism by SDFs
Apart from these general additional obligations on SDFs, there is, however, one obligation made specifically applicable to SMIs designated as SDFs. This is done by retrofitting sub-section (4) into section 28, which in the 2018 draft of the Bill dealt only with the obligation of SDFs to maintain updated records. Under section 28(4) of the PDP Bill, 2019, SMIs designated as SDFs are required to provide their users with a mechanism to voluntarily verify their social media accounts. Further, the SDFs have to provide all persons who voluntarily verify their accounts with a demonstrable mark of verification, visible to all other users of the service.
The JPC argues in favour of this provision for two reasons. It sees it as a means to, first, curb disinformation and increase trust in the platform, and second, hold the SMIs responsible for such disinformation. The JPC comments that SMIs today have gone beyond the role of intermediaries and “may be working as publishers of the content in many situations, owing to the fact that they have the ability to select the receiver of the content and also exercise control over the access to any such content hosted by them”. This is possibly in reference to the increasing concern across jurisdictions about SMIs curating the content they host to increase user engagement, often in disregard of other competing values.
It has been argued that given the SMIs’ role in promoting or demoting content created by users, through ranking and recommender systems, they have gone beyond the role of “dumb pipes” that afforded them immunity with respect to the content being carried in the first place. Given this evolution of the SMIs’ role and capabilities, concerns have been raised on how best to regulate them. The proposed solutions range from increasing transparency of algorithms deployed by SMIs, robust disclosures to users, increasing user choice in content moderation and reconsidering the “safe harbour” provision. The JPC adopts the last approach.
In India, Section 79 of the Information Technology Act, 2000 provides ‘safe harbour’ to the intermediaries from liability of the content they host. It provides that intermediaries would not be held liable for third party content that they host, if certain conditions are fulfilled. First, the intermediary should not initiate transmission, select the receiver or select or modify the information.
Second, the intermediary should not have played a role in aiding or abetting information that is in transgression of the law. Third, it should expeditiously remove the offending information when ordered to do so by the court or the government. Lastly, the intermediary should comply with certain due diligence requirements.
The JPC’s above-mentioned observation that SMIs select receivers and exercise control over access, therefore, strikes at the root of this safe harbour provision. Presumably in furtherance of this standpoint, it recommends redesignating them as “social media platforms”, as opposed to social media “intermediaries”, a term that has conventionally been associated with safe harbour provisions. (An additional problem here is that this new definition does not mention the exceptions, the lack of which has resulted in considerable confusion even in the IT Rules, 2021.) It does, however, provide an exemption route from this liability by proposing that SMIs “will be held responsible for the content from unverified accounts on their platforms”.
This means that despite being considered publishers, and having safe harbour provisions revoked, SMIs will not be liable if they verify the credentials of users of their platforms. This position of the JPC is at odds with what is provided in section 28(4) of the PDP Bill, 2019 and the Data Protection Bill, 2021, which is couched in optional and enabling terms. As mentioned earlier, section 28(4) is limited to requiring SMIs to provide a verification mechanism that users can resort to if they want to voluntarily verify their credentials.
But read with the JPC’s recommendations, the manner in which this is operationalised by SMIs could be very different. It may lead them to derogate from a choice-based model to one that incentivises, if not mandates, user verification, albeit in covert ways. This would not be unusual. It has been shown previously that SMIs tend to err on the side of caution in their attempts to comply with government directives on content regulation. This tendency towards over-compliance is understandable, given that non-compliance results in the loss of the safe harbour provision, in addition to other adverse consequences. But it has two significant implications.
First, it can significantly impact the right to maintain anonymity online. Users will be under an apprehension that their safety, both online and offline, could be compromised, given the potential for increased as well as unauthorised access to their identification. While in India the issue was deliberated by the Supreme Court and no clear answer emerged in the Puttaswamy judgments, anonymous/pseudonymous speech has been considered an integral part of the right to freedom of speech and expression in other jurisdictions.
The argument here is not that the right to online anonymity is absolute and cannot be derogated from, but that in restricting it a risk-based, proportionate approach must be undertaken. It should not be affected merely because of the SMIs’ perception of such signalling from the government.
Second, indiscriminate verification of users also leads to a dilution of the principle of data minimisation. Verification of users should be based on the classification and identification of users or accounts that are prone to being misused for the online harm that is to be combated. Indiscriminate verification of users by SMIs would lead to massive amounts of personal information being collected by them, which creates the potential for data abuse and endangers the data protection rights of users, especially in a country where consciousness of the right to privacy is still evolving.
Based on the abovementioned considerations, there is a need to rethink the regulation of SMIs, in so far as online content is concerned, through a data protection law, both at the conceptual level, i.e. whether it ought to be regulated through the PDP Bill, 2019 at all, and at the operational level, i.e. the manner in which such regulation is to take place.
For example, before undertaking the massive exercise of verifying online users, it needs to be considered how well verification actually combats disinformation. Some empirical research has shown that identity verification may not deter fake news, but could in fact fuel its proliferation. Considering such studies, there seems to be a need to revisit certain assumptions that the JPC makes.
The approach of the JPC seems to be to provide a stop-gap solution to the online harms of disinformation, rather than a systemic remedy to the malady. To do that, there is a need to undertake an exercise similar to that undertaken by the United Kingdom, which in 2019 published a White Paper on Online Harms that comprehensively surveys the harms arising from users’ online interactions and formulates a detailed regulatory response while clearly articulating the duty of care of online entities. Lastly, there is a need to assess whether the proposed Data Protection Authority, with everything else that it is supposed to do, has enough regulatory and enforcement capacity to take care of issues that do not strictly fall within the purview of data protection.
The regulation of SMIs and their attendant online harms requires greater deliberation. Its inclusion in a data protection law provides it with limited context that inhibits the development of a nuanced regulatory approach.
Trishee Goyal is a project fellow at the Centre for Applied Law and Technology Research, Vidhi Centre for Legal Policy.
Vidhispeaks is a fortnightly column on law and policy curated by Vidhi. The views expressed are of the fellow and do not reflect the views of Vidhi or Bar & Bench.