Regulating content in the age of social media

Who should decide what is and is not acceptable social media content?
Back in 2018, Facebook’s highest executives met every other Tuesday to discuss the problems of misinformation, hate speech, and other disturbing content spread across their platforms. These discussions resulted in the establishment of the Content Standards Forum, run by Facebook’s lower-level administrators.

The working group at this forum considers material on Facebook’s platform and formulates guidelines on, say, a video showing a Hindu woman in Bangladesh being relentlessly beaten (taken down because it may incite communal violence) or a photo of police brutality during race riots (left up in the interest of public awareness). Ultimately, these directives trickle down to thousands of content moderators across the globe.

Almost every social media company today has similar protocols for content review. That these companies moderate their content thoughtfully is certainly encouraging: careful deliberation is necessary to make these platforms less toxic without stifling the constitutionally guaranteed freedom of speech and expression. Yet that they do it at all is cause for concern: these companies today are the ‘arbiters of truth’ for a massive audience. At no point in history have so few controlled what so many hear and see.

Earlier this year, the Delhi Legislative Assembly decided that Facebook’s content moderation process was not enough. In the wake of the communal riots in February, it established a Peace and Harmony Committee to investigate allegations that the social media giant did not appropriately apply its hate speech rules and policies, thereby contributing to the violence that engulfed India’s capital.

Ajit Mohan, Head of Facebook India, was summoned to testify and ultimately refused to appear. When the Delhi government threatened to prosecute him for defying the summons, the Supreme Court of India suspended the investigation till further orders. This saga has brought a thorny question back into the spotlight: who should be legally responsible for all this content?

Naturally, everyone agrees that the person posting the content should be responsible for their actions. The Indian Penal Code has provisions that, among other things, penalise hate speech (Section 153A), obscenity (Section 292), and defamation (Section 499). That we require our citizenry to act responsibly is embedded in our criminal jurisprudence. Generally, though, prosecuting the individuals who post offending content is difficult and time-consuming. For this reason, policymakers have long argued that intermediaries must share the burden of content regulation.

Should these intermediaries be legally responsible for the content they host? Section 79(2) of the Information Technology Act has so far protected social media companies from any potential liability for the content on their platforms. A decade ago, when this law was passed, such an approach may have been reasonable. At the time, Instagram and WhatsApp didn’t exist, and Facebook barely had 10 crore users. Today, all three platforms are central venues for social interaction and all forms of expression, from informed debate and dog pictures to conspiracy theories and hate speech.

Over time, rights activists have developed well-rehearsed arguments for stripping online firms of this protection from legal jeopardy. In 2017, seven men in Jharkhand were brutally lynched by irate villagers after rumours warning of kidnappers in the area went viral on WhatsApp. In a gory finale, photos and videos of the lynching were put into circulation.

Meanwhile, in Myanmar, human rights activists established that fake news on Facebook was directly linked to violence against Muslim Rohingyas. More recently, Twitter and Facebook have faced a series of accusations that their content moderators suppress conservative views while promoting liberal content, suffocating the free exchange of ideas.

Naturally, propagandists too have deployed social media as a potent tool. In 2018, Snigdha Poonam published her book Dreamers: How Young Indians Are Changing Their World, in which she reports visiting a political party’s “social media war room”. Employees spent hours “packaging as many insults as possible into one WhatsApp message”, which would then be sent out to party members for dissemination to their personal networks.

This change in the narrative signals the receding legal protection afforded to these companies. In December 2018, the Information Technology Ministry prepared the draft IT Intermediaries Guidelines (Amendment) Rules, which raise the standard of responsibility expected of social media companies in regulating their content.

Chillingly, the 2018 Rules require intermediaries to provide the government with any information or assistance touching on matters of cyber security. In February, the Delhi Government indicated its intention to bring this law into force in the very near future. When it does take effect, courts will doubtless be confronted with questions about its impact on users’ privacy and associated rights.

Globally, legal scrutiny of these companies is intensifying. In 2019, Australia adopted a draconian law that exposes social media companies which fail to remove offensive material to fines of up to a tenth of their annual turnover. The same year, Singapore passed a law obligating online firms to label as fake any news the Singapore government deems false. And in April of this year, the British government unveiled a 100-page policy paper on content regulation with similar provisions.

All that said, the larger question remains: who should decide what is and is not acceptable social media content? Stakeholders first turned to the companies themselves and called for better and faster self-regulation. This is why Facebook has an army of moderators who decide what content is appropriate.

This is problematic for two reasons. First, a company’s legally defined mandate is to relentlessly pursue its own self-interest. But for the marginal 2% of profits spent on corporate social responsibility, the law does not require companies to act in the larger public interest. Second, it is not for private organizations to define what is and is not legally acceptable content. Laws are passed by Parliament and interpreted by courts, not by a boardroom full of executives.

On January 28 this year, Facebook published draft bylaws for what it calls an ‘oversight board’: an impartial group of experts tasked with reviewing its content moderators’ decisions. In May, its first set of judges was appointed. Critics see this institution as no more than a fig leaf: Facebook’s latest attempt to delay real regulation and shrug off responsibility for its controversial content. Will governments consider such an entity legitimate, or simply at odds with their own efforts to regulate online content? Whatever the result, that enforceable laws are necessary should no longer be in doubt.

The author is a practicing lawyer at the Gurgaon-based law firm N South Advocates.
