ONLINE HATE SPEECH: PERVASIVE DISCRIMINATION AND HUMILIATION ON SOCIAL MEDIA

THE CONTEXT: Efforts in the fight against “the tsunami of hate and xenophobia in social media” appear to be largely failing: hate is increasing, not diminishing. Attempts to control it through moral suasion and voluntary controls overseen by regulators have also largely failed, sometimes due to vested interests, at other times due to a lack of recognition of such issues.

THE ISSUE: In an unequal society, hate speech grows out of unequal power relations, which determine one’s ‘vulnerability’ to extreme forms of discrimination. Hate speech is inflicted on the basis of religion, gender, sexuality, disability, nationality, race, and caste. Its tangible presence can silence exactly those at the forefront of expressing dissent against it. When hate speech occurs offline, various mechanisms exist to prevent it; when it occurs on digital platforms, where the “sense of control is missing”, regulatory lacunae give rise to problems. Further, it can descend into real-life violence, as witnessed in events such as:

  • Capitol Hill violence
  • Frequent trolling of influential persons
  • Caste-based hate speech
  • Gender-based hate speech

WHAT IS CYBERHATE?

Cyberhate can be defined as the use of violent, aggressive or offensive language through the Internet and social networks, focused on a specific group of people who share a common attribute, such as religion, race, gender, sexuality or political affiliation. It is based on a power imbalance, carried out systematically and uncontrollably through digital media, and is often motivated by ideologies to which individuals and groups adhere. The resulting behaviours can be considered acts of deviant communication, as they may violate shared cultural standards, rules or norms of social interaction in group contexts.

REASONS FOR CYBERHATE

  • Anonymity: One of the supposed advantages of the Internet as a communication medium is that people are not compelled to reveal aspects of their offline identity unless they wish to do so. It has been suggested that this anonymity can enable freer speech, because people can say what they think without fear that others will react or respond unfavourably simply because of the colour of their skin, their sexual orientation, or their gender identity.
  • The perceived anonymity of the Internet may remove the fear of being held accountable for cyberhate, and may also evince a sense that the normal rules of conduct do not apply; the associated feeling of liberation may drive people to give in to their worst tendencies.
  • Invisibility: A second potentially distinctive feature of online hate speech is the physical distance between speaker and audience, meaning that the speaker can be non-visible, or in some sense invisible, to the audience and vice versa.
  • Community: People have an innate desire to engage with like-minded others, allied to the power of the Internet to put people in touch with each other: people who otherwise might be unable to connect due to geography, or who might simply be unaware of each other’s existence.
  • In that sense, online hate speech is distinctive simply because the Internet has become the method of choice among hate groups for cementing in-group status and fomenting a sense of intra-group community. This itself relies on other distinctive features of the Internet, one being that it is relatively cheap and easy to use compared with other comparable means of communication.
  • Instantaneousness: On the Internet, the time delay between having a thought or feeling and expressing it to a particular individual who is located a long distance away, or to a group of like-minded people or to a mass audience can be a matter of seconds.

HARM

  • Because the Internet allows cheap access to mass communication and easy transmission of words, images, music and videos, it has a tendency to support and encourage ingenuity, creativity, playfulness, and innovation in such content
  • The same applies to hate speech. Online hate speech is heterogeneous and dynamic: it takes many different forms, and those forms can shift and expand over relatively short spaces of time
  • The Internet is home to forms of hate speech that are banned by existing hate speech laws in India, including the stirring up of hatred toward people based on certain protected characteristics, and certain public order and harassment offences aggravated by hostility toward people based on those characteristics.
  • But the Internet is also home to hate speech that is not directly banned by existing hate speech laws in India, including forms of negative stereotyping, vilification and group defamation.

CURRENT LEGAL PROVISIONS TO DEAL WITH HATE SPEECH

  • Not defined in the legal framework: Hate speech is neither defined in the Indian legal framework nor can it be easily reduced to a standard definition due to the myriad forms it can take.
  • The Supreme Court, in Pravasi Bhalai Sangathan v. Union of India (2014), described hate speech as “an effort to marginalise individuals based on their membership in a group” and one that “seeks to delegitimise group members in the eyes of the majority, reducing their social standing and acceptance within society.”
  • The Indian Penal Code criminalises speech intended to promote enmity, or to prejudice the maintenance of harmony, between different classes.
  • Specifically, sections of the IPC, such as 153A, which penalises promotion of enmity between different groups;
  • 153B, which punishes imputations and assertions prejudicial to national integration;
  • 505, which punishes rumours and news intended to promote communal enmity, and
  • 295A, which criminalises insults to the religious beliefs of a class by words with deliberate or malicious intention.
  • Summing up various legal principles, in Amish Devgan v. Union of India (2020), the Supreme Court held that “hate speech has no redeeming or legitimate purpose other than hatred towards a particular group”.
  • Lack of established legal standard: Divergent decisions from constitutional courts expose the lack of established legal standards in defining hate speech, especially those propagated via the digital medium.

FROM THE PRIVATE SIDE

Platforms have also acted on their own: YouTube, for instance, expanded its hate speech policy in 2019 to include caste.

PROBLEMS IN CONTROLLING ONLINE HATE SPEECH

  • Absolute free speech laws that protect against any type of censorship inadvertently extend protection to hate speech as well. In India, hate speech online is not expressly restricted: it remains undefined, with no appropriate IT Act provisions or regulatory mechanism for online content. Absent appropriate codes or regulations for intermediaries, those who tend to have a louder voice, such as politicians or celebrities, can harness this capacity to incite anger or divide communities without facing any form of liability. At the same time, overcriminalisation poses its own problem, as it can have a chilling effect on free speech.
  • Government authorities and social media platforms alike have been criticised for their failure to secure data and effectively regulate content. Many platforms, experts, and politicians have welcomed government-led moderation of illicit content, with ample checks and balances against arbitrary imposition.
  • Human rights groups and activists, however, are sceptical of allowing any avenue for governmental intervention, whether through the arbitrary imposition of bans, content moderation or internet shutdowns. Another paradigm champions the principle of “self-regulation”, under which the platform itself adjudicates on its user policy and community guidelines. Self-regulation, though, has largely been ineffective in preventing abuse of platforms and has drawn criticism in various democracies.
  • The difficult question in legislating against hate speech or fake news concerns the existing ethical-legal gap, with the executive response rooted in a conservative understanding of online spaces and data. While disruptive technologies evolve rapidly, regulations fail to close the gaps needed to deter unethical behaviour. The platforms alone are not equipped to remodel the approach to countering manipulation and hate speech. And given the cross-jurisdictional nature of these acts and the ease with which content multiplies, taking down content is not a silver bullet against hate speech and fake news.

THE STRUCTURAL PROBLEM

  • The overregulation-versus-under-regulation debate tends to overshadow the deeper structural problems inherent in the tech platforms themselves. The platform structure is driven by exploiting disparities of wealth and power, as algorithms reward virality and interaction for monetary gain, even when the content is “divisive, misleading or false”.
  • Platforms are also known to amplify certain types of users and content over others. Platforms decentralise free speech, yet “special” megaphones are handed to sensationalist ideologies and popular content. Their algorithmic nature creates and perpetuates an information divide, separating communities that subscribe to different content into echo chambers and information silos.
  • This has become obvious from the platforms’ incentive structure, which is driven by the monetisation of user data, advertising money, and constant engagement. For example, a few popular YouTube channels that had earlier received “Creator Awards” were found inciting violence, including rape, yet suffered few takedowns. Platforms conveniently hide behind the garb of being free speech enablers, taking little responsibility, if any.
  • Even though xenophobia, communalism and racism have long existed in the real world, the susceptibility of social media platforms to misuse has magnified such ill-speech at a faster pace.

WHAT SHOULD BE INDIA’S APPROACH?

  • Institute an independent regulator to oversee compliance with fake news and hate speech codes that will be adopted;
  • Proportional, necessary and minimal interventions from the government and platforms with effective and consistent application of their duties;
  • An inclusive and ethical Code of Conduct developed in consultation with all stakeholders to realign the platform’s fiscal-driven-incentives with the public interest;
  • Democratic application of penal and non-penal standards of existing laws;
  • Periodic review of policies to improve effectiveness;
  • Encourage transparency by commissioning open-source research with periodic reports from regulators, platforms, civil society organisations and academia;
  • Avoid creating any barriers or strengthening any dominant positions by large incumbents;
  • Promote digital education initiatives and workshops to acquire necessary skills from a young age;
  • Redressal and appellate mechanisms to provide recourse against any wrongful application of standards, take-downs or breaches.
  • There should be continuous collaborative engagements within the industry, along with state and non-state actors.
  • While the creation of charters or codes that define each stakeholder’s duties and rights will be a lengthy process, a pre-emptive plan cannot be delayed further.
  • This can enable the creation of voluntary multi-platform and multi-stakeholder initiatives. Codes of ethics and voluntary audits are other welcome by-products of such collaborative measures. Issue-specific measures, such as advertisement transparency rules and media guidelines or ethical codes, also aim to strengthen industry standards.
  • Some shared responsibilities between the stakeholders have already been outlined but limited action has been taken to counter online harm.
  • Platforms have deployed minimal resources to take down blatantly illegal content, as they lack real-time local responders who are well-versed in Indian languages.
  • Even their community guidelines are globally uniform, and are thus limited by local implementational and definitional challenges. The government and the tech platforms should therefore complement other information gatekeepers, such as the media and politicians.

THE CONCLUSION: Hate speech is provocative and divisive, and in extreme scenarios where it has remained unchecked, has been responsible for terrorism and genocide. With newer tools to weaponise and sensationalise enmity, it must not be protected under the realm of free speech doctrine. Similarly, misinformation (“fake news”) also has the potential to affect human safety and public health, and instigate violence. If fake news and hate speech continue to proliferate at the current rate, they pose threats to the democratic ecosystem. India must work to devise an all-stakeholder model to counter the weaponisation of online content before it further widens societal faultlines.
