Hate speech

Online communication platforms have become a lifeline for millions of people affected by natural disasters and armed conflicts: social media and messaging apps help them to keep in touch with family and friends and to access information, e.g. about where to find food, shelter or medical assistance. This information can directly influence how people prepare for, respond to and recover from different types of crises.

With growing numbers of people connected online, these platforms have become a vital communication channel for people affected by crisis and the organizations trying to reach them. However, a constant flow of instantaneous, unfiltered exchanges also opens up the possibility that this information may be weaponized – in other words, that it may be exploited, either deliberately or unwittingly, in a way that provokes, misleads or influences the public, often with dangerous, polarizing consequences.

One of the most worrying trends in this context is the growing presence of hate speech on social media platforms, especially during crises or other politically or socially tense situations.
In digital hate speech, intolerance typically fuels the creation and spread of hate-filled narratives, which are then amplified through online channels. This messaging reverberates through analogue and digital communication systems alike, and is particularly effective at heightening group tensions and triggering violence against members of other groups. The rise in attacks on immigrants and other minorities has raised fresh concerns about the links between inflammatory speech online and violent acts. Such incidents, reported on nearly every continent, can cause or contribute to emotional, psychological, social, material and even physical harm.

The use of digital tools to distort facts and spread incendiary rhetoric is having a strong impact on crises and conflicts, heightening social vulnerabilities in novel and unpredictable ways. Even before the digital transformation, communications technology (e.g. print, radio and television) was a well-established driver of violence. Recent history offers horrific examples of propaganda and hate speech being leveraged to deadly effect, most notoriously in the Nazi and Rwandan genocides. In the digital domain, however, such messaging can spread even more quickly, beyond the reach of those who have traditionally mitigated the harm of information-borne threats.

The same technology that allows social media to galvanize pro-democracy activists can be used by hate groups seeking to organize and recruit. It also allows fringe sites, including those that fuel conspiracy theories and encourage discrimination, to reach audiences far wider than their core readership.
Given the rapid development of digital information technology and its evolving potential to exacerbate and accelerate conflict dynamics, violence and harm, this is an important area of concern.

That is why the Red Cross and Red Crescent Movement is working to better understand hate speech on social media, to improve its staff’s ability to recognize it, and to find ways to address it, in order to ensure that humanitarian principles are protected in the digital age.

Resources
Delphine van Solinge, Digital risks for populations in armed conflict: Five key gaps the humanitarian sector should address (12 June 2019)
ICRC, Symposium on digital risks in situations of armed conflict (January 2019)
ICRC, IFRC and UNOCHA, How to use social media to engage better with people affected by crisis (October 2017)

Community Engagement and Accountability Toolkit