Does social media increase hatred among people? A study in “Nature” identifies when it happens and why

Have you ever come across a slew of hateful or negative comments under a news story on social media? Statistically, it is very likely. A recent study published in Nature explains how these hateful attitudes take hold and spread online.

For several years now, research has examined topics such as polarization, disinformation, and antisocial behavior online. However, studies of this kind often lack large enough datasets. And even when researchers obtain data through special agreements with companies such as Meta, it is difficult to determine whether the behaviors observed on social media are "intrinsic" to people or stem more from how the platforms are designed.

Thanks to a comparative analysis, however, the Nature study managed to investigate how behaviors classified as toxic arise and spread in a similar, consistent way across different platforms (Facebook, Gab, Reddit, Telegram, Twitter, Usenet, Voat, and YouTube).


When social media increases hatred: what the Nature study says

Among the lines of research on online hatred and violence is one dedicated to examining "harmful" language on social media and its effects, including offline. Since the design and algorithms of these platforms aim to maximize user engagement, it becomes difficult to distinguish how much online hate speech is due to a user's personality and how much the platform itself pushes them toward a negative mode of interaction. This distinction is crucial, as it reveals how social media can reflect and, in some cases, amplify social problems, including the degradation of public discourse.

To get a complete picture of online conversations on social media, the researchers analyzed approximately 500 million comments from Facebook, Gab, Reddit, Telegram, Twitter, Usenet, Voat, and YouTube, covering different topics and spanning about thirty years of activity. They adopted the definition of a toxic comment provided by Google's Perspective API (a toxicity classifier):

A rude, disrespectful, or unreasonable comment that may cause people to leave a discussion

Based on this definition, the API assigns a text a toxicity score between 0 and 1, indicating the likelihood that a reader will perceive the comment as toxic. The study used 0.6 as the threshold above which a comment is considered toxic.
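For illustration, here is a minimal Python sketch of how a single comment could be scored against that threshold. The endpoint and request shape follow Google's publicly documented Perspective (Comment Analyzer) API; the API key is a placeholder, and this is a toy reconstruction, not the authors' actual pipeline.

```python
import requests

# Endpoint and request shape follow Google's public Perspective
# (Comment Analyzer) API; the key below is a placeholder.
API_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
API_KEY = "YOUR_API_KEY"          # hypothetical: obtain one from Google Cloud
TOXICITY_THRESHOLD = 0.6          # the threshold adopted in the study

def is_toxic(text: str) -> bool:
    """Return True if Perspective scores the comment at or above 0.6."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(API_URL, params={"key": API_KEY}, json=payload)
    resp.raise_for_status()
    score = resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
    return score >= TOXICITY_THRESHOLD
```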

The first significant finding is the following: the number of users taking part in an online conversation tends to decrease as the conversation grows longer, but those who remain participate more actively. The study therefore investigates how the length of a conversation relates to the likelihood of encountering toxic comments. Here the resulting trends are almost all increasing, showing that, regardless of platform and topic, the longer a conversation gets, the more toxic it tends to be.
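The shape of this analysis can be pictured with a short, self-contained sketch. The data below are synthetic and the binning is a simplification of our own, not the paper's methodology; the point is only the computation: group comments by thread, measure each thread's toxic fraction, and see how that fraction moves across thread-length bins.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Synthetic stand-in data (the study used ~500 million real comments).
# The upward toxicity drift is injected here on purpose, so the toy
# data reproduces the direction of the paper's finding.
records = []
for thread_id in range(2000):
    length = int(rng.integers(2, 200))
    p_toxic = 0.05 + 0.10 * (length / 200)   # illustrative assumption
    for _ in range(length):
        records.append({"thread_id": thread_id,
                        "is_toxic": bool(rng.random() < p_toxic)})
comments = pd.DataFrame(records)

# Toxic fraction per thread, averaged across thread-length bins:
# an increasing curve mirrors the study's result.
stats = comments.groupby("thread_id")["is_toxic"].agg(["size", "mean"])
stats.columns = ["length", "toxic_fraction"]
stats["length_bin"] = pd.qcut(stats["length"], q=5, duplicates="drop")
print(stats.groupby("length_bin", observed=True)["toxic_fraction"].mean())
```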

It was also discovered that individuals do not avoid a priori the online environments in which controversy could arise: the rates at which users abandon a conversation follow an almost identical trend whether or not hateful comments appear in it.
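A minimal sketch of the kind of comparison this implies, using entirely synthetic data and a simplified metric of our own (comments a user posts before leaving a thread), not the paper's actual measure:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Toy data: both conditions are drawn from the same distribution,
# echoing the finding that abandonment behavior barely differs
# whether or not toxic comments are present.
rows = []
for thread_id in range(500):
    has_toxic_comments = bool(rng.random() < 0.5)
    for _ in range(int(rng.integers(5, 50))):
        rows.append({
            "has_toxic_comments": has_toxic_comments,
            "comments_before_leaving": int(rng.geometric(0.4)),
        })
participation = pd.DataFrame(rows)

# Near-identical averages in the two groups mirror the study's observation.
print(participation.groupby("has_toxic_comments")["comments_before_leaving"].mean())
```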

Why people take part in "hateful" conversations on social media: the reasons

Finally, the study explores why people engage in toxic online conversations and why longer discussions tend to be more toxic. Here are the reasons:

  • Presence of controversial topics: when a controversy emerges between people with opposing views, debates become longer and more heated, and more toxicity surfaces. This happens, for example, when users with different political leanings converse with each other (a toy way to quantify this kind of polarization is sketched after this list).
  • Presence of engagement peaks: factors such as the discussion losing focus, or the intervention of so-called "trolls" (people who join with the sole purpose of inflaming the debate), can lead to a greater share of toxic exchanges.
  • Lack of nonverbal cues and physical presence: compared with face-to-face interactions, the screen is perceived as a shield. It also lets us leave a conversation far more easily than in person, reducing our sense of responsibility for it.
  • Echo chamber formation: our opinions, both online and in everyday life, are shaped by our pre-existing beliefs, which is why we tend to seek out and accept information that supports our ideas while ignoring or excluding contrary perspectives.
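To make the first point concrete: controversy in a thread is often quantified by how evenly its participants split into opposing camps. The sketch below is a toy proxy of that idea, not the measure used in the paper, assuming each user carries a political-leaning score in [-1, 1].

```python
import numpy as np

def controversy(leanings: np.ndarray) -> float:
    """Toy controversy score in [0, 1]: highest when a thread's
    participants split evenly into opposing camps."""
    left = np.mean(leanings < 0)    # share of users leaning one way
    right = np.mean(leanings > 0)   # share leaning the other way
    return float(4 * left * right)  # 1.0 at a 50/50 split, 0.0 if one-sided

print(controversy(np.array([-0.8, -0.3, 0.4, 0.9])))  # 1.0: evenly split
print(controversy(np.array([0.2, 0.5, 0.7, 0.9])))    # 0.0: homogeneous
```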

The study also challenges the widespread opinion that if toxic and rude comments receive many "likes", or are not curbed by moderators, they establish hateful behavior in the conversation and other users will start to emulate it. The Nature study showed that there is no evidence to support this position.

Ultimately, the researchers believe that monitoring emerging polarization among users could help design early interventions in online discussions, before they turn into hate speech. However, they acknowledge that other dynamics shaping online discourse should not be overlooked and deserve separate treatment (the presence of influencers and trolls, cultural and demographic factors, geographical area, and so on).

Bibliography
Avalle, M., Di Marco, N., Etta, G. et al. Persistent interaction patterns across social media platforms and over time. Nature (2024)