Platformization hate. Patterns and algorithmic bias of verbal violence on social media

Domenico Trezza
2022-01-01

Abstract

This paper analyses Hate Speech in tweets posted during the implementation of the EU Digital Covid Certificate policy. The work starts from the assumption that Hate Speech (HS) is an often "submerged" phenomenon, because it also includes forms usually recognized as "incivility." Two research questions follow: the first asks which new categories of "hate" emerge in the EU Digital Covid Certificate policy debate, while the second concerns the methodological implications of using algorithms to detect the phenomenon. From a substantive point of view, the results are of considerable interest, because they show the emergence of a new kind of online hatred. However, the disagreements we encountered in constructing an unambiguous definition of HS for the supervised algorithm leave many questions open. Among them is the fact that the differences between HS, incivility, and even legitimate freedom of expression can be very small. On large social platforms, where the algorithm's criteria are not always explicit and also embody the platform's policies, this can be a problem.
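The abstract does not specify which supervised classifier was used. Purely as a minimal sketch of the kind of pipeline discussed, the following Python code trains a TF-IDF plus logistic-regression baseline on a hypothetical labelled tweet corpus (the file name `tweets_labelled.csv` and its `text`/`label` columns are assumptions, not the paper's data). The point it illustrates is methodological: whatever the model, its output inherits the operational definition of "hate" encoded in the labels.

```python
# Minimal sketch (not the paper's actual pipeline): a supervised hate-speech
# classifier for tweets. Assumes a hypothetical labelled CSV with columns
# "text" and "label" (1 = hate speech, 0 = other); the labels embody one
# specific operational definition of HS, which is where annotator
# disagreement enters the model.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

# Hypothetical annotated corpus of tweets (path and schema are assumptions).
df = pd.read_csv("tweets_labelled.csv")

X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["label"],
    test_size=0.2, random_state=42, stratify=df["label"],
)

# TF-IDF features + logistic regression: a common, simple supervised baseline.
model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=2)),
    ("clf", LogisticRegression(max_iter=1000, class_weight="balanced")),
])
model.fit(X_train, y_train)

# Held-out evaluation: the scores are only as meaningful as the definition
# of "hate" used during annotation.
print(classification_report(y_test, model.predict(X_test)))
```

Any such classifier reproduces the labelling scheme it was trained on, which is why the thin line between HS, incivility, and free expression noted above becomes a practical problem when the criteria are set by platform policies.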
Year: 2022
Keywords: Hate speech; Algorithms; Social Media; Digital Methods

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12607/65661