Hate speech online is on the rise (Oboler, 2016; Perrigo, 2019; Pachego and Melhuish, 2020) Footnote 1. The response to this rise has broadly taken two approaches to harm reduction on platforms.

The first approach is technical, attempting to develop software models to detect and remove problematic content. Indeed, over the last few years in particular, significant attention has been directed at abusive speech online, with huge amounts of work poured into constructing and improving automated systems (Pavlopoulos et al., 2017; Fortuna and Nunes, 2018). Articles in computer science and software engineering in particular often claim to have studied the failings of previous techniques and discovered a new method that finally solves the issue (Delort et al., 2011; Mulla and Palave, 2016; Tulkens et al., 2016). And yet the inventiveness of users and the ambiguity of language mean that toxic communication remains complex and difficult to address.

The second approach is non-technical, stressing that hate speech online is a problem that only humans can address. This framing, not incorrectly, points out that automated interventions will always be inherently limited, unable to account for the nuances of particular contexts and the complexities of language. Technical understanding of this content will inevitably be limited, explains researcher Robyn Caplan (quoted in Vincent, 2019), because automated systems are being asked to understand human culture, with its racial histories, gender relations, power dynamics, and so on: "a phenomenon too fluid and subtle to be described in simple, machine-readable rules". The response is to dramatically expand content moderation teams. In May 2018, for example, Facebook announced that it would be hiring 10,000 new workers into its trust and safety team (Freeman, 2018). However, the toll for those carrying out this kind of work, where hate speech, graphic images, and racist epithets must be carefully reviewed, is incredibly high, leading to depression and other mental health issues. In being forced to parse this material, workers "do not escape unscathed" (Madrigal, 2017). As well as the hazards of the content itself, employees are often under intense pressure to meet performance targets, an anxiety that only adds to the inherent psychological toll (Newton, 2019).

In addition to these two approaches, there also seems to be a popular assumption, evidenced in online comments and in more mainstream literature, that hate speech is the natural product of hateful people.
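The limits of "simple, machine-readable rules" can be made concrete with a minimal sketch. The toy keyword filter below is purely hypothetical (it is not any platform's actual system, and the blocklisted words are innocuous placeholders): a lightly obfuscated insult evades the rule, while a self-deprecating remark triggers it.

```python
# Hypothetical keyword-based filter, for illustration only.
# Not any real platform's moderation logic.
BLOCKLIST = {"idiot", "moron"}

def naive_filter(text: str) -> bool:
    """Flag a message if any blocklisted word appears as a token."""
    words = text.lower().split()
    return any(w.strip(".,!?") in BLOCKLIST for w in words)

# A trivially obfuscated insult slips through (false negative)...
print(naive_filter("what an id1ot"))
# ...while an innocuous self-deprecating remark is flagged (false positive).
print(naive_filter("I felt like an idiot today"))
```

Both failure modes stem from the same property: the rule matches surface form, not meaning or context, which is precisely the gap Caplan describes.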