As part of the series organized within the scope of the project “Utilizing Digital Technology for Social Cohesion, Positive Messaging and Peace by Boosting Collaboration, Exchange and Solidarity”, the sixth panel titled ‘Alternative Approaches to Combat Hate Speech and Discrimination’ took place on Tuesday, September 10, 2024, at the Hrant Dink Foundation’s Havak Hall. The panel was live-streamed on the Foundation’s YouTube account in English and Turkish. The panel was moderated by Sarper Durmuş from Istanbul Bilgi University, and the speakers were Nazar Akrami from Uppsala University, Morgane Bonvallat from the Public Discourse Foundation, and Katharina Klappheck from Gunda Werner Institute.
In his opening address, Aret Demirci, the Turkey Office Director of the Friedrich Naumann Foundation, emphasized the opportunities that artificial intelligence offers in combating hate speech and discrimination. Noting that the impact of online hate speech extends beyond the digital realm into daily life, Demirci concluded his speech by stating that the capacity of AI technologies to detect and classify hate speech will be among the most effective tools in the fight against discrimination.
The moderator of the panel, Sarper Durmuş, noted that hate speech and discrimination are on the rise both in the digital sphere and in daily life, and emphasized the growing importance of hate speech studies in this climate. He referred to recent examples from the UK and the Paris Olympics of how online hate speech and disinformation spread rapidly through social media platforms. Highlighting the tension between freedom of expression and hate speech, Durmuş stressed the need for alternative methods of combating discrimination and then introduced the first speaker, Nazar Akrami.
Nazar Akrami began his speech by explaining that in his studies in Sweden he uses the concept of ‘toxic language’, which is considered more accessible than ‘hate speech’, a term with a legal definition in the country. Akrami described text classification studies conducted to understand toxic language and touched on the work of the ‘European Hate Lab’, which was established to examine toxic language more closely, especially in smaller languages such as Swedish that social media companies have not worked on enough. Akrami, who researches prejudice against different groups and how such prejudice is expressed on online platforms, stated that factors such as the absence of norms (anomie) and anonymity influence people’s use of toxic language. He emphasized that applications that create a self-control mechanism by informing users of the level of toxicity in their texts, together with the introduction of community norms, can reduce the use of toxic language.
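As a rough illustration of the kind of self-control mechanism Akrami described, the sketch below shows how a posting interface might score a draft for toxic language and prompt the user to reconsider before publishing. This is not Akrami's system; the keyword-based scorer and the threshold are placeholders standing in for a trained classifier tuned to the target language.

```python
# Minimal sketch of a "toxicity nudge" shown to a user before posting.
# The word-list scorer is a hypothetical stand-in for a trained text
# classifier (e.g. one built for a smaller language such as Swedish).

TOXIC_MARKERS = {"idiot", "vermin", "scum"}  # illustrative only

def toxicity_score(text: str) -> float:
    """Return a rough 0-1 toxicity estimate for a draft post."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in TOXIC_MARKERS)
    return min(1.0, 5 * hits / len(words))

def submit_post(text: str, threshold: float = 0.4) -> bool:
    """Warn the user if the draft looks toxic; let them decide whether to post."""
    score = toxicity_score(text)
    if score >= threshold:
        print(f"Your draft scores {score:.2f} for toxic language.")
        return input("Post anyway? (y/n) ").strip().lower() == "y"
    return True
```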
Morgane Bonvallat began her speech by describing the work of the Public Discourse Foundation, which aims to make online spaces safe and to ensure that individuals can participate in them actively and freely. Bonvallat explained that the foundation seeks to make data and research on hate speech more accessible. She gave examples from the experiences of Swiss female politicians and journalists and mentioned that the foundation also supports individuals and groups targeted by hate speech and toxic language. She stated that in a study examining 8,000 posts in Swiss German, French and English obtained from a social media platform, they observed that 50-70% of hateful posts were shared by just 1% of users. Bonvallat underlined that this finding points to an opportunity: hate speech can be combated through empathy-based counter-discourse in online spaces.
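The sketch below illustrates the kind of concentration analysis behind the figure Bonvallat cited: given one author ID per hateful post, it measures what share of all hateful posts comes from the most active 1% of accounts. The data and function name are invented for illustration and do not reflect the foundation's actual dataset or code.

```python
from collections import Counter

def share_from_top_authors(hateful_post_authors, top_fraction=0.01):
    """Fraction of hateful posts produced by the most active `top_fraction` of authors.

    `hateful_post_authors` holds one author ID per hateful post.
    """
    counts = Counter(hateful_post_authors)
    total_posts = sum(counts.values())
    top_n = max(1, int(len(counts) * top_fraction))  # at least one account
    top_posts = sum(c for _, c in counts.most_common(top_n))
    return top_posts / total_posts

# Toy data: one prolific account ("u1") and many occasional ones.
posts = ["u1"] * 60 + ["u2"] * 5 + [f"u{i}" for i in range(3, 40)]
print(f"{share_from_top_authors(posts):.0%} of hateful posts come from the top 1% of authors")
```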
Finally, Katharina Klappheck spoke about the activities of the Gunda Werner Institute and its approach of treating hate speech as a cybersecurity problem. They emphasized the necessity of a feminist perspective for making online spaces safe and explained that cybersecurity gaps directly affect people's safety and democratic values, giving examples from the experiences of German female politicians. Klappheck mentioned that they take a multi-layered approach to combating hate speech and stressed that hate speech is not an isolated issue but a social problem that threatens disadvantaged groups and democracy. Stating that content moderation plays a crucial role in the fight against hate speech, Klappheck noted that improving moderators' working conditions and clarifying their job descriptions would be an important step in this area. They concluded their speech by mentioning good practices from Brazil and Germany and reiterated the importance of work that prioritizes security, privacy and active participation.
In the Q&A section of the panel, the discussion covered how the approaches presented might be applied in Turkey, the role of social media companies and government policy in the fight against hate speech, and the importance of content moderation. This section also focused on the profiles of users who produce hate speech in online spaces. It was noted that detection tools that take differences in context and language into account would be effective in combating hate speech.
This project is financed by the European Union.