As part of the series organized within the scope of the project “Utilizing Digital Technology for Social Cohesion, Positive Messaging and Peace by Boosting Collaboration, Exchange and Solidarity”, the first panel, titled “Combating Hate Speech and Disinformation against Social Polarization”, took place on Friday, October 7, 2022 at the Hrant Dink Foundation’s Havak Hall. The panel was broadcast live on the foundation’s YouTube account in English, Arabic and Turkish.
The panel was moderated by Handan Uslu, the founder of Gözlemevi, Turkey’s first internet monitoring organization. The panelists were Kareem Darwish from aiXplain Inc., Stefanie Ullmann from the Centre for Research in the Arts, Social Sciences and Humanities, and Alex Mahadevan, founding director of MediaWise.
Speaking about social polarization through a range of examples, Kareem Darwish noted that the spread of misinformation and hate speech can lead to crimes as severe as genocide. Darwish stated that polarization can be measured through how much the groups in a society communicate with one another, and that there are two main methods of quantifying this: stance/opinion detection and in-group/out-group bias. He added that social media contains groupings called ‘echo chambers’, similar to the divisions found in real life, and that these divisions can feed discriminatory speech and hate speech. To measure polarization, Darwish said, different groups must first be identified, and then the extent to which these groups diverge should be examined. In stance/opinion detection, he explained, the views and feelings of tens of thousands of people about a subject or event are examined rather than the views of a particular group or person, and the most practical way to do this is ‘supervised learning’. The words users employ, their hashtags, their retweets, and the accounts they follow can all serve as signals for detecting opinions in supervised learning. He noted that stance detection makes it possible to go beyond what people explicitly say, that fully supervised methods achieve up to 90 percent accuracy, and that semi-supervised methods can exceed 95 percent. In unsupervised learning, Darwish said, people from different groups are clustered using shared signals, such as the accounts they retweet.
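The unsupervised approach Darwish describes, grouping users by shared signals such as the accounts they retweet, can be sketched as a simple clustering problem. The following is a minimal illustration under assumed toy data, not the panelists' actual implementation: each user becomes a binary vector over retweeted accounts, and a basic k-means loop separates the two groups.

```python
# Sketch (with hypothetical data) of clustering users into groups by
# the accounts they retweet, as a proxy for polarized communities.
import numpy as np

def cluster_users_by_retweets(retweets, n_groups=2, n_iter=20, seed=0):
    """retweets: list of (user, retweeted_account) pairs.
    Returns {user: group_id} from k-means on binary
    user x retweeted-account vectors."""
    users = sorted({u for u, _ in retweets})
    accounts = sorted({a for _, a in retweets})
    u_idx = {u: i for i, u in enumerate(users)}
    a_idx = {a: i for i, a in enumerate(accounts)}
    # Build the binary user x account matrix.
    X = np.zeros((len(users), len(accounts)))
    for u, a in retweets:
        X[u_idx[u], a_idx[a]] = 1.0
    # Initialize centers from random users, then iterate assign/update.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(users), n_groups, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for k in range(n_groups):
            if (labels == k).any():
                centers[k] = X[labels == k].mean(axis=0)
    return {u: int(labels[u_idx[u]]) for u in users}

# Hypothetical 'echo chambers': two user groups retweeting disjoint accounts.
pairs = [("u1", "A"), ("u1", "B"), ("u2", "A"), ("u2", "B"),
         ("u3", "C"), ("u3", "D"), ("u4", "C"), ("u4", "D")]
groups = cluster_users_by_retweets(pairs)
```

On this toy data the two retweet communities end up in different clusters; real analyses would use much larger matrices and more robust clustering, but the principle is the same.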
Stefanie Ullmann began her speech by noting that online hate speech and misinformation are among the biggest challenges to democracy and peace. Observing that online hate speech triggers real-life violence and brings psychological harm with it, Ullmann showed that the number of hate crimes has also increased. She mentioned that although social media companies’ current approaches rely on content moderation, they require the person targeted by hate speech to report the content, and she added that censorship raises ethical issues. Ullmann said that proposed techniques based on artificial intelligence try to steer users away from dangerous content toward more positive areas, or to redirect their attention elsewhere. Sharing various examples of different methods of producing counter speech, she emphasized that the aim of the counter speech and the identity of whoever develops it are significant questions. In the last part of her presentation, Ullmann showed examples from natural language processing, sentiment analysis, and network analysis initiatives related to automatic counter speech, as well as the problems encountered in generating it.
Alex Mahadevan, the director of MediaWise, a digital literacy organization, started his speech by noting that while social media platforms spread disinformation and hate speech, verification methods offer a useful countermeasure. Mahadevan stated that at MediaWise they aim to raise people’s awareness through media literacy training tailored to target groups, so that people can fact-check the news they encounter. He showed with examples that genuine media tools can be used to produce manipulated content and that data can be generalized incorrectly. Stating that MediaWise especially teaches young people to do their own fact-checks, Mahadevan described a method called ‘lateral reading’: checking a claim’s accuracy by reading several other sources at the same time and investigating where the source and the conversation around it originate. With a method called ‘peer tutoring’, he said, young people reach out to their peers and share their fact-checking methods.

This project is financed by the European Union.