As part of the series organized within the scope of the project “Utilizing Digital Technology for Social Cohesion, Positive Messaging and Peace by Boosting Collaboration, Exchange and Solidarity”, the fourth panel, titled “Algorithmic Design in the Context of Human Rights”, took place on Friday, November 10, 2023, at the Hrant Dink Foundation’s Havak Hall. The panel was broadcast live in English and Turkish on the Foundation's YouTube account.

In the first session of the panel, titled ‘Designing Reliable Algorithms’ and moderated by Melis Öneren Özbek, Tanu Mitra and Gözde Gül Şahin talked about how reliable algorithms can be designed, while Sarah Eagan discussed the impact of the changes to the Twitter/X platform on its algorithms.

First Session: Designing Reliable Algorithms
Speakers: Sarah Eagan, Gözde Gül Şahin, Tanu Mitra
Moderator: Melis Öneren Özbek

In the first session, titled ‘Designing Reliable Algorithms’, Sarah Eagan from the Center for Countering Digital Hate (CCDH) began her speech by addressing the role of social media companies in the spread of hate speech and misinformation. Referring to CCDH's research on the recommendation algorithms of Instagram and TikTok, Eagan noted that the platforms were found to recommend more harmful content to users they deem vulnerable. She also highlighted the increase in discriminatory and aggressive expressions on Twitter/X after Musk's acquisition, pointing to factors such as the platform restricting academic access, verified accounts losing their status, and the return of previously banned users.

Gözde Gül Şahin from Koç University emphasized the need for large training datasets for the proper training of language models. She explained how different models are trained and ranked over time, highlighting that the best version of a model can be built up through examples. She pointed out that these models are not designed to generate unbiased information, and that Artificial Intelligence tools can be conditioned to produce biased and toxic content, opening them up to misuse by individuals. Şahin noted that there is as yet no strategy for solving these problems and concluded her speech by emphasizing the importance of users having the knowledge needed to determine whether Artificial Intelligence tools are producing incorrect or harmful information.

Tanu Mitra from the University of Washington discussed the work of the Social Computing research team on Algorithmic Governance. In two studies focusing on YouTube and Amazon, the team audited the platforms' search and recommendation algorithms to identify their roles in the spread of misinformation. The research revealed that YouTube tends to recommend more conspiracy videos to accounts whose watch histories contain conspiracy-themed videos, whether those videos promote, debunk, or are neutral toward the conspiracy, effectively pushing users into echo chambers. Mitra advised that traditional recommendation algorithms should not be applied uncritically to every topic and emphasized the importance of conducting audits to regulate social media companies. She mentioned various types of audits, including external audits, where companies are held accountable to a third party, and cooperative audits, where companies share their algorithms with outside collaborators and revise them together. Mitra concluded her speech by stating that real-world change can be achieved through audits that examine social media companies and hold them accountable for potentially harmful algorithmic designs, encouraging them to fulfill their responsibilities to users.
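To make the audit methodology concrete, the sketch below is a hypothetical, simplified illustration of the kind of ‘sock puppet’ audit Mitra described, in which synthetic accounts are seeded with different watch histories and the resulting recommendations are compared across conditions. The toy recommender and all names in it are assumptions for illustration only, not the study's actual code or any platform's real API.

```python
# Hypothetical sketch of a "sock puppet" recommendation audit, loosely in the
# spirit of the YouTube study described above. The recommender below is a toy
# stand-in, not YouTube's actual system or API.
import random
from collections import Counter

# Each sock-puppet account is seeded with a watch history; the audit compares
# accounts whose histories contain conspiracy-themed videos (whether promoting,
# debunking, or neutral toward the conspiracy) against a control group.
CONDITIONS = ["promoting", "debunking", "neutral", "control"]


def seed_history(condition: str, length: int = 40) -> list:
    """Build the seed watch history for one audit condition."""
    label = "other" if condition == "control" else "conspiracy-themed"
    return [label] * length


def toy_recommender(history: list, n: int = 20) -> list:
    """Toy engine: the more conspiracy-themed videos in an account's history,
    the more conspiracy videos it is served -- the feedback loop the audit
    is designed to measure."""
    bias = history.count("conspiracy-themed") / max(len(history), 1)
    return ["conspiracy" if random.random() < 0.05 + 0.5 * bias else "other"
            for _ in range(n)]


def run_audit(trials: int = 50) -> dict:
    """Simulate fresh accounts for each condition and report the share of
    conspiracy videos among their recommendations."""
    results = {}
    for condition in CONDITIONS:
        counts = Counter()
        for _ in range(trials):
            counts.update(toy_recommender(seed_history(condition)))
        results[condition] = counts["conspiracy"] / sum(counts.values())
    return results


if __name__ == "__main__":
    random.seed(0)
    for condition, share in run_audit().items():
        print(f"{condition:>10}: {share:.0%} of recommendations conspiratorial")
```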

In the second session of the panel, titled ‘The Relationship Between Data and Algorithms in the Context of Human Rights’ and moderated by Selin Çetin, Shmyla Khan, Lisa Ginsborg and Gökhan Ahi evaluated how data is processed in the design of algorithms, within the context of human rights and in relation to the legal framework.

Second Session: The Relationship Between Data and Algorithms in the Context of Human Rights
Speakers: Gökhan Ahi, Shmyla Khan, Lisa Ginsborg
Moderator: Selin Çetin

Gökhan Ahi, the first speaker of the second session, titled ‘The Relationship Between Data and Algorithms in the Context of Human Rights’, evaluated the consequences of rising populism in relation to the practices of social media companies. He emphasized how the algorithms used on social media platforms contribute to echo chambers and feed populism with false and incomplete information, highlighting that such information propagates discriminatory discourse much faster than accurate information spreads. He noted that although different countries have introduced regulations to address this issue, they are not sufficient, and argued that algorithm design should prioritize pluralism and international collaboration to prevent such problems.

The second speaker of the session, Shmyla Khan, began her speech by highlighting that artificial intelligence systems are built with low-wage labor in the Global South but developed in the Global North. She pointed out that producing artificial intelligence technologies in this manner leads to algorithmic discrimination against marginalized communities, and that regulations are inadequate for the rest of the world since they are drafted primarily with the Global North in mind. Khan continued her speech by stating that artificial intelligence consists of data and human experience. She discussed the extensive literature indicating that algorithms are not neutral and that, in fact, artificial intelligence reinforces gender- and race-based discrimination. She mentioned that despite regulations like the EU Artificial Intelligence Act, their scope is limited, and the risks specified in the law do not cover ethnic and gender-based discrimination. She concluded her speech by stating that regulations should focus on human rights and that policymaking should be inclusive and collaborative.

Lisa Ginsborg from the European Digital Media Observatory (EDMO) began her speech by discussing EDMO's efforts to increase societal resilience against misinformation through journalism, media literacy, fact-checking, and research around these concepts. She emphasized that social media companies' algorithms should not be mysterious, and that what algorithms are designed to do should be disclosed openly. Ginsborg mentioned that the transparency reports required under the Digital Services Act (DSA) cover companies' content moderation policies, but the transparency in these reports is limited, with companies tending to conceal information about their decision-making processes. Similarly, she stated that under recent provisions of the DSA, social media companies must make public data accessible, yet they actively prevent this access through restrictions on their APIs.

This project is financed by the European Union.