Friday, September 20, 2024

How Facebook's algorithms amplified hate speech during Ethiopia's Tigray conflict



Amnesty International has accused Facebook of playing a role in the violence that occurred during the two-year conflict in Ethiopia's Tigray region. According to Amnesty's report, the platform's algorithms amplified the spread of harmful rhetoric, and Facebook failed to take adequate measures to stop it. Facebook's parent company, Meta, has denied these allegations in the past, stating that it has invested heavily in content moderation to remove hateful content.

Facebook's role in spreading hate speech came under scrutiny during the conflict between Ethiopia's federal government and Tigrayan forces, which resulted in an estimated 600,000 deaths from fighting, starvation, and lack of healthcare. Although a peace deal was reached between the federal government and the Tigray People's Liberation Front, conflict persists in other regions of Ethiopia.

Amnesty's report highlights Meta's data-driven business model as a significant threat to human rights in conflict-affected areas. This is not the first time Facebook has faced accusations of spreading incitement against ethnic Tigrayans, and Meta is currently facing a lawsuit over its alleged failure to address harmful content. Amnesty reviewed internal documents from Meta and found that the platform's algorithmic systems amplified harmful rhetoric against the Tigrayan community, while its content moderation systems failed to respond appropriately to such content.

Meta stated that it is working to enhance its capacity to tackle violating content in Ethiopian languages, including Amharic, Afaan Oromoo, Tigrinya, Somali, and Afar. Ethiopia, with a population of 113.6 million, is Africa's second most populous country; Amharic is the working language, although many other languages are spoken as well.