Facebook fails to flag violent hate speech in ads

Facebook has once again failed to detect violent hate speech in advertisements submitted to the platform by the nonprofit groups Global Witness and Foxglove. Global Witness ran a similar test in March, which the social media giant also failed.

In her 2021 congressional testimony, whistleblower Frances Haugen said Facebook's moderation methods were ineffective and that the platform was "literally fanning ethnic violence".

Global Witness created 12 text-based ads containing dehumanising hate speech that called for genocidal action against the three main ethnic groups in Ethiopia: the Amhara, the Oromo, and the Tigrayans. Facebook approved the ads for publication and wider circulation, though the non-profit did not actually post them after they were approved.

The non-profit said its campaigners picked out the worst cases of hate speech they could think of, the kind Facebook should be able to detect most easily. "The content in the ad wasn't coded or dog whistles"; the text explicitly said that a certain type of person was not human and should be starved to death.

The earlier test in March was based on the situation in Myanmar.

Global Witness informed Meta about the undetected violations. The parent company said the hateful content should not have been approved and pointed to the work it has done to catch such content. A week later, the non-profit repeated the exercise with two ads. Once again, Facebook approved them.

Rosa Curling, director of Foxglove, said the only possible explanation for the approval of the hateful content is that no one is moderating the ads, AP reported. The blatantly violent hate speech in the ads was written in Amharic, the most widely used language in Ethiopia.

In both cases, Global Witness received identical emailed statements from Meta. The Zuckerberg-led firm said it has invested heavily in safety measures in Ethiopia, adding more staff with local expertise and building its capacity to catch hateful and inflammatory content in the most widely spoken languages, including Amharic.

Meta has repeatedly refused to reveal how many content moderators it employs in non-English-speaking nations, including Ethiopia and Myanmar.
