London: Despite Meta's claims that it has increased efforts to remove extremist content, a new investigative report on Monday revealed that the company's Facebook platform has failed to thwart such content.
The report revealed that photos of beheadings, extremist propaganda and violent hate speech related to Islamic State and the Taliban were shared for months in Facebook groups over the past year.
According to the report by the Institute for Strategic Dialogue, a think tank that tracks online extremism, extremists have weaponised the social media platform 'to promote their hate-filled agenda and rally supporters' in hundreds of groups.
The groups were discovered by Moustafa Ayad, an executive director at the institute.
"It's just too easy for me to find this stuff online," he said. "What happens in real life happens in the Facebook world.
"It's essentially trolling — it annoys the group members and similarly gets someone in moderation to take note, but the groups often don't get taken down. That's what happens when there's a lack of content moderation," Ayad added.
These groups have popped up across Facebook over the past 18 months. Some of the posts were tagged as "insightful" and "engaging" by a new Facebook tool released in November.
The report's findings were shared with Politico, which notified Meta about the presence of these groups. Meta subsequently removed the Facebook groups promoting Islamic extremist content.
"We have removed the Groups brought to our attention," a Meta spokesperson said. "We don't allow terrorists on our platform and remove content that praises, represents or supports them whenever we find it.
"We know that our enforcement isn't always perfect, which is why we are continuing to invest in people and technology to remove this type of activity faster, and to work with experts in terrorism, violent extremism and cyber intelligence to disrupt misuse of our platform," the statement concluded.
In October, documents leaked by whistleblower Frances Haugen revealed that Facebook's automated systems, designed to identify hate speech and extremist content, struggle with non-English languages.
The documents show that in some of the world's most volatile regions, terrorist content and hate speech proliferate because the company remains short on moderators who speak local languages and understand cultural contexts.