Meta, the parent company of social media giants Facebook and Instagram, has reportedly approved advertisements containing discriminatory and hateful content that incites violence against Muslims, particularly during the ongoing Lok Sabha elections.

A report released by the non-sectarian diasporic organization India Civil Watch International (ICWI) and the corporate watchdog Ekō indicates that a series of adverts containing well-known hateful slurs targeting the Muslim community was approved midway through the election process.

The ads, submitted to Meta’s ad library, contained slurs towards Muslims in India, such as “let’s burn this vermin” and “Hindu blood is spilling, these invaders must be burned.”

Some adverts even called for the execution of opposition leaders, falsely accusing them of wanting to eradicate Hindus from India. ICWI and Ekō submitted the adverts to test Meta's ability to detect and block harmful political content, yet the platform approved them.

The timing of these approvals, during a highly contentious election where Prime Minister Narendra Modi's Hindu nationalist BJP sought re-election, has raised significant concerns.

Critics argue that Modi's government has perpetuated a Hindu-first agenda, leading to increased persecution of India's Muslim minority. The BJP has been accused of using anti-Muslim rhetoric to sway voters, further exacerbating religious tensions in the country.

Last month, BOOM reported on a surge of surrogate ads targeting BJP opponents, with spending totaling ₹3.7 crore in March and rising to over ₹4.4 crore in April. These ads increasingly featured Islamophobic content. Following BOOM's report, Meta banned MemeXpress, a page that had spent ₹1.1 crore on Facebook ads in March.

However, a new page, Meme Hub, soon emerged, spending ₹72 lakh on ads in April despite having minimal followers. Karnataka Police issued a notice to the official BJP Karnataka handle over an inflammatory post, which was removed for violating the Representation of the People Act. The BJP's official Instagram account also posted an animated video portraying Muslims as invaders and insinuating that Congress favors them; the video was later deleted.

Ads on Meme Hub and other surrogate pages consistently depicted Congress as anti-Hindu and pro-Muslim, echoing the BJP's own narrative, including claims made by top leaders such as Prime Minister Narendra Modi. Hindu-Muslim polarization and Islamophobia were recurring themes, with ads vilifying BJP opponents as harmful to Hindus and favorable to Muslims, often through Islamophobic tropes and outright hate content.

Despite Meta's pledge to prevent the spread of AI-generated or manipulated content during the Indian election, the report indicates that the company's systems failed to detect the manipulated content in the approved adverts. While some adverts were rejected for violating community standards, others containing similar inflammatory content targeting Muslims were greenlit.

Maen Hammad, a campaigner at Ekō, lambasted Meta for profiting from hate speech, accusing the company of enabling supremacists and racists to spread vile rhetoric unchecked. The report also highlights Meta's failure to recognize the political nature of the adverts, allowing them to circumvent India's election rules banning political advertising in the 48 hours before polling begins.

In response to the allegations, a Meta spokesperson emphasized that advertisers are required to comply with all applicable laws and the company's community standards. However, critics argue that Meta's mechanisms for detecting and preventing hate speech and disinformation remain inadequate, particularly during critical elections.

This isn't the first time Meta has faced criticism for its handling of political content in India. Previous reports have accused the platform of allowing Islamophobic hate speech and calls to violence to spread, content that has been linked to real-life riots and lynchings.

The report's findings suggest that Meta's efforts fall short of addressing the rampant spread of hate speech and disinformation during pivotal elections.

The revelations have sparked renewed scrutiny of Meta's content moderation policies and raised questions about its ability to safeguard democratic processes worldwide. With elections being pivotal moments for shaping the future of nations, the role of social media platforms in facilitating informed discourse without inciting violence or spreading falsehoods remains a pressing concern.
