YouTube profits from platform promoting anti-Muslim hate, violating its own guidelines
YouTube is reportedly profiting from providing a platform that spreads and incites communal hatred, particularly targeting Muslims, through its Super Chat feature. This feature enables users to pay for highlighted messages during live streams, where individuals promoting Hindutva ideology often share divisive and inflammatory content, raising questions about YouTube's ability to moderate such interactions effectively and its role in monetising harmful speech.
The issue came to light with a live video by Ajeet Bharti, a popular creator with significant influence in Hindutva circles, who alleged a Muslim conspiracy involving “love jihad,” a term promoted by some groups to describe a purported campaign by Muslim men to convert Hindu women through marriage, according to a report in The Wire.
During the stream, a viewer paid for a Super Chat to inquire about forming a violent group to counter this perceived threat. This comment violated YouTube’s guidelines on violent and dangerous content, yet it remained on the platform despite being reported multiple times.
Such Super Chats are also lucrative: Bharti reportedly earned Rs 2,100 in one stream and Rs 14,000 in another for similar comments. YouTube’s revenue-sharing model allows creators to keep 70% of Super Chat earnings, with the remaining 30% going to the platform, fuelling concerns that hate speech and divisive content are generating profit for both creators and YouTube.
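The arithmetic behind the 70/30 split reported above can be sketched as follows; this is an illustrative calculation only, and the helper name is hypothetical rather than any actual YouTube API.

```python
def super_chat_split(amount_rs: int) -> tuple[int, int]:
    """Return (creator_share, platform_share) in rupees for a Super Chat,
    assuming the 70/30 revenue split described in the article."""
    creator = amount_rs * 70 // 100
    platform = amount_rs - creator
    return creator, platform

# The two streams mentioned in the report:
for amount in (2100, 14000):
    creator, platform = super_chat_split(amount)
    print(f"Rs {amount}: creator keeps Rs {creator}, YouTube keeps Rs {platform}")
# → Rs 2100: creator keeps Rs 1470, YouTube keeps Rs 630
# → Rs 14000: creator keeps Rs 9800, YouTube keeps Rs 4200
```

On these figures alone, the platform's cut across the two streams would come to Rs 4,830.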
Research by the Reuters Institute highlights that nearly half of India’s population relies on social media for news, with 54% using YouTube as their primary source. With this extensive reach, the spread of hate speech and misinformation on the platform carries significant implications for social cohesion and public discourse.
YouTube’s policies theoretically prohibit hateful or violent content, including inflammatory Super Chats, and specify that funds from such messages will be donated to charity. However, the platform has not provided transparency about how these donations are made or what constitutes inappropriate content.
Instances of hate-filled content being left unchecked are not limited to Bharti’s broadcasts. The platform has allowed other channels, such as Sudarshan TV, to air divisive content targeting minorities, despite repeated reports and complaints from users. This content has attracted advertisements from prominent brands, indirectly allowing YouTube to monetise controversial and hateful narratives.
The channel amassed millions of views during India’s 2019 national elections, spreading messages widely perceived as anti-Muslim. YouTube’s moderation approach in these cases, or lack thereof, reflects a larger trend: the platform appears unable or unwilling to implement robust moderation strategies for Super Chats and comments during live broadcasts.
While YouTube claims its automated systems detect most inappropriate comments, its own Transparency Report reveals that hate-filled or offensive comments constitute a mere 1.7% of removals, compared to 83.9% for spam or misleading content.
The report also indicates that YouTube removed over 26 lakh videos from India for guideline violations in the first quarter of 2024, placing India among the countries with the most removals. Yet the report does not specifically address Super Chats, leaving a significant gap in transparency about how YouTube handles these high-profile interactions.
The situation raises questions about YouTube's priorities and the effectiveness of its content moderation policies. While the company asserts it provides tools for users and creators to manage live chat content, reports of hate-filled Super Chats persisting on the platform suggest a lack of proactive enforcement. Critics argue that YouTube’s financial incentives may contribute to its apparent leniency, as hate and polarisation drive engagement and profit.