A recent analysis has found 1,326 AI-generated images and videos circulating in India that spread Islamophobic narratives. The findings indicate how generative artificial intelligence is increasingly being weaponised to produce visual hate content against Muslims, amplifying communal hostility online, deepening social divisions, and raising concerns about the growing nexus between technology and hate propaganda.

The findings were part of a report titled AI-Generated Hate in India, released by the Centre for the Study of Online Hate (CSOH), which examined anti-Muslim visual content produced through generative AI between May 2023 and May 2025. The study reviewed material shared by 297 public accounts across X (formerly Twitter), Facebook, and Instagram, identifying coordinated patterns in how such AI-created content was used to reinforce existing stereotypes and incite prejudice.

These synthetic visuals—often sexualising Muslim women, dehumanising entire communities, or glorifying violence—have collectively drawn over 27 million engagements across social media platforms, embedding prejudice deep into the digital ecosystem. The impact of this AI-driven hate has been profound, normalising bigotry, fuelling misinformation, and escalating online and offline hostility towards Muslims, while corroding social cohesion and undermining democratic values.

The report marks the first comprehensive investigation into how generative AI tools such as Midjourney, Stable Diffusion, and DALL·E are being harnessed to produce synthetic imagery that vilifies Muslims. It documents how “AI-generated images that dehumanise, sexualise, criminalise, or incite violence against Muslims” are being used as potent instruments of propaganda and manipulation.

While the global use of AI-generated imagery exploded after mid-2022, the report notes that its weaponisation for hate in India’s volatile political and digital landscape remains “critically underexplored.” It warns that “the widespread adoption of generative Artificial Intelligence in India may well result in an explosion of such content with grave implications for religious minorities,” adding that “a relatively small number of accounts have created and amplified a significant volume of hateful speech in a limited time period.”

The phenomenon is not limited to India. Across Europe, far-right parties have increasingly exploited text-to-image generation to stoke xenophobic fears. After the 2024 Southport stabbings in the United Kingdom, AI-generated visuals were circulated to spread Islamophobic disinformation about the attacker’s identity.

India, however, presents what the report describes as “a particularly worrisome case.” It observes that anti-Muslim sentiment has deepened over the past decade, with digital tools increasingly deployed to normalise hatred through visual culture. “The proliferation of hateful AI-generated content threatens to further colonise the Indian information sphere,” the report states, “which is already marked by rampant misinformation, anti-minority bias, and a severe crisis of credibility.”

The dataset analysed for the study comprised 297 public accounts: 146 on X, 92 on Instagram, and 59 on Facebook. Together, these accounts produced 1,326 posts featuring AI-generated visuals with explicitly hateful content. Cumulative engagement across platforms reached 27.3 million, with Instagram leading at 1.8 million interactions, followed by X with 772,400 and Facebook with 143,200.

The report identifies four dominant themes within the hateful imagery: the sexualisation of Muslim women, exclusionary and dehumanising rhetoric, conspiratorial narratives, and the aestheticisation of violence. Sexualised depictions of Muslim women drew the highest engagement—over 6.7 million interactions—demonstrating, as the report notes, that “the gendered character of much Islamophobic propaganda fuses misogyny with anti-Muslim hate.”

Conspiracy theories such as “Love Jihad,” “Population Jihad,” and “Rail Jihad” were visually reinforced through AI-generated content portraying Muslims as demographic or national threats. In other instances, Muslims were depicted as snakes wearing skullcaps, a “dehumanising metaphor that frames them as deceptive, dangerous, and deserving of elimination.” Meanwhile, stylised or animated AI aesthetics—such as Studio Ghibli–style imagery—made violent content appear palatable and even humorous, extending its reach among younger audiences.

The report names OpIndia, Sudarshan News, and Panchjanya among Hindu nationalist media outlets that played “a central role in producing and amplifying synthetic hate,” embedding Islamophobic narratives into the mainstream. Despite being reported for violations, “none of the 187 posts flagged for community guideline breaches were removed,” revealing the failure of major platforms to enforce their own moderation policies.
