China’s DeepSeek AI accused of censorship over Uyghur genocide
Photo: Reuters
The rapid rise of DeepSeek, a Chinese AI chatbot, has sparked global concerns over security, censorship, and human rights violations.
The AI model, developed by a Chinese start-up, has gained significant traction for its low-cost accessibility, but its growing influence has drawn scrutiny from international regulators.
Several countries, including Italy and Australia, have already banned DeepSeek from government use over security risks. Privacy regulators in Ireland, France, Belgium, and the Netherlands have also raised alarms about the chatbot's data collection practices. Beyond privacy concerns, however, the AI's handling of sensitive political topics, particularly the alleged genocide of Uyghurs in Xinjiang, has triggered outrage.
Human rights activist Rahima Mahmut, who fled China in 2000, accused the chatbot of deliberately distorting facts to align with Beijing's narrative. She expressed deep concern over DeepSeek's response when asked about the Uyghur crisis: instead of acknowledging reports of mass internment and repression, the AI dismissed the genocide claims as "severe slander" and insisted that human rights criticisms were an attempt to interfere in China's internal affairs.
For Mahmut, these statements were deeply personal. She has not heard from her family in eight years and later learned that her brother had been detained in a mass internment camp for two years. She believes DeepSeek is part of a broader effort by the Chinese government to suppress information and erase the history of the Uyghur people.
Despite marketing itself as a "world-leading AI assistant" that provides "helpful and harmless responses," DeepSeek's censorship of politically sensitive topics has raised serious ethical concerns.
As the chatbot continues to amass millions of downloads globally, calls for greater transparency and accountability in AI regulation are growing louder.