Analysis: China Grapples With Its Own Epidemic of Deepfake Porn and Fraud
Elon Musk’s artificial intelligence (AI) firm xAI is drawing growing scrutiny from governments around the world after its chatbot Grok let users create and share sexually explicit AI-generated images of children and women on Musk’s social media platform X.
The controversy has raised alarm bells with regulators in China, where the rapid development of generative AI is arming lawbreakers with powerful, low-cost tools to generate nonconsensual deepfake images for illicit gain.
- DIGEST HUB
- xAI’s chatbot Grok enabled users to create and share sexually explicit AI-generated images, drawing global regulatory scrutiny and concern about deepfake abuses, especially in China.
- China had 515 million generative AI users by June 2025; misuse includes telecom fraud, celebrity deepfakes, scam ads, and rampant AI-generated pornography and face swapping.
- Platforms like Douyin removed over 380,000 explicit videos in 2025; China is considering stronger legislation, while the US passed the Take It Down Act targeting nonconsensual AI deepfakes.
Elon Musk’s AI company, xAI, has come under increasing global scrutiny following an incident where its AI chatbot, Grok, enabled users to create and share sexually explicit, AI-generated pictures of children and women on the social media platform X. This controversy has alarmed regulators, especially in China, where authorities have become concerned about the rapid advancement of generative AI technologies enabling lawbreakers to easily and cheaply create nonconsensual deepfake images for illegal purposes[para. 1].
The misuse of generative AI has become rampant in China. According to government data, as of June 2025, the number of generative AI users in China reached 515 million, representing an increase of 266 million since December 2024. Young people are particularly active users of such technology in the country[para. 2][para. 3]. One high-profile case involved Hong Kong actor Louis Koo, whose voice and likeness were synthetically used in a fraudulent promotional video for an online gambling platform. His management was compelled to issue a statement denying his involvement, drawing attention to the risks of deepfake technology in fraud[para. 4].
Cybersecurity experts note that deepfake technology is widely used in emerging forms of telecom fraud: AI-generated videos are increasingly realistic and can mimic nuanced human behaviors, making them very difficult to detect with the naked eye. Deepfake pornography, especially content targeting minors, is also a growing problem: a survey by the Internet Society of China found that more than 40% of respondents had been harassed by such content[para. 5][para. 6].
While the basic strategies of scams have remained consistent, they have shifted from online forums to short-video platforms, where much of the illegal content is now created with AI. For example, scammers generate fake video ads for legitimate financial products and upload them to video platforms, which then direct individuals to fraudulent lending schemes. Romance scams, known as “pig-butchering,” often use AI-generated adult content and celebrity face-swapping to attract victims[para. 7][para. 8][para. 9].
AI has also enabled a gray market in which fraudsters defeat facial recognition, open fraudulent accounts, obtain loans, and impersonate people for blackmail. Domestic generative AI models in China are usually equipped with compliance mechanisms that add content labels flagging AI-generated material. However, these safeguards mainly apply to major platforms with large user bases; individual developers using overseas open-source models can evade such requirements, complicating law enforcement efforts[para. 10][para. 11][para. 12][para. 13].
In September, Chinese authorities highlighted a case where a Zhejiang-based company's app was removed from stores after failing cybersecurity assessments for its deepfake upload service[para. 14]. Since mid-2025, major Chinese video platforms like Douyin, Kuaishou, and Bilibili have reported increases in illegal AI-generated content. Douyin alone removed over 380,000 videos in 2025 for violating pornography rules, with other platforms enacting similar cleanups[para. 15][para. 16].
AI-generated fake news exacerbates public anxiety and spreads quickly. Police have detained individuals who used AI to fabricate sensational fake news, such as a video of a deadly tunnel collapse, solely to attract online traffic[para. 17]. Because AI lowers barriers for such crimes, risk control specialists advocate for proactive detection systems to preempt illegal activity. Developing AI capable of identifying fake product ads or recognizing materials used in illegal AI videos is seen as a vital response[para. 18][para. 19].
Legislation is another front: in May 2025, U.S. President Donald Trump signed the Take It Down Act, which criminalizes the nonconsensual publication of intimate images, including AI-generated deepfakes. Offenders can face up to three years in prison, a move inspired in part by the viral spread of obscene AI-generated images of pop star Taylor Swift on social platform X a year earlier. China may consider similar legal frameworks to combat AI-enabled crimes[para. 20][para. 21][para. 22].
- xAI
- xAI, Elon Musk's AI firm, faces increasing government scrutiny. Its chatbot Grok allowed users to create and share sexually explicit AI-generated images of children and women on X. This has raised alarms with regulators, especially in China, where generative AI misuse for nonconsensual deepfakes is rampant.
- Douyin
- Douyin, the Chinese version of TikTok, has actively engaged in extensive clean-ups of accounts that have posted sexually suggestive AI-generated images to attract followers. In 2025, the platform removed over 380,000 videos due to violations of its obscenity and pornography rules. This highlights Douyin's efforts to combat illegal AI-generated content.
- Kuaishou
- Kuaishou is one of China's major video platforms. Along with Douyin and Bilibili, it has become a "dumping ground" for illegal AI-generated content. Kuaishou has taken action to combat this, similar to other platforms, by cleaning up accounts that post sexually suggestive AI-generated images to gain followers.
- Bilibili
- Bilibili, a major Chinese video platform, is actively combating illegal AI-generated content. Since mid-2025, Bilibili, along with Douyin and Kuaishou, has been a "dumping ground" for such material. The platform has taken action against videos and accounts violating its rules, including those posting sexually suggestive AI-generated images.
- Early 2024:
- Obscene AI-generated images of U.S. pop star Taylor Swift went viral on social media platform X, triggering public outcry.
- December 2024:
- China's generative AI user base stood at 249 million, the baseline against which the June 2025 figure of 515 million is compared.
- May 2025:
- U.S. President Donald Trump signed the Take It Down Act into law, primarily targeting nonconsensual publication of intimate images, including AI-generated deepfakes.
- As of June 2025:
- The number of generative AI users in China reached 515 million, an increase of 266 million from December 2024.
- In 2025:
- Louis Koo became a target of fraudsters who used deepfake technology for a promotional video, prompting his management firm to deny involvement.
- In 2025:
- A survey by the Internet Society of China showed that more than 40% of respondents were harassed by deepfake pornography.
- In 2025:
- Douyin removed more than 380,000 videos that violated rules on obscene and pornographic content.
- Since mid-2025:
- AI has become a frequent topic in content governance reports from platforms Douyin, Kuaishou, and Bilibili.
- September 2025:
- The Cyberspace Administration of China publicized a law enforcement case in which a Zhejiang-based tech firm's app was removed from app stores for failing a cybersecurity assessment.
- December 2025:
- Police in Jincheng, Shanxi province, detained a person for using AI to fabricate a video of a deadly tunnel collapse to boost personal account traffic.