Caixin

Reporter’s Notebook: How Millions of Douyin Users Decide What Stays and What Disappears

Published: Mar. 20, 2025  2:48 a.m.  GMT+8
ByteDance Ltd.’s Douyin offices in Beijing. Photo: Bloomberg

A week ago, while scrolling through Douyin — China’s version of TikTok — I flagged a video I suspected of spreading false information. Instead of a simple report submission, however, I stumbled upon an option inviting users to participate in community moderation, an experiment in user-driven governance.

Curious, I clicked through. A quick identity verification, a multiple-choice test on platform policies, and I was in. Just like that, I had become a Douyin “community reviewer.”

DIGEST HUB
Explore the story in 30 seconds
  • Douyin runs a community moderation program involving 5.96 million users who vote on whether videos should be recommended, though it struggles with ambiguous cases and misinformation.
  • The effectiveness of this approach is open to question, as reviewers are often uncertain about a video's credibility and the platform offers little guidance on verifying accuracy.
  • Platforms take little responsibility for managing misinformation, underscoring the need for better media literacy and raising questions about the roles of algorithms and user participation in content governance.
AI generated, for reference only
Explore the story in 3 minutes

[para. 1] Douyin — China's version of TikTok — recently introduced a feature that lets users take part in community moderation, a form of user-driven governance. It invites users to become "community reviewers" who help assess content suspected of spreading false information.

[para. 2][para. 3] After passing identity verification and a policy test, users like the author become community reviewers and are assigned 20 short videos a day. Their task is to decide whether these videos should be recommended to others. Currently, around 5.96 million people are participating in this program, influencing what content appears prominently on the platform.

[para. 4][para. 5] The process is not as straightforward as it seems: many videos fall into a gray area, neither clearly violating the rules nor entirely harmless. A common genre is the "follow-along" video, which mimics a trending joke or dance and is often flagged for no obvious reason.

[para. 6] Reviewers receive feedback in the form of a red-and-blue graph showing how others have voted. Patterns emerge: reviewers often reject advertisements and are quick to oppose content that appears crude or panic-inducing. Often, the real decision at hand is whether a video should be taken down entirely.

[para. 7][para. 8][para. 9] Reviewing can be difficult, especially with videos asserting unverified historical claims or unlikely events. One video, for example, described a bizarre historical episode involving Emperor Zhu Di; 75% of reviewers opposed it because the claim could not be verified. Others, such as a video claiming North Korean artillery had destroyed tanks, were approved by some reviewers, showing how uneven reviewers' judgment can be.

[para. 10][para. 11] Unverified claims about current affairs often get the nod from reviewers, justified as raising awareness or aiding rights protection. The pattern extends to serious public issues, where the trust once placed in news organizations does not carry over to social media. Users sometimes approve content based on how compelling the narrative is rather than on factual accuracy.

[para. 12][para. 13] Given the opaque nature of recommendation algorithms, misinformation easily proliferates on social media platforms. This creates a fertile ground where narratives, even when based on unverifiable or fabricated facts, can influence public opinion. The problem is exacerbated when the platforms do not effectively filter misinformation or enhance users' media literacy.

[para. 14][para. 15] One sign of the inadequacy of current practices is that platforms often intervene only when misinformation poses significant risks, such as a public relations crisis or financial loss. At the same time, they engage millions of users in the illusion of participatory governance without genuinely addressing misinformation.

[para. 16][para. 17] Algorithms cannot discern opinion or meaning in real human context, so authenticity becomes a secondary concern in machine moderation. Verification is work better suited to professional journalists than to user reviewers, since the community's verdict may neither reflect professional judgment nor resolve the underlying problem.

[para. 18][para. 19] While platforms focus on engagement, they do little to foster media literacy, leaving users to navigate a minefield of misinformation. Government mandates drive some initiatives, such as screen-break reminders, but little effort goes into encouraging users to critically scrutinize content for accuracy.

[para. 20][para. 21] AI models such as DeepSeek compound the problem: the information they draw on is often itself unverifiable, and they are prone to hallucination — generating content that seems real but is made up. The author argues that media literacy should be a shared responsibility of platforms and users, to mitigate the growing impact of unverifiable noise on the digital information ecosystem.

AI generated, for reference only