Censorship has a long history in China, extending from the efforts of Emperor Qin to burn Confucian texts in the third century BCE to the control of traditional broadcast media under Communist Party rule. However, with the rise of the Internet and new media platforms, more than 1.3 billion people can now broadcast their individual views, making information far more diffuse and considerably harder to control. In response, the government has built a massive social media censorship organization, the result of which constitutes the largest selective suppression of human communication in the recorded history of any country. We show that this large system, designed to suppress information, paradoxically leaves large footprints and so reveals a great deal about itself and the intentions of the government.
Chinese censorship of individual social media posts occurs at two levels: (i) Many tens of thousands of censors, working inside Chinese social media firms and government at several levels, read individual social media posts, and decide which ones to take down. (ii) They also read social media submissions that are prevented from being posted by automated keyword filters, and decide which ones to publish.
To study the first level, we devised an observational study: we downloaded published Chinese social media posts before the government could censor them, then revisited each one from a worldwide network of computers to see which were censored. To study the second level, we conducted the first large-scale experimental study of censorship by creating accounts on numerous social media sites throughout China, submitting texts with different randomly assigned content to each, and detecting from a worldwide network of computers which ones were censored.
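The revisit step in the observational study amounts to repeatedly re-fetching each post and classifying the outcome. The sketch below illustrates that idea only; it is not the authors' actual pipeline, and the "deleted" marker phrases and helper names are hypothetical placeholders for whatever notices a given platform displays when a post is removed.

```python
# Minimal sketch of revisit-based censorship detection, assuming removal is
# visible either as an HTTP 404 or as a site-specific "deleted" notice.
# Marker strings and function names are hypothetical, not from the study.
import urllib.request
import urllib.error

# Hypothetical phrases a platform might show in place of a removed post.
CENSORED_MARKERS = ("post does not exist", "content has been deleted")

def classify(status_code: int, body: str) -> str:
    """Classify one re-fetch of a previously published post."""
    if status_code == 404:
        return "removed"       # post URL no longer resolves
    if status_code != 200:
        return "unreachable"   # inconclusive; retry later
    lowered = body.lower()
    return "removed" if any(m in lowered for m in CENSORED_MARKERS) else "live"

def check_post(url: str, timeout: float = 10.0) -> str:
    """Re-fetch a post URL and classify the outcome."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return classify(resp.status, resp.read().decode("utf-8", errors="replace"))
    except urllib.error.HTTPError as err:
        return classify(err.code, "")
    except (urllib.error.URLError, OSError):
        return "unreachable"
```

Running such a check periodically, and from geographically distributed machines, distinguishes posts that were taken down after publication from those that merely became temporarily unreachable.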
To find out the details of how the system works, we supplemented the typical current approach (conducting uncertain and potentially unsafe confidential interviews with insiders) with a participant observation study, in which we set up our own social media site in China. While attempting not to alter the system we were studying, we purchased a URL, rented server space, contracted with Chinese firms to acquire the same software as used by existing social media sites, and—with direct access to their software, documentation, and even customer service help desk support—reverse-engineered how it all works.
Criticisms of the state, its leaders, and their policies are routinely published, whereas posts with collective action potential are much more likely to be censored—regardless of whether they are for or against the state (two concepts not previously distinguished in the literature). Chinese people can write the most vitriolic blog posts about even the top Chinese leaders without fear of censorship, but if they write in support of or opposition to an ongoing protest—or even about a rally in favor of a popular policy or leader—they will be censored.
We clarify the internal mechanisms of the Chinese censorship apparatus and show how changes in censorship behavior reveal government intent by presaging its actions on the ground. That is, it appears that criticism on the web, which was thought to be censored, is used by Chinese leaders to determine which officials are not doing their job of mollifying the people and need to be replaced.
Censorship in China is used to muzzle those outside government who attempt to spur the creation of crowds for any reason—in opposition to, in support of, or unrelated to the government. The government allows the Chinese people to say whatever they like about the state, its leaders, or their policies, because talk about any subject unconnected to collective action is not censored. The value that Chinese leaders find in allowing and then measuring criticism by hundreds of millions of Chinese people creates actionable information for them and, as a result, also for academic scholars and public policy analysts.
Censorship of social media in China
Figuring out how many and which social media comments are censored by governments is difficult because those comments, by definition, cannot be read. King et al. have posted comments to social media sites in China and then waited to see which of these never appeared, which appeared and were then removed, and which appeared and survived. About 40% of their submissions were reviewed by an army of censors, and more than half of these never appeared. By varying the content of posts across topics, they conclude that any mention of collective action is selectively suppressed.
Science, this issue 10.1126/science.1251722
Existing research on the extensive Chinese censorship organization uses observational methods with well-known limitations. We conducted the first large-scale experimental study of censorship by creating accounts on numerous social media sites, randomly submitting different texts, and observing from a worldwide network of computers which texts were censored and which were not. We also supplemented interviews with confidential sources by creating our own social media site, contracting with Chinese firms to install the same censoring technologies as existing sites, and—with their software, documentation, and even customer support—reverse-engineering how it all works. Our results offer rigorous support for the recent hypothesis that criticisms of the state, its leaders, and their policies are published, whereas posts about real-world events with collective action potential are censored.