Study exposes Chinese censors' deepest fears


Science  22 Aug 2014:
Vol. 345, Issue 6199, pp. 859-860
DOI: 10.1126/science.345.6199.859

Police corral a man in Shanghai in February 2011 in response to Internet exhortations for a “Jasmine Revolution.”


Behind China's vaunted system of Internet censorship are throngs of specialized police officers, fake commentators, and ever-changing technologies. But China watchers have puzzled over the system's modus operandi. Some posts are swiftly culled, whereas others on seemingly more sensitive topics are left untouched. In the most revealing study yet of Chinese censorship, on page 891 researchers at Harvard University describe how they peered behind the curtain to find out what China's censors—and presumably the government officials operating behind the scenes—fear most: discussion of mass protests and other forms of collective action.

The unprecedented participatory experiment on China's blogs, microblogs, and forums “probes much deeper than earlier studies,” says Noah Smith, a computer scientist at Carnegie Mellon University in Pittsburgh, Pennsylvania, who was not involved with the paper. The study also challenges the conventional wisdom that the censorship apparatus is designed to squelch criticism of the Communist Party or its leaders.

China's Internet censorship is thought to be the most pervasive in the world. Online forums employ in-house censors whom users playfully dub “big mamas,” while so-called “50-cent party members” post progovernment comments on social media sites. (The moniker stems from the fee they supposedly receive for each post.) And according to state media, a veritable army of 2 million government employees monitors microblogs and drafts reports on blog posts for leaders.

Those soldiers are not always marching to the same tune. Lists of censored keywords—terms that automatically get a post deleted or flagged for review—vary from location to location and from one website to the next. Complicating the task for researchers, social media in China is highly decentralized. Whereas Twitter, Facebook, and a few others dominate in the United States, in China users split their attention among hundreds of websites, some of which are tied to local areas and don't allow outsiders to post. “The logistics of studying that are complicated,” says political scientist Gary King of the Institute for Quantitative Social Science at Harvard University.

When King and Ph.D. students Jennifer Pan and Margaret Roberts began examining censorship in China in 2011, many scholars assumed that calling for policy changes, criticizing government leaders, and raising sensitive topics like the Tiananmen Square crackdown in 1989 were verboten. To test that assumption, the trio downloaded millions of social media posts from more than 1300 sites between January and July 2011, then selected roughly 127,000 of them to examine in more detail. Hoping that an analysis of which posts were deleted and which were allowed to stand might offer a “bird's-eye view of [a] very heterogeneous process,” King says, they watched in real time as posts were taken down. Censorship in China, King says, is “like an elephant tiptoeing around. It leaves big footprints.”

In most cases, censors reacted swiftly, deleting messages within a day of posting. They also seemed to follow a surprising logic. The researchers found that posts on topics they themselves classified as highly sensitive were only slightly more likely than average to be deleted—24% of posts, versus 13% overall. That was “completely unexpected,” King says. They next looked at bursts of posts following significant events. During events with potential for collective action, the vast majority of posts were censored—regardless of whether they supported or criticized the state.

That study, published in American Political Science Review in May 2013, was blind to posts that never went online in the first place because they were snagged in an automated censorship filter. To truly understand what is censored in China, King and colleagues realized, they would need to write their own posts. And that meant creating the first randomized experimental study on censorship in China.

Over three 1- to 2-week periods last year, the researchers oversaw assistants in China and the United States who opened 200 user accounts at 100 sites and then authored 1200 unique posts. Some commented on events involving collective action, such as volatile demonstrations over government land grabs in Fujian province. Others responded to events involving no collective action, like a corruption investigation of a provincial vice governor. For each event, the assistants authored both pro- and antigovernment posts.

Within the same period, the researchers found, posts advocating collective action were between 20% and 40% more likely to be censored than were posts not advocating it. Posts critical of the government, on the other hand, were not significantly more likely to be censored than supportive posts—even when they called out leaders by name. “Criticisms of the state are quite useful for the government in identifying public sentiment, whereas the spread of collective action is potentially very damaging,” Roberts explains. China's so-called Jasmine Revolution, in which protesters inspired by the Arab Spring in early 2011 called for democracy, is a case in point. Many sites promptly blocked words like “jasmine” and “Egypt.”

To trawl for insights into how the censorship machine works, the researchers set up a bulletin board system—a popular type of community forum in China—and asked for advice about “how to stay out of trouble with the Chinese government,” King says. Staff at the software platform they used cheerfully answered questions posed online and by phone. (The trio “didn't advertise that we were Harvard researchers,” King says, and no one asked their identities.) One piece of advice was to beef up the staff who watch for troublesome posts: Even when a site uses automated review, King says, most posts are still vetted by a pair of eyeballs.

The Harvard team didn't demystify the process entirely. Yu Xie, a sociologist at the University of Michigan, Ann Arbor, says that although the study is methodologically sound, it overemphasizes the importance of coherent central government motives. Political outcomes in China, he notes, often rest on local officials, who are evaluated on how well they maintain stability. Such officials have a “personal interest in suppressing content that could lead to social movements,” Xie says.

One factor that complicates any study of censorship in China is a rapidly shifting social media landscape. Local governments now frequently insert themselves into conversations as well as control them. And the study omitted WeChat, a burgeoning social network set up by Chinese Internet giant Tencent that combines features from Facebook and the mobile application WhatsApp. Earlier this year, the app started censoring keywords and in some cases deleting entire accounts. That, King says, may be fodder for future studies.

Reported from Shanghai, China.
