Why Do Team-Authored Papers Get Cited More?

Science  14 Sep 2007:
Vol. 317, Issue 5844, pp. 1496b-1498b
DOI: 10.1126/science.317.5844.1496b

In their Report “The increasing dominance of teams in production of knowledge” (18 May, p. 1036), S. Wuchty et al. observe that papers with multiple authors receive more citations than solo-authored ones. They conclude that team-led research is of higher quality than solo-led research, but inadequate control of confounding (including confounding by publication type) leaves several alternative explanations plausible. The Institute for Scientific Information (ISI) Web of Science database includes not only original research but also editorials and letters to the editor (1). This kind of scientific literature is both more frequently authored by just one or two researchers and less frequently cited. Importantly, this alone could produce the observed relationship between citations and team size.

More importantly, there are several ways a larger group of authors can influence the number of citations of their common work, beyond the quality of the paper. A paper by n authors has n times as many proponents as a solo-authored one. This would include self-citations in other papers (as already observed in the study), citations in other kinds of scientific literature, and a larger number of research groups being familiar with the article. Moreover, scientific communication is not limited to journals. The longer the author list, the greater the probability that the paper will be presented at several conferences, especially if the team is multidisciplinary.

Linking organizational features of research with the quality of its output is of utmost importance, because it will eventually provide policy-makers and funding bodies with hard evidence for the prioritization of specific features of research proposals. We should therefore be extremely cautious when interpreting this kind of study.

References and Notes

  1.

In demonstrating the increasing dominance of teams in academic and patent publishing, Wuchty et al. use a circular argument regarding scientific progress, defining impact as “the number of citations each paper and patent receives.” Technically speaking, the number of citations reflects popularity, not necessarily quality.

In academic publishing, authors clearly copy the citations from other papers (1). The resulting frequency dependence in citation rate means that citations of a successful paper increase geometrically, with crucial dependence on initial conditions (2). An effective strategy, therefore, is quite similar to product marketing (3): Try to get noticed at the beginning and then hope the process will take over through frequency-dependent copying. Co-authoring with a well-known researcher clearly helps in this respect (4), but larger teams also have an inherent advantage in their ability to “seed” the process soon after publication through self-citation as well as citation by a larger network of colleagues.
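The frequency-dependent copying dynamic described here can be illustrated with a toy simulation. This is a hedged sketch, not Bentley's or Wuchty et al.'s actual model: the function name, the 90% copying rate, and the size of the early "seed" are illustrative assumptions.

```python
import random

def simulate_copy_citations(n_papers=100, n_citations=5000,
                            seed_paper=0, seed_boost=20, rng=None):
    """Toy model of frequency-dependent citation copying: each new
    citation either 'copies' an existing citation (weighted by current
    counts, i.e., preferential attachment) or picks a paper uniformly."""
    rng = rng or random.Random(42)
    counts = [1] * n_papers            # every paper starts with one citation
    counts[seed_paper] += seed_boost   # early 'seeding' via self-citation
    pool = [i for i, c in enumerate(counts) for _ in range(c)]
    for _ in range(n_citations):
        if rng.random() < 0.9:         # 90% of the time, copy an existing citation
            cited = rng.choice(pool)
        else:                          # otherwise, cite uniformly at random
            cited = rng.randrange(n_papers)
        counts[cited] += 1
        pool.append(cited)
    return counts
```

Under these assumptions, the seeded paper reliably ends up far above the average citation count despite being identical in "quality" to every other paper, mirroring the letter's point that early self-promotion can compound through copying.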

With copying underlying much of popular cultural change (5), the real question is, how does the number of citations relate to quality? One of the studies that Wuchty et al. cite even reports that “citations are not a reliable indicator of scientific contribution at the level of the individual article” (6). With pop music, for example, the opportunity to view (and copy) other people's choices leads to drift in the most downloaded songs (7), such that popularity and quality become decoupled. How can we assume academic citation is so different?

References and Notes

  1.
  2.
  3.
  4.
  5.
  6.
  7.

Wuchty et al. found that from 1955 to 2000, the relative citation rate for publications with multiple authors increased across a broad range of academic disciplines. The relative team impact (RTI) citation statistics presented in their Fig. 2, however, seem to be for entire teams. Dividing by mean team size shows that the relative per capita citation rate for teams, compared with solo authors, fell by over a third over this 45-year period in science and in social science. The only exception is arts and humanities, where teams are rare in any case. If citation rates measure performance, then on average, researchers still perform better when they work alone. The main payoff from joining a team is increased odds of a very heavily cited publication.
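Buckley's per-capita arithmetic can be sketched as follows. The numbers below are hypothetical placeholders, not values taken from Wuchty et al.'s Fig. 2; they are chosen only so that the "over a third" decline is easy to follow.

```python
def per_capita_rti(rti, mean_team_size):
    """Per-capita relative citation rate: the team-level RTI
    divided by the mean number of authors on a team paper."""
    return rti / mean_team_size

# Hypothetical illustration (NOT data from Wuchty et al.):
# suppose RTI rose from 1.7 to 2.1 while mean team size grew from 1.9 to 3.5.
early = per_capita_rti(1.7, 1.9)    # ~0.895 per-author citation advantage
late = per_capita_rti(2.1, 3.5)     # 0.600
decline = 1 - late / early          # ~0.33, i.e., roughly a third
```

The point of the sketch: if team size grows faster than team-level RTI, the per-author advantage falls even while the team-level advantage rises.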

Wuchty et al. examine the growth of collaborative research in a variety of scientific fields and how it has affected the quality of research. They found that research produced collaboratively is of higher quality, as measured by citations, than research reported in single-author articles. They argue that although “the increasing capital intensity of research may have been a key force in laboratory sciences where the growth in teamwork has been intensive… it is unlikely to explain similar patterns in mathematics, economics, and sociology, where we found that growth rates in team size have been nearly as large” (p. 1038). I offer an explanation for the increase in collaborative research in the social sciences (1): we are seeing more collaborative work there because there are selection pressures on those who do not collaborate. Given that collaborative research is generally of higher quality, and that careers in the sciences are profoundly affected by the quality of one's research, scientists who are unwilling or unable to collaborate are being weeded out at a higher rate than those willing and able to do so, even in the social sciences.


  1.


Building on our findings that (i) science has made a nearly universal shift toward teamwork and (ii) highly cited research is now more frequently produced by teams than by solo authors, the Letter writers raise questions regarding mechanisms and interpretations.

One question is whether citation rates reflect a paper's quality. Valderas and Bentley suggest that team-authored papers receive more citations than solo-authored papers because of a team advantage in self-promotion. Although citations gained are likely a function of both a paper's scientific contribution and its marketing, several reasons suggest that, on balance, self-promotion only modestly affects citation rates.

First, our paper presented analyses with self-citations included and with self-citations removed (always excluding the editorials and letters to the editor that concern Valderas). The results change little when self-citations are excluded, suggesting that the team citation advantage holds even without self-promotion. Second, a self-promotion argument does not explain the team citation advantage for patents, where citation decisions are primarily made by disinterested third-party experts (1). Third, we find that the team citation advantage over solo-authored papers is growing over time for teams of any fixed size, yet a self-promotion argument suggests a static team advantage, not an increasing one. Finally, Bentley cites Salganik et al. (2) as evidence that a “bad” song (i.e., by analogy, a weak paper) can be turned into a hit through false buzz, a process that could be replicated in scientific circles through self-promotion. However, Salganik et al. (2) demonstrate that this effect operates only on a song-by-song basis; when average effects are examined, average popularity and average quality are highly correlated. Our measures of average citations taken over large numbers of papers would therefore appear to be a reasonable measure of scientific influence.

More generally, we avoided the term “quality” and used the broader constructs of “impact” and “influence” to construe the meaning of a paper's citation rate. A paper that is high “quality” by some standard (functional contribution, breadth of application, timelessness, elegance, etc.) will typically have little impact if it is not cited.

Our analysis focuses on impact at the paper level. Buckley is interested in the impact of individual authors. He attempts to infer individual impact from our paper-level analysis, but this inference is not possible without knowing how much time each author contributes per paper. His implicit assumption is that a paper with N authors requires N times as much collective effort as a solo-authored paper. A more defensible assumption may be that multi-authored papers require less effort per person, which would explain the common observation that people who write in teams tend to write more papers. With higher rates of publication, team authorship may be associated not just with more citations, but with more citations per unit of author time. Nevertheless, assessing the impact of individual authors requires data on time inputs, an important direction for future work.
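The per-unit-time argument can be made concrete with a small sketch. All numbers below are hypothetical illustrations (nothing here is from Wuchty et al.'s data); the point is only the arithmetic of dividing citations by total author time.

```python
def citations_per_author_hour(citations, n_authors, hours_per_author):
    """Citations earned per hour of any one author's input,
    assuming each author contributes the same number of hours."""
    return citations / (n_authors * hours_per_author)

# Hypothetical numbers for illustration only:
solo = citations_per_author_hour(10, 1, 100)   # 10 / 100 hours = 0.1000
# If each of 4 team members spends only 40 hours (less effort per person),
team = citations_per_author_hour(18, 4, 40)    # 18 / 160 hours = 0.1125
```

Under these assumptions the team paper yields more citations per author-hour than the solo paper, even though it earns fewer than four times the solo paper's citations: the comparison Buckley wants cannot be made from paper-level counts without such time-input data.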

Wray provides a possible interpretation for why scientists work in teams. As we noted in our paper, there are many possible mechanisms behind the universal structural shift toward teams in science, and we look forward to future work that assesses and disentangles potential causes.


  1.
  2.
