Policy Forum

The Scientific Wealth of Nations


Science  07 Feb 1997:
Vol. 275, Issue 5301, pp. 793-796
DOI: 10.1126/science.275.5301.793

The United States took much pleasure last summer from its performance in the Olympic Games, where it won many more medals than any other country. But was this the right measure of performance? Counting four points for gold, two for silver, one for bronze, and calculating the score relative to population size, a different picture emerges. Tiny Tonga was first, Australia led among the larger economies, and overall the United States ranked 37th, well behind most of the European countries (but not the United Kingdom, a lamentable 48th).
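The weighting described above can be sketched in a few lines; the medal counts and populations below are invented placeholders, not the actual 1996 figures.

```python
# Weighted Olympic score, as in the text: 4 points for gold, 2 for silver,
# 1 for bronze, then rescaled by population size.
# All medal counts and populations below are hypothetical, for illustration.

def weighted_score(gold, silver, bronze):
    """Total points: 4 per gold, 2 per silver, 1 per bronze."""
    return 4 * gold + 2 * silver + 1 * bronze

def score_per_million(gold, silver, bronze, population_millions):
    """Weighted score per million inhabitants."""
    return weighted_score(gold, silver, bronze) / population_millions

# A tiny country with a single gold medal and 100,000 people far outscores,
# per capita, a large country with a long medal haul.
tiny = score_per_million(1, 0, 0, 0.1)      # 40 points per million
big = score_per_million(40, 30, 25, 260)    # about 0.94 points per million
```

Rescaling by population, rather than taking raw totals, is what moves a country like Tonga to the top of the table and the United States far down it.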

Similar questions arise when we ask about the quality of the scientific research output of countries. For many purposes, most notably overall advance in our understanding of nature, it is total output that matters. For other purposes—for example, in producing trained people or for underpinning industrial advances—output relative to country size [measured by population, gross domestic product (GDP), or other things] is more relevant.

In this paper I offer comparisons, from a variety of viewpoints, of scientific research outputs among several countries. I derive my analysis from a recent United Kingdom benchmarking study (1), which in turn draws heavily on earlier work (2–4), particularly an analysis of Australian research (5). These studies are all based largely on the Science Citation Index (SCI), established by the Institute for Scientific Information (ISI). This database covers scientific research publications from 79 countries and more than 4000 journals since 1981. The database has many shortcomings and biases (6), but overall it gives wide coverage of most fields. We studied the 14-year period from 1981 to 1994, in which the ISI database totaled 8.4 million papers and 72 million citations.

Publications and Citations

The top 15 countries, ranked by the contribution of their scientists to the world's total number of publications in science, engineering, and medicine from 1981 to 1994 (7, 8), accounted for 81.3% of the world's papers (Table 1). The top seven countries were the world's seven largest economies, the so-called G7 countries. The United States was dominant, publishing around 35% of the world's science. The United Kingdom was second, ahead of more populous countries like Germany, France, and Japan. The 15 countries that constitute the European Union (EU) rivaled the United States and produced ∼32% of all papers (2, 5).

Table 1.

The world's top countries, ranked by their share of the world's papers in science, medicine, and engineering (6). The table also shows citation shares, RCI (9), expenditure on R&D (22), and a measure of cost-effectiveness: citations per unit of government expenditure on R&D in G7 countries (in boldface) (1). Citation values are yearly averages for the period from 1981 to 1984; expenditures are in £ million and are for 1991. Data for the percentage of GDP spent on R&D are for 1994, except 1993 for the Netherlands, Denmark, and Sweden, and 1992 for Australia and Switzerland.


In terms of these countries' percentage shares of all citations, the rankings are similar except for India and China. The relative citation impact, RCI [citations divided by publications (6, 9)], gives some measure of the quality of the average paper. In terms of the RCI, the United States still ranks first (10), and the top 15 countries by publications share include the top 8 by RCI, although significantly reshuffled. Only three of the G7 countries (United States, United Kingdom, and Canada) rank in the top 10 by RCI; the fourth is France, at 14th.
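The RCI used in this ranking is simple to compute. The sketch below follows the text's definition (citations divided by publications), with an added normalization by the world average so that RCI = 1 means average impact; that normalization is an assumption of this sketch, and the country's counts are hypothetical. Only the world totals come from the text.

```python
# Relative citation impact (RCI): citations per paper, here expressed
# relative to the world's citations per paper so that RCI = 1 is "average".
# The world-average normalization is an assumption of this sketch; the text
# defines RCI simply as citations divided by publications.

WORLD_PAPERS = 8.4e6        # ISI total, 1981-1994 (from the text)
WORLD_CITATIONS = 72e6      # ISI total, 1981-1994 (from the text)

def rci(citations, papers):
    """Citations per paper, relative to the world average."""
    world_average = WORLD_CITATIONS / WORLD_PAPERS   # ~8.57 citations/paper
    return (citations / papers) / world_average

# Hypothetical country: 100,000 papers drawing 1,000,000 citations gives
# 10 citations per paper, i.e. an RCI of about 1.17.
example = rci(1_000_000, 100_000)
```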

The top five countries by publication shares, which are also the five largest economies, invest proportionately more in research and development (R&D) than do most other countries (Table 1). The smaller countries with high-ranking RCI, notably Switzerland and Sweden (second and third by RCI), are relatively high investors in R&D. India and China have low R&D investment and also have a low RCI.

The above analysis can be broken down by field (Table 2) (7). As expected, the United States ranked first by citation shares in all 20 fields we discriminated; percentages ranged from 70% (in psychology) to 37% (in chemistry). The United Kingdom was second in 15 of the 20 fields, and placed lowest (fifth) in physics. The rankings are more varied when the quality measure of RCI is used, although the United States still showed strongly, followed by Switzerland, Sweden, the United Kingdom, and others.

Table 2.

Comparative advantage in citations, RCA (11), and a measure of absolute quality, RCI (9), for each of the 20 fields of science defined by the ISI (6, 7, 23), for the United Kingdom. Also shown are the top five countries, ranked by share of the world's citations and by RCI in each of the 20 fields. Abbreviations are given in Table 1 and Fig. 1. BE, Belgium.


Bibliometric analysis can also uncover patterns of relative investment or relative advantage of a country in a particular subject, compared with the world average. The Australian study (5) defined a country's “revealed comparative advantage” (RCA) in a specified scientific field as the fraction of all that country's citations (or papers) that are in that particular field, relative to the fraction of the world's citations that are in that field (11). Thus, if the RCA is well above 1, a comparative advantage is revealed, and vice versa. As shown in Table 2, on the basis of this index the United Kingdom has a strong comparative advantage in pharmacology, clinical medicine, plant and animal science, and neuroscience. In view of both the RCI and the RCA, it appears that the United Kingdom is strong in biological and biomedical research, and has absolute strength across a broad range of scientific disciplines. This conclusion accords with a separate recent review of the U.K. science base (12).
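The RCA definition quoted above translates directly into code; the citation counts used here are invented for illustration.

```python
# Revealed comparative advantage (RCA), as defined in the text: the share of
# a country's citations that fall in a given field, divided by the share of
# the world's citations in that field. RCA well above 1 reveals a
# comparative advantage. All counts below are hypothetical.

def rca(country_field_citations, country_total_citations,
        world_field_citations, world_total_citations):
    """Country's citation share in a field, relative to the world's share."""
    country_share = country_field_citations / country_total_citations
    world_share = world_field_citations / world_total_citations
    return country_share / world_share

# A country with 20% of its citations in a field that holds only 10% of the
# world's citations has RCA = 2.0 there: a clear comparative advantage.
example = rca(200, 1_000, 1_000_000, 10_000_000)
```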

Similar analyses for other countries (5) suggest that some smaller European countries (Denmark, Sweden, and Switzerland) have prominence in biomedical research. The Asian economies have prominence in research related to certain industries (such as engineering, computing, chemistry, and materials). Some countries (Australia, Canada, New Zealand, and South Africa) have prominence in research based on natural resources; for others (the scientifically strong United States, Japan, France, and Germany, and the scientifically weak Papua New Guinea and Thailand) no particular pattern of specialization emerges.

A measure of evenness of a country's scientific effort is given by the variance of the distribution of its RCAs (by papers or citations) for the 20 fields specified by the ISI (13). The United Kingdom has the lowest such variance (by papers) or greatest evenness in its patterns of scientific capability among the 20 fields. The United Kingdom is followed by Germany, United States, Netherlands, Switzerland, and France (14). It is not surprising that the countries with larger economies show such relative evenness, but the presence of small countries such as the Netherlands and Switzerland (where many would expect preponderant investment in pharmacologically related research) is somewhat surprising. Most of the Asian countries that are emerging as scientific powers, in contrast, have uneven patterns (for example, Philippines, China, Indonesia, Singapore, Thailand, and South Korea).
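The variance-based evenness measure works as follows; the two RCA profiles below are invented to show the contrast between an even and a specialized country, and are shortened to six fields rather than the ISI's 20.

```python
# Evenness of a country's scientific effort, measured (as in the text) by the
# variance of its RCA values across fields: a flat profile, with every RCA
# near 1, has low variance; a specialized profile has high variance.
# Both profiles below are hypothetical.

def variance(values):
    """Population variance of a list of numbers."""
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

even_profile = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95]   # broad, even effort
uneven_profile = [3.0, 0.2, 0.1, 2.5, 0.1, 0.1]   # concentrated in two fields

# The even profile has far lower variance, i.e. greater evenness.
```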

Similarly, evenness in quality can be estimated by calculating the variance in the appropriately normalized distribution of a country's RCIs among the 20 fields (15). The greatest evenness in quality, by this measure, is shown by Australia, followed by the United Kingdom, Netherlands, France, Finland, Canada, and United States (16). This, together with the analysis above and in Table 2, indicates that the distribution of the data for the 20 fields for the United Kingdom is more tightly clustered than for any other country. Whether this is a good thing is another question.

Consideration of a country's share of the world's total papers or citations tends to focus attention on the larger economies. As in the opening analogy with the Olympic Games, we can instead ask about performance in relation to population size, or to money spent on basic research. On this basis (Table 3), the top 12 countries ranked by papers per person are smaller, mainly northern European countries; the top G7 country is Canada, at fifth, followed by the United Kingdom at eighth and the United States at ninth. A similar pattern holds for citations per person; again, France, Germany, Italy, and Japan come low in the rankings. Unlike the case for Olympic medals, there is no high relative performance by a very small country. This is understandable; science has thresholds of critical size and investment.

Table 3.

Measures of relative performance (1, 5). Top 12 countries are listed first for each index; rankings for other countries are in parentheses. Switz., Switzerland; Neth., Netherlands; N. Zealand, New Zealand; paps., papers; cits., citations.


Comparison of scientific output relative to government money spent on R&D (both in total and excluding defense funds) is arguably the best measure of the cost-effectiveness of spending in support of basic and strategic research (Table 1, last columns). There is a marked gap—by a factor of 3 or more—between the output, measured in citations per unit of civil expenditure, of the top three G7 countries (United Kingdom, United States, and Canada) and the other four. The United Kingdom does particularly well in these performance ratios: partly because the number of citations of papers by U.K. scientists is relatively high, and partly because the money it spends on R&D is relatively low. A similar, although slightly smaller, gap is seen if total government expenditure is used as the scaling factor [and similar patterns emerge if papers, rather than citations, are rescaled against spending (1)].

Patterns of Change

From 1981 to 1994, the world's output of scientific papers increased by 3.7% per year, corresponding to a doubling time of 19 years. The greatest growth rates, >10% per year, were exhibited by scientifically emerging countries such as Hong Kong, China, Singapore, South Korea, and Taiwan. The scientifically established countries had lower, though still pronounced, annual average growth in publications: United States, 2.7%; United Kingdom, 3.0%; Germany, 3.3%; and France, 5.2%. Similarly, in terms of publications relative to population size (Table 3), the countries that were already the leaders in 1981 tended to show lower growth.
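The doubling time quoted above follows from the growth rate by the standard compound-growth formula.

```python
# Doubling time implied by a constant annual growth rate r:
#     T = ln(2) / ln(1 + r)
# For the world's 3.7% per year growth in papers this gives about 19 years,
# matching the doubling time quoted in the text.
import math

def doubling_time(annual_rate):
    """Years for a quantity growing at `annual_rate` per year to double."""
    return math.log(2) / math.log(1 + annual_rate)

world = doubling_time(0.037)   # about 19 years
```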

Such increases in output from newer players have meant that the United States, United Kingdom, and most other scientifically developed Western countries saw their shares of the world's papers decrease somewhat from 1981 to 1994. The United States' share of world papers decreased by 1.0% per year over this period, the United Kingdom's share by 0.9%, and Germany's by 0.4%, whereas France's share increased by 0.7% per year and Japan's by 2.1%. Overall, however, absolute rankings have not changed much. In terms of shares of the world's total papers, the United States ranked first, United Kingdom second, Japan third, and Germany fourth, both at the beginning and at the end of the 1981–1994 interval. In terms of papers per population, the top 12 countries (Table 3) were top at both the beginning and end of the 14-year span, with the exception of the Netherlands (which displaced Germany from the top 12) and with minor reordering in the rankings (chauvinistically, we note that the United States was ahead of the United Kingdom at the start, behind at the end). The U.S. share of the world's citations decreased by about 0.8% per year, while the United Kingdom's decreased by 1.1% per year.

Change in the inherent quality of a country's science from 1981 to 1994 might be indicated by changes in the RCI. As shown in Fig. 1, the emerging Asian countries mentioned above tended to show large increases in RCI over the 14-year period, but from a relatively low base. Other countries started the period with low RCI values and fell further back. The countries that rank highest in average RCI (Table 1), or in citations or papers per person (Table 3), tended to show little change. The greatest gain among the top 12 countries in Table 3 was by New Zealand, which showed an average annual growth in RCI of 1.4%. The greatest declines were for Australia, Denmark, Norway, and Sweden, all of about 0.9% per year. RCI increased at 0.1% per year in the United States and decreased at 0.2% per year in the United Kingdom. In general, there is thus little suggestion of any marked decline in the quality of the scientific output of the top-ranking countries in Table 3.

Fig. 1.

Change in the quality of scientific output, measured as average percentage change in RCI (9) versus the initial RCI (average 1981–1985), for the larger scientifically developed, and developing, countries. Abbreviations are as in Table 1, plus: CH, Chile; HK, Hong Kong; ID, Indonesia; MA, Malaysia; MX, Mexico; NZ, New Zealand; NO, Norway; PN, Papua New Guinea; PH, Philippines; SI, Singapore; SA, South Africa; SK, South Korea; TA, Taiwan; TH, Thailand.

Corresponding analyses of trends in citations need to be interpreted with care, because citations accumulate with time (17). Thus, all else being equal, papers published in the early 1980s (or earlier) should contribute more to citation counts than those published in the early 1990s. But such considerations seem to us unlikely to introduce time delays of more than a couple of years [a supposition supported by the average half-life of citations (17)]. A different worry is that the researchers who did this work were trained 10, 20, or more years earlier. Today's performance may say little about how well the new generations of scientists are being nurtured [see (1) for further discussion].

Another way of assessing a country's science base is to look at its success in winning major international prizes. It is often asserted, for example, that the relative paucity of Nobel prizes won by U.K. scientists over the past two decades is evidence of declining scientific strength. Although still the best known and richest, the Nobel Prize is just one among a growing number of notable awards. Its scope is, moreover, restricted: among the sciences, only physics, chemistry, and medicine (broadly construed as the biomedical end of the life sciences) are recognized.

We thus counted all internationally recognized scientific prizes worth more than $200,000 (U.S.), along with the Fields Medal in mathematics (18), by decade through the 20th century (Fig. 2). Until the past few decades, the count consists essentially of Nobel prizes alone. German scientists won most of the awards in the early decades, and scientists in the United States began to win many awards in the 1930s. In the decades around World War II, proportionally fewer German and French scientists won, while U.S. scientists established a continuing command of around half of the world's prizes. Scientists in the United Kingdom have maintained a steady fraction of about 10% of all awards throughout the century. If this total is rescaled for population size, the United Kingdom has been the leader throughout the century. One bias in this analysis is that one recent large prize, the Australia Prize, has been won largely by Australians. Japan and the United States each give two large prizes; the United Kingdom, Germany, and France give none.

Fig. 2.

The figure shows the fraction of the world's major international science prizes (18) won by the G5 countries (the five largest economies) and Australia, by decade, over the 20th century (1). The order of the curves is as listed in the legend.

Analysis of prizes gives a time-delayed measure of performance. This consideration may explain why the G5 countries have continued to win about 80% of the awards over the past three decades, despite the rising performance of new players and the G5 countries' diminishing share of the world's papers and citations.


The above comparisons are confounded to a degree because a large and growing fraction of scientific work involves international collaborations [see discussion in (8)]. In 1994, for example, 26% of papers with first authors in the United Kingdom were the product of transnational collaborations (4). Another concern is that there is an English-language bias in the ISI database, both in the journals included and in patterns of citation. Could this explain why the United States, United Kingdom, and Canada do so much better than France, Germany, Italy, and Japan (Table 1, last columns)? But the same broad patterns of performance among the G7 countries are also seen in the analysis in Table 3, where the four leading countries are not English-speaking.

Despite these concerns, the large differences in performance indicated in the last columns of Table 1 are surprising. My view—and it is no more than a guess—is that a large part of the difference in performance between the top dozen or so countries in Table 3 and the lower-ranking G7 countries arises from differences in the nature of the institutional settings where the scientific research is done. Germany and France have superb scientists who do outstanding work, but a large proportion do this work in dedicated research institutes: the Max Planck Institutes and those of the CNRS. By contrast, most basic research in North America, the United Kingdom, the Scandinavian countries, and others among the top countries in Table 3 is done in universities (19). The nonhierarchical nature of most North American and northern European universities, coupled with the pervasive presence of irreverent young undergraduate and postgraduate students, could be the best environment for productive research. The peace and quiet to focus on a mission in a research institute, undistracted by teaching or other responsibilities, may be a questionable blessing.

I thus suggest that, among the scientifically advanced countries, better value for money (in terms of papers or citations per person or per dollar spent) might be associated with performing basic research mostly in universities, rather than in research institutes. If so, there are significant implications for those countries, such as the United Kingdom, Sweden, and Australia, which recently have seen great enlargement in the number of tertiary institutions designated as universities. It seems unlikely that governments can afford to supply the previously customary level of infrastructure for research (equipment, technicians, research libraries, properly furnished laboratories, and so on) to all departments in these more numerous universities. This raises questions about how to focus such infrastructure support and indirect costs upon the best people and groups. The question is sharpened by the observation that, in the United Kingdom, by the year 2000 half of all scientific papers will have three or more authors, from two or more institutions. I think the alternative of hiving off most fundamental research into dedicated institutes could be a suboptimal solution. This issue deserves further analysis.

To end on a parochial note, I observe that the United Kingdom does well in attracting inward investment (20). It is believed that the strength, and the accessibility, of the science base is a large factor in this success. Anecdotes abound, but a convincing and objective analysis is more difficult. More generally, the acknowledged strength of the science base in the United Kingdom, although it helps create wealth around the world, is not consistently translated into strong industrial performance within the United Kingdom itself. The Technology Foresight enterprise in the United Kingdom currently seeks to remedy this, by forging new connections between the two. But this is another story (21).


