Technical Comments

Government Funding of Research and Development

Science, 31 Oct 1997:
Vol. 278, Issue 5339, pp. 878–880
DOI: 10.1126/science.278.5339.878

In a Policy Forum, Robert M. May (1) uses bibliometric data from an Australian benchmarking study (2) to show that the United Kingdom had the most cost-effective science base among G7 countries (3), as measured by citations attracted per million pounds (per £million) spent. In doing so, May, who is the Chief Scientist of the United Kingdom, has established a baseline that raises the profile of quantitative studies in science policy and against which investigators can measure future performance. Part of May's analysis rests on the assumption that there is a relationship between financial investment in research and development (R&D) and scientific impact, as measured by citations to papers published in the peer-reviewed serial literature. Although it is reasonable to assume that there is such a relationship, we have two main concerns with May's analysis. First, he does not allow for a time lag between expenditure on R&D and the evaluation of a country's scientific impact. By taking this into account, we demonstrate that United Kingdom (U.K.) science is not the most cost-effective of the G7. Second, we argue that it is incorrect to relate the citation performance of the national science system only to government expenditure, because private and overseas funders have made an increasingly large contribution to public domain U.K. science in recent years.

May estimates (1) return on investment for a single year (1991) based on the yearly average number of citations over the period 1981–94 (4). However, expenditure in 1991 will have little effect on citations before about 1997 because there is commonly a 4-year lag before papers emerge from the funded research and at least a further 2-year period before the citation peak is reached. The preferred analysis would be to compare expenditure figures for each year with citations achieved (say, 4 to 6 years later) and to track this over time. Because May takes average citation data as the numerator and a single funding year as the denominator, his analysis also becomes sensitive to the funding year selected. There is no “correct” funding year that one can use, but we would argue that an earlier year than 1991 would be better (5). We have therefore looked at government expenditures in the G7 countries for a range of earlier years (Table 1) (6) and compared these with May's citation data to calculate Apparent Cost Effectiveness (citations per £million).
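To make the proposed lagged comparison concrete, a minimal sketch follows (in Python, with wholly hypothetical expenditure and citation figures, and an assumed 5-year lag within the 4-to-6-year window suggested above):

    # Illustrative sketch of the lagged cost-effectiveness calculation.
    # All figures are hypothetical placeholders; the 5-year lag is an
    # assumption within the 4-to-6-year window suggested in the text.

    LAG = 5  # years from expenditure to the resulting citation peak

    # Government civil R&D expenditure, in £million (hypothetical)
    expenditure = {1986: 3600, 1987: 3500, 1988: 3400, 1989: 3300}

    # Citations attracted in each year (hypothetical)
    citations = {1991: 110_000, 1992: 112_000, 1993: 115_000, 1994: 118_000}

    def cost_effectiveness(citation_year):
        """Citations per £million, matched to spending LAG years earlier."""
        return citations[citation_year] / expenditure[citation_year - LAG]

    for year in sorted(citations):
        if year - LAG in expenditure:
            print(f"{year} citations / {year - LAG} spend: "
                  f"{cost_effectiveness(year):.1f} citations per £million")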

Table 1

Government funding of R&D for civil objectives in real terms (adjusted to 1991 prices in £billion) and, in italics, as a percentage of gross expenditure on R&D (GERD), 1986–91, for the G7 countries (4).


Our sensitivity analysis (Fig. 1) shows that the United Kingdom moves into first position ahead of the United States only if expenditure in 1991 is used as the denominator. This is because, in contrast to the situation in other G7 countries, U.K. government civil expenditure on R&D has been steadily falling in real terms (7). This might have been expected to lead to a decline in U.K. scientific output. In practice, the reduction in government expenditure has largely been made up by an increase in funding from other sources, including private nonprofit funders such as charities, and the part of spending by industry and foreign sources that is for research in the public domain (that is, which leads to research publications in the open literature).

Figure 1

Apparent cost effectiveness of the G7 countries' scientific output, as measured by the number of citations per year for 1981–94 per £million of government civil expenditure. As the numerator is constant, the cost-effectiveness ratios are sensitive to the year of expenditure: A decline in government civil expenditure will result in an increase in apparent cost effectiveness, as illustrated for the United Kingdom.
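The arithmetic behind this sensitivity can be shown with a short sketch (hypothetical figures throughout): holding the citation numerator fixed while the expenditure denominator falls produces an apparent gain with no change in output.

    # Hypothetical illustration of denominator sensitivity: a fixed
    # citation count divided by a falling expenditure series inflates
    # the apparent cost-effectiveness ratio year on year.

    avg_citations = 100_000  # fixed numerator (average citations, hypothetical)

    # Government civil expenditure in £million, falling in real terms (hypothetical)
    spend_by_year = {1986: 3600, 1987: 3500, 1988: 3400,
                     1989: 3300, 1990: 3200, 1991: 3100}

    for year, spend in sorted(spend_by_year.items()):
        print(f"{year}: {avg_citations / spend:.1f} citations per £million")
    # The ratio rises every year even though output never changed;
    # the apparent improvement reflects only the shrinking denominator.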

The importance of nongovernment funders varies with the field of science. In particle physics, government support dominates, while in biomedical science, private funders play a bigger role. Biomedical science represents one of the United Kingdom's great strengths, with outputs increasing from 23,354 papers in 1988 to 29,391 papers in 1994 (8). Research charities have played a key role in the growth in numbers of research papers in this field. Over the period 1988–94, there was a 60% increase in the number of funding acknowledgements to U.K.-based charities in that country's biomedical literature. By 1994 charities were the acknowledged funding source on about 25% of national U.K. biomedical science publications. In comparison, the growth in the number of papers acknowledging support from the main government funding agency, the Medical Research Council, was only 5% over the same period. Despite this growth in charity output, the United Kingdom's world share of publications in biomedicine has declined marginally (9), mainly because of more rapid growth in other countries. Without the contributions of charities, it is clear that the United Kingdom's international position in biomedical science would be worse than it is now.

It is doubtful that the increase in private sources of funding is sustainable indefinitely. So, if government funding continues to decline, it is likely that there will be a reduction in the United Kingdom's gross scientific output and impact. Given the evidence that a strong, locally funded science base is important for technological innovation in industry (10), our results argue for a sustained or increased commitment of government funding to science in the United Kingdom.

REFERENCES AND NOTES

Response: First, in the part of my Policy Forum that focused on the ratio of outputs (measured by papers or citations) to inputs (in terms of funding), I emphasized the marked (more than twofold) differences in such ratios between the United States, United Kingdom, and Canada (along with other countries such as Switzerland and Sweden) and the other four G7 nations (1). Although readers may have misperceived my message as one of U.K. chauvinism, the intent was to air some tentative speculations on the reasons for these differences in “cost effectiveness,” which have sparked discussion and controversy (2). The main feature of figure 1 in the comment by Grant and Lewison is to confirm that these marked differences among the G7 persisted over the extended interval 1986–91; the gap is so marked that the key to the figure is inserted between the top three lines and the bottom four.

Second, Grant and Lewison correctly observe that my “output/input” calculations are rough, in two respects: Government civil expenditures on R&D are an unsatisfactorily coarse measure of what produces the output of the “science base”; and there are time lags between inputs of relevant funding and outputs of papers or, even more, citations. But, having made these telling points, Grant and Lewison do not pursue them. Instead, they repeat my rough calculation, dividing the average citations over the span 1981–94 by total government civil R&D spend for a range of years, with the conclusions noted above.

Taking on board Grant and Lewison's constructive criticisms, I have made better estimates of “output/input” ratios (Tables 11 and 12). The first problem is how to measure “input.” Government civil spend on R&D includes both too much and too little: too much in that it includes money spent on R&D to underpin policy, which does not typically lead to publications as might be counted by the Institute for Scientific Information (ISI); too little in that support for the “science base” from charities and “business enterprise” (industry, and so forth) is omitted. But consistently collected data about appropriate expenditure are not easily compiled. The Organization for Economic Cooperation and Development (OECD) publishes statistics for expenditure on R&D performed in higher education (HERD) (3), which is probably a better measure of “input” than total government civil R&D spend; but although roughly 80% of U.K. scientific papers come from universities (including teaching hospitals), such patterns vary from country to country. A better “input” measure is arguably the total “science base” expenditure on R&D (SBRD), defined as all R&D carried out in universities and nonprofit-making institutions, irrespective of funding sources, including intramural government-funded civil R&D, mostly at government research establishments. The U.K. Office of Science and Technology has put together estimates of such expenditure for several countries, drawn from published OECD statistics on the breakdown of national gross expenditure on R&D (GERD), but these numbers arguably also suffer from problems of comparability owing to national structural differences, although possibly less so in aggregate than the OECD HERD numbers (4).

The second, and easier, problem is how to take account of time lags between inputs and outputs. If the output is scientific papers, I divide the total papers (5) published in any one year by the input (HERD, or SBRD, or total government civil spend) 3 or 4 years earlier. If the output is citations, then one correspondingly counts citations to the papers in question.
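A minimal sketch of this lagged ratio, using hypothetical paper counts and HERD figures purely for illustration:

    # Sketch of the output/input ratio described above: papers published
    # in year Y divided by HERD (or SBRD) expenditure in year Y - 3.
    # All figures are hypothetical placeholders.

    LAG = 3

    papers = {1993: 45_000, 1996: 50_000}  # papers per year (hypothetical)
    herd = {1990: 2000, 1993: 2100}        # HERD in £million (hypothetical)

    for year, n_papers in sorted(papers.items()):
        spend = herd[year - LAG]
        print(f"{year} papers / {year - LAG} HERD: "
              f"{n_papers / spend:.1f} papers per £million")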

Table 11

Papers per £million spent. Ratio of output of papers (5) to input of SBRD or HERD expenditure 3 years earlier (3, 4). Ratios are given at two time points: 1993/90 and 1996/93.

Table 12

Citations per £million spent. Similar to Table 11, except the outputs are citations to papers published in 1993 and 1996. The latter have, on average, many fewer citations.


I have calculated (Table 11) the ratios of outputs of scientific papers, in 1993 and 1996, to inputs of HERD or SBRD expenditures 3 years earlier, for an admittedly arbitrary set of countries (the G7 plus Switzerland, Sweden, and the Netherlands). The output in citations to the papers published in 1993 and in 1996 is also presented (Table 12) (obviously, 1996 papers have on average accumulated fewer citations than 1993 ones). The patterns in these two tables are striking and fairly stable over the 3-year interval of inputs, 1990–93.

The second part of the comment by Grant and Lewison underlines the important contribution made by charities to the U.K. “science base,” especially in biomedical areas. I strongly endorse their views. In the U.K. in 1993, funding from the Wellcome Trust and other charities accounted for roughly 10% of the total “science base” income, a higher proportion than for any other country listed (Tables 11 and 12). Similarly, business enterprise support also accounts for about 10% of the total U.K. “science base” expenditure, again a higher proportion than for other countries listed (Tables 11 and 12).

Any attempt to compare output/input ratios among countries, whether by scientific papers or by citations, is bedeviled by all manner of difficulties in compiling truly comparable statistics and by ineluctable biases inherent in counting papers or, even more, citations (6). Even so, some of the differences in such crude measures of the cost efficiency of research among the G7 and other countries (Tables 11 and 12) are so large as to defy explanation as statistical artifacts. These patterns deserve more attention than they have so far received (7).

REFERENCES AND NOTES
