News Focus | Education

Who Ranks the University Rankers?

Science  24 Aug 2007:
Vol. 317, Issue 5841, pp. 1026-1028
DOI: 10.1126/science.317.5841.1026

Everyone would like to score well in an academic beauty contest. But is it really possible to assess an institution's worth?

Who gets to take credit for Albert Einstein's Nobel Prize? The question seems absurd, but it's important for the reputations of two Berlin universities. The reason: Even Nobels bagged 90 years ago are counted in the “Shanghai ranking,” an influential list of the world's 500 best universities. Both Free University (FU), founded in West Berlin in 1948, and Humboldt University (HU), on the other side of the former Wall, claim to be the heirs of the University of Berlin, the erstwhile home of Albert Einstein and many other Nobelists.

The resulting tug of war has had bizarre results. When the team at Shanghai Jiao Tong University produced its first ranking in 2003, it assigned the prewar Nobels to FU, helping it earn a respectable 95th place. Swayed by protests from the other side of town, the team assigned them to HU in 2004, propelling it to 95th rank and dropping FU by more than 100 places. After FU in turn cried foul—and many e-mails between Germany and China later—the team simply took both universities out of the race. Both are still missing in the 2007 edition, published 3 weeks ago.

The controversy is just one among many in the booming business of university rankings. Invented by the magazine U.S. News & World Report in 1983 as a way to boost sales, these academic beauty contests—called “league tables” in the U.K.—now exist at the national level in a dozen countries; there are a handful of European and global lists as well. Almost all have come under fire from universities, scientists, and, in some cases, fellow rankers.

This year, for instance, presidents of more than 60 liberal arts colleges refused to participate in a key component of the U.S. News & World Report rankings, published last week. The rankings, they wrote, “imply a false precision and authority” and “say nothing or very little about whether students are actually learning at particular colleges or universities.” Last year, 26 Canadian universities revolted against a similar exercise by Maclean's magazine.

The critics take aim not only at the rankings' methodology but also at their undue influence. For instance, some U.K. employers use them in hiring decisions, says Ellen Hazelkorn of the Dublin Institute of Technology, adding that funding organizations, philanthropists, and governments are paying increasing attention as well. France's poor showing in the Shanghai rankings—it had only two universities in the first top 100—helped trigger a national debate about higher education that resulted in a new law, passed last month, giving universities more freedom.

Measuring up

So how do you measure academic excellence? Most rankings start by collecting data about each university that are believed to be indicators of quality. Each indicator is given a predetermined “weight,” and the weighted indicators are summed into a total score that determines a university's rank. But there are vast differences in the number and the nature of the indicators, as well as in the way the data are obtained.

National university rankings cater primarily to aspiring students about to choose where to study, which is why they focus on education. In the U.S. News & World Report ranking of “national universities,” for instance (there are separate lists for many other types of institutions and programs), student retention rates count for 20%, the average amount spent on each student for 10%, and alumni donations, believed to reflect student satisfaction, for 5% (see graph). The University Guide published by the Guardian newspaper in the U.K. has a formula with some of the same indicators, but also a 17% weight on graduates' job prospects.

[Graph: indicator weights used by the major rankings. Sources: U.S. News & World Report, Times Higher Education Supplement, Shanghai Jiao Tong University, CWTS]
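To make the weighted-sum approach concrete, here is a minimal sketch in Python. Only the three U.S. News weights quoted above (retention 20%, spending per student 10%, alumni giving 5%) come from the article; every other indicator, weight, and number is an invented placeholder, not any ranker's actual formula.

```python
# Minimal sketch of a weighted-sum ranking score.
# Only the 20% / 10% / 5% weights are taken from the article; all other
# indicators, weights, and numbers below are invented placeholders.

WEIGHTS = {
    "peer_reputation": 0.25,        # hypothetical
    "retention_rate": 0.20,         # quoted in the article
    "faculty_resources": 0.20,      # hypothetical
    "spending_per_student": 0.10,   # quoted in the article
    "student_selectivity": 0.10,    # hypothetical
    "graduation_performance": 0.10, # hypothetical
    "alumni_giving": 0.05,          # quoted in the article
}

def total_score(indicators):
    """Indicators are assumed to be pre-normalized to a 0-100 scale."""
    return sum(WEIGHTS[name] * value for name, value in indicators.items())

# Toy data for two fictional universities.
universities = {
    "Example State": {"peer_reputation": 70, "retention_rate": 90,
                      "faculty_resources": 65, "spending_per_student": 60,
                      "student_selectivity": 75, "graduation_performance": 80,
                      "alumni_giving": 40},
    "Example Tech":  {"peer_reputation": 85, "retention_rate": 80,
                      "faculty_resources": 75, "spending_per_student": 90,
                      "student_selectivity": 85, "graduation_performance": 70,
                      "alumni_giving": 55},
}

# A higher total score means a better (lower-numbered) rank.
ranked = sorted(universities, key=lambda u: total_score(universities[u]), reverse=True)
for rank, name in enumerate(ranked, start=1):
    print(rank, name, round(total_score(universities[name]), 1))
```

Because the weights are fixed in advance, even a small change to them can reshuffle the resulting list, which is one reason methodological tinkering draws so much criticism.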

Most international rankings, meanwhile, put a heavy emphasis on research output. That's in part because they are aimed more at policymakers but also because education systems and cultural contexts are so vastly different from country to country that solid and meaningful data are hard to come by. Average spending per student, for instance, doesn't tell you much if you compare China with Germany. Nonetheless, the Times Higher Education Supplement (THES) tries to capture education with a few very simple indicators that it believes to be universally valid: the staff/student ratio and the percentages of students and staff from overseas, regarded as a measure of a school's international cachet.

Ranking education poses another problem: Many rankings rely on universities themselves to provide key data, “which is always a deal with the devil,” says Alex Usher of the Educational Policy Institute Canada in Toronto, who studies rankings. There are documented cases of universities cheating in the U.S. News rankings, for instance, and although U.S. News crosschecks the data with other sources, there are always ways to manipulate them. For example, colleges are known to encourage applications just so they can reject more students, thus boosting their score on the “student selectivity” indicator.

Even more controversial are peer-review surveys, in which academic experts judge institutions. THES, for instance, assigns a whopping 40% to the opinions of more than 3700 academics from around the globe, whereas the judgment of recruiters at international companies is worth another 10%. But when researchers from the Centre for Science and Technology Studies (CWTS) at Leiden University in the Netherlands compared the reviewers' judgments with their own analysis—based on counting citations, an accepted measure of scientific impact—they found no correlation whatsoever. “The result is sufficient to seriously doubt the value of the THES ranking study,” CWTS Director Anthony van Raan wrote in a 2005 paper.

The discrepancy might explain why—to the delight of Australian academics and newspapers—six universities from Australia ended up in the THES top 50 in 2004, wrote Van Raan, who suspected “strong geographical biases” in the review. Martin Ince, a contributing editor who manages the THES ranking, says that the survey has gotten better since 2004 and has a good geographical balance. He believes Australia's strong showing may have been the result of aggressive marketing of its universities in Asia. But he concedes that reputation surveys may favor “big and old universities.”

Peer review is also a major bone of contention in the U.S. News ranking. “We get a list of several hundred institutions, and we're simply asked to rank them on a scale of 1 to 5. That's preposterous,” says Patricia McGuire, president of Trinity University in Washington, D.C., and one of those who boycotted the reputation survey this year. The ranking can't value what her school excels at, she says: providing a degree to mostly minority women from low-income backgrounds.

U.S. News editor Brian Kelly dismisses the boycott's significance. The ranking has always had its detractors, he says, but more than half of university officials still fill out the questionnaire. And the magazine could always find other people to review schools.

Shanghai surprise

The Shanghai ranking avoids all of these problems by eschewing university-provided data and expert reviews. Instead, it uses only publicly available data, such as the number of publications in Nature and Science, the number of Nobel Prizes and Fields Medals won by alumni and staff, and the number of highly cited researchers. The result is a list based almost exclusively on research. Nian Cai Liu, who heads the Institute of Higher Education at Shanghai Jiao Tong University, started the ranking 5 years ago because he wanted to know how Chinese universities were placed in the global pecking order. When colleagues started asking for the data, Liu put them on a no-frills Web site, which now gets thousands of visits a day.

But as the Berlin quarrel shows, the ranking has its own problems. For example, Shanghai credits the institution where the Nobelist worked at the time of the award. And that can make a difference. Andrew Fire's 2006 Nobel in physiology or medicine helped his current institution, Stanford University in Palo Alto, California, move up from third to second place, even though Fire did his groundbreaking work on RNA interference while at the Carnegie Institution in Baltimore, Maryland.

Universities that focus on social sciences or humanities also tend to suffer under the Shanghai system. Recognizing that scientists in those disciplines gravitate to different journals, Liu doesn't count Nature and Science papers and redistributes that 20% share across other indicators. Still, the effect is noticeable: In 2006, the well-respected London School of Economics and Political Science ended up in the 201-300 tier (this far down the list, Liu no longer gives individual ranks), whereas the THES awarded the school 17th place.

Well aware of their influence, and the criticisms, the rankers themselves acknowledge that their charts aren't the last word. U.S. News & World Report, for instance, advises students to take many factors into account when choosing a college. A pop-up window on Liu's Web site warns that “there are still many methodological and technical problems” and urges “cautions” when using the results.

In response to the critics, some rankers are also continuously tinkering with their formulas. But that opens them up to another criticism, namely, that a university can appear to become significantly better or worse in a single year. Many have accused U.S. News of changing its method precisely to shake up the tables and thus boost sales, a charge the magazine rejects.

In part to boost their credibility, the rankers have founded the International Rankings Expert Group (IREG), which in 2006 came up with a set of ranking guidelines. Called the Berlin Principles, they stress factors such as the importance of transparency, picking relevant indicators, and using verified data. Usher, an IREG member, concedes that the principles are quite general in nature because they are the “biggest common denominators” among groups of rankers with very different views. Many rankings aren't fully compliant with the rules yet, says CWTS researcher Henk Moed.

[Table: Who's right? Although both agreed that the U.S. led the list and the U.K. came second, the 2006 Shanghai and THES rankings differed markedly on where Earth's 100 best universities were located. Legend: U = one university in the top 100; * includes Hong Kong (3 universities). Sources: Shanghai Jiao Tong University, THES]

DIY ranking

Some believe the way forward lies with more sophisticated ways of presenting the data. The group in Leiden, for instance, produces rankings of European universities based purely on publication and citation data and presents them as not one but four tables. Each uses a different variable; the yellow ranking, for instance, looks at the total number of papers produced, whereas the green ranking (billed as the “crown indicator”) is based on papers' impact, adjusted so that it doesn't reward bigger institutions or those working in fields in which scientists cite each other more often. The results aren't as simple as a single list, Moed concedes, but they do provide a more complete picture.
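As a rough illustration of what field- and size-normalized impact means, consider the sketch below. It is not CWTS's actual “crown indicator” formula; the fields, reference values, and papers are invented. The idea shown is simply that each university's citations are compared with the world average for the same fields before being aggregated, so that neither sheer size nor a citation-heavy discipline automatically inflates the score.

```python
from collections import defaultdict

# Toy publication records: (university, field, citations). All data invented.
papers = [
    ("Example U", "physics", 12), ("Example U", "physics", 3),
    ("Example U", "economics", 4),
    ("Sample Tech", "physics", 30), ("Sample Tech", "economics", 1),
]

# Invented world-average citations per paper for each field.
field_average = {"physics": 10.0, "economics": 2.5}

# Field-normalized impact: actual citations are compared with what an
# "average" paper in the same field would collect, aggregated per
# university as a ratio of sums, so output volume alone does not help.
actual = defaultdict(float)
expected = defaultdict(float)
for uni, field, cites in papers:
    actual[uni] += cites
    expected[uni] += field_average[field]

for uni in actual:
    # Above 1.0: cited more than the world average for its fields; below: less.
    print(uni, round(actual[uni] / expected[uni], 2))
```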

Others are going further. The Centre for Higher Education Development (CHE) in Gütersloh assesses German university departments without aggregating the results into a single score; departments are simply slotted into top, middle, and bottom tiers. It also lets users sort universities by their own favorite indicators. Obviously, a ranking such as this one makes for less compelling newspaper or magazine copy, like a Miss Universe contest without a winner. But a spokesperson for Die Zeit, the German newspaper that publishes the CHE rankings, says its annual university guide is a bestseller anyway, and the interactive design lures many readers to its Web site.
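A minimal sketch of that kind of do-it-yourself presentation, assuming invented departments and indicators (this is not CHE's actual method or data): the reader picks one indicator, and departments are sorted on it and slotted into rough top, middle, and bottom tiers rather than being fused into a single score.

```python
# Sketch of a CHE-style presentation: no aggregate score, just per-indicator
# sorting into tiers. Department names and all numbers are invented.
departments = {
    "Uni A physics": {"research_funding": 8.1, "student_rating": 2.1, "phd_output": 35},
    "Uni B physics": {"research_funding": 3.4, "student_rating": 1.6, "phd_output": 12},
    "Uni C physics": {"research_funding": 5.9, "student_rating": 2.8, "phd_output": 28},
}

def tiers(indicator, higher_is_better=True):
    """Slot departments into top/middle/bottom thirds on one chosen indicator."""
    ordered = sorted(departments, key=lambda d: departments[d][indicator],
                     reverse=higher_is_better)
    third = max(1, len(ordered) // 3)
    return {dept: ("top" if i < third else "middle" if i < 2 * third else "bottom")
            for i, dept in enumerate(ordered)}

# The reader picks the indicator, e.g. research funding:
print(tiers("research_funding"))
# Student ratings here use an invented 1 (best) to 6 scale, so lower is better:
print(tiers("student_rating", higher_is_better=False))
```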

With all the complaints, it's easy to forget that rankings have benefits as well, says Moed. Competition spurs universities to actually perform better, he says, and the rankings provide students and policymakers with answers—even if they're imperfect—to legitimate questions about quality. Other rankers point out that universities tout the results if they do well, and they don't like being excluded. Perhaps that's why the presidents of the competing Berlin universities announced shortly after the 2007 Shanghai ranking appeared that they would sit down again to discuss the legacy of Einstein and his illustrious colleagues. A compromise might propel both back onto the list in 2008.

Einstein might have appreciated the irony. A sign in his office at Princeton reportedly read: “Not everything that counts can be counted, and not everything that can be counted counts.”
