Intelligence Science: Reverse Peer Review?


Science  26 Mar 2004:
Vol. 303, Issue 5666, pp. 1945
DOI: 10.1126/science.303.5666.1945

Several weeks before the invasion of Iraq began, this space was occupied by a discussion of “inspection science.” Given the slow application of detection technologies by the inspection team and the prospects for new methods, the status of inspection science seemed to argue for a delay in beginning hostilities, giving us time to answer the question of whether Iraq harbored weapons of mass destruction (WMD). Belatedly, we have the answer: There weren't any. So the new question is, what went wrong? That is a better question than “Who lied?”, the frequent favorite of political finger-pointers. But it invites an examination of what we might call “intelligence science.” Is intelligence a science, and if so, what are its principles?

Well, intelligence as it is best practiced by experienced agents and agencies surely looks like a science to us. A problem is defined so that the right measures for dealing with it can be selected; relevant information is gathered and then analyzed, so that meaningful relationships can be uncovered and understood; and tentative conclusions are reached and then tested against possible alternatives. Finally, perhaps most important, critical review is sought. That sequence is just about what an epidemiologist would follow, for example, when confronting a new influenza strain.

So how did our intelligence scientists seek to answer the WMD question? The first problem is that their assignment was hampered by an unclear definition. WMD are defined in U.S. policy as comprising nuclear, chemical, and biological weapons, but there are vast differences among these. Chemical weapons probably don't deserve the WMD label; biological weapons are highly varied and in no way resemble nuclear weapons. Targets as varied as these may require different data-gathering strategies. For scientists who like clear assignments, this one was difficult from the outset.

Information gathering, the experts emphasize, is often largely a matter of interviewing likely sources or applying detection technology to reveal activities that are otherwise covert. That has been the dominant mode for inspection intelligence systems. As for interviews, former top U.S. weapons inspector David Kay's congressional testimony later revealed that those undertaken before the war had disclosed nothing about the dysfunctional state of weapons programs and their management by Saddam Hussein's government in Iraq. And as for detection technology, what was available at the time had given no hint of the presence of WMD. The analysts could have concluded that perhaps there were none, but it may have been equally reasonable to decide that they were simply well hidden.

The conclusions were developed in a National Intelligence Estimate (NIE) submitted in September of 2002 that was, under the circumstances, appropriately ambiguous with respect to the presence of WMD. An “unclassified” version was publicized in October 2002 amid preparations for war. In July of 2003, after the announced ending of major combat, a “declassified” version of the NIE was released. Those who have seen the longer, classified document say that the July version consists of verbatim excerpts from it. But the October version was different. Fifteen uses of the adverb “probably” that appeared in the official NIE were omitted from it, along with other caveats, and a number of dissenting views contained in the classified NIE were absent from the October release. The Carnegie Endowment for International Peace has done a careful analysis of the documents, and its Web site (http://www.ceip.org/) has the full texts of both.

The last phase of intelligence science, as we said, would entail critical review of the authors' conclusions. The odd history of the NIE suggests that it endured a critical review rather different from the kind we know as peer review. In the latter process, at least here at Science, qualifications and limitations on scientific conclusions are usually added to the text at the insistence of reviewers, rather than removed. But as the intelligence experts appeared before higher-level reviewers from the Pentagon or the Executive Office, it is clear that qualifying language and caveats were deleted instead, so that conclusions were strengthened. In a scientific setting, we'd call that arguing from the desired conclusion rather than from the data. So for all we know, the intelligence agencies may well have done some real science—until they got to the political level and encountered reverse peer review.
