Policy Forum

Scholarly Communication: Cultural Contexts, Evolving Models

Science  04 Oct 2013:
Vol. 342, Issue 6154, pp. 80-82
DOI: 10.1126/science.1243622

Abstract

Despite predictions that emerging technologies will transform how research is conducted, disseminated, and rewarded, why do we see so little shift in how scholars at the most competitive and aspirant institutions disseminate their research? I describe research on faculty values and needs in scholarly communication that confirms a number of conservative tendencies in publishing. These tendencies, influenced by tenure and promotion requirements as well as by disciplinary cultures, have both positive and negative consequences. Rigorous research could inform the development of good practices and policies in academic publishing, as well as counter rhetoric concerning the future of peer review and scholarly communication.

For over two decades, many have predicted that models of scholarly communication enabled by emerging technologies will transform how research is conducted, disseminated, and rewarded. The transformation would be driven by new generations of scholars weaned on file sharing, digital piracy, Facebook, Twitter, and yet-unrealized social media technologies. Prognostications about the future, however, are often devoid of empirical evidence useful in policy development. How can we achieve realistic publishing economies that serve the needs of scholars while maintaining quality in research output and institutional culture? In addition to laudable declarations and manifestos (1, 2), a critical mass of relevant and rigorous research would be welcome. This should include investigations into effective tenure and promotion (T&P) review practices in higher education globally; the role of bibliometrics and other mechanisms, such as open peer-review experiments, in complementing traditional peer review; and ways to create sustainable and accessible publishing models that filter scholarship effectively, reliably, and in a way that cannot be gamed or abused.

Expectations for change in academic publishing have been influenced by pundits in the Internet industry, the open-source movement in computer programming, the rise of Web 2.0 social media tools, and the Wikipedia model of crowd-sourced production [e.g., (3)]. The arXiv preprint server—used heavily by computationally based, high-paradigm, and low-commercial-value fields—has often been held up as a one-size-fits-all model to which all disciplines could aspire. “Big data” (4) and cyberinfrastructure (5) have promised unlimited data sharing and reuse, as well as replicability and reproducibility, in highly collaborative and distributed research environments. In this context, the journal article or monograph is considered a format unconducive to the effective transfer of knowledge.

Some have suggested that the information overload resulting from this explosion of data and new publication forms could be ameliorated by new algorithmically generated metrics (6), broadly referred to as alt-metrics (7). Adding to this mix is a proliferation of publication forms that take advantage of new media tools and shift the burden of payment for producing the scholarly literature from library subscriptions to authors via the article processing charge [gold open access (OA), e.g., Public Library of Science (PLoS) journals]. Most recently, we have seen a wave of gold OA megajournals following a variety of “publish then filter” models (e.g., PLoS One, F1000Research, and SAGE Open) that advertise fast publication, article-level metrics, and open commenting as an alternative to stringent prepublication peer review by experts in a disciplinary community.

Reality Versus Rhetoric

Box 1

Future research.

Empirical research, especially social science research, can confirm good academic publishing practices and guide the future of scientific communication. We pose the questions below to structure that effort.

Determine primary indicators of effective T&P review practices across institutions and higher-education sectors internationally.

In order to identify successful models of T&P review, we need to understand which institutions rigorously engage in “best practices,” such as conducting “thick reviews” of a candidate’s dossier, limiting the number of publications that can be submitted in a dossier, or ignoring impact factors completely. Which rely too heavily on secondary indicators, such as impact factors?

One of our project’s more distinguished advisors from the biological sciences stressed the importance of senior scholars modeling and rewarding good practices among graduate students, postdocs, and pretenure faculty. How can we as a community of scholars institutionalize this advice?

The proliferation of international rankings presents challenges to researchers who are interested in how T&P practices vary across higher-education sectors and countries. These ranking schemes are being scrutinized for their effect on institutional missions (18). Do we know the actual costs (including social and opportunity costs) that teaching-intensive institutions incur by diverting academic labor from teaching to increasing research output, as measured primarily by publications and related impact factors?

How do research assessment exercises, league tables, and cash incentives to publish in high-impact journals affect the general quality and number of research publications?

Do top-down agendas from scholarly societies [e.g., (1)] or other nonuniversity entities encourage the adoption of good practices? These organizations could track their members’ publishing patterns over time to determine the relative effectiveness of the various factors influencing choices. Longitudinal and comparative ethnographies of carefully chosen universities could also provide a window into whether resolutions, declarations, research assessment exercises, and institutional aspirations to advance in league tables are positive or negative levers for substantial change.

What are the effects on the academic enterprise of having some of the largest bibliometrics services controlled by publishers such as Elsevier and Thomson Reuters (19)? Is the influence of these organizations on university ranking schemes hijacking universities’ move toward best practices?

Assess whether bibliometrics or other mechanisms can evolve to filter scholarship effectively, reliably, and in a way that cannot be easily gamed or abused.

Scientific impact is a multidimensional concept that cannot be adequately measured by any one indicator (20, 21). What relation should the use of new metrics have to the more desirable qualitative “thick” reviews in academic promotion, grant competitions, and university rankings?

Are hybrid quality-assessment models that combine traditional review and alt-metrics more effective, and less costly, than the current system? Where are they being applied effectively? Are there models of successful crowd-sourced or alt-metric “peer review” applied to large data sets, and can they be applied outside of those cases as measures of quality (and be recognized in T&P decisions)?

Investigate ways to finance high-quality publication models while preserving the important work of most scholarly societies.

The majority of our informants made clear that scholarly societies are the natural communities of peers in a discipline and have played an important role in managing peer review and quality on multiple levels. Publication subscriptions are a key element of their operating budgets.

Will gold OA policies, such as those recommended in the UK Finch report (22), and the rise of gold OA megajournals be a positive development for the Academy, or do they represent vanity publishing that shifts costs onto authors? Will articles in these outlets be weighted as heavily in T&P decisions as those in more traditional outlets, and on what basis?

What new or existing financial publishing models can fund the activities of scholarly societies, and in what disciplines, while also increasing access to published scholarship? In response to calls for open access to federally funded research, a number of societies are implementing gold OA journals [e.g., (23)]. It would be useful to survey authors in various higher-education sectors and disciplines on how their promotions were affected by their publication choices in such outlets.

Opt-out OA resolutions at a variety of top-ranked institutions implicitly or explicitly recommend that all kinds of scholarship be considered in T&P decisions. Harvard and MIT, the first movers in this space, might systematically track shifts in publishing behavior of their faculty, especially among younger scholars.

Do we know whether most readers want publications bursting with embedded data and linked commentary (with possibly exorbitant production costs) or smaller slices of curated scholarship, as represented in traditional publication formats? Surveys with good response rates and discipline-based ethnographic studies could help answer this and the other questions posed above.

Given this environment, why do we see so little shift in how scholars in the most competitive institutions, and aspirant institutions around the globe, disseminate their research at all stages? Our empirical research between 2005 and 2011 on cultural norms and disciplinary values in 12 disciplines (8, 9) revealed that individual imperatives for career self-interest, advancing the field, and receiving credit are often more powerful motivators in publishing decisions than the technological affordances of new media. There is an extraordinary reliance on peer-reviewed publications to aid T&P committees and external reviewers in the evaluation of scholarly work. Although we have heard many complaints about peer review, publication imprimatur—with its associated peer review—was seen as an important proxy for assessing scholarly impact and importance. This reliance flows from the exponential growth, and resulting compartmentalization, of knowledge across the Academy, and it has the unwelcome result that individuals within a faculty often can no longer practically review the work of their peers, or choose not to.

When compared with more ephemeral and lightly peer-reviewed “in-progress” research communications, archival peer-reviewed publications in established outlets have the heaviest weight in institutional evaluation, carry the highest prestige when making decisions about where to publish one’s own best research results, and are relied on as valuable quality filters for the proliferating mass of scholarly information available on the web.

Although the T&P process allows for disciplinary differences in type of scholarly product, a stellar record of high-impact publications continues to be the primary criterion for judging a successful scholar in the institutional peer-review process. Consequently, scholars choose outlets to publish their most important work based on three factors: (i) prestige (perceptions of rigor in peer review, selectivity, and “reputation”); (ii) relative speed to publication; and (iii) highest visibility within a target audience. When asked about more in-progress work—such as stand-alone cataloguing or curating, protocols, or ephemeral non-peer-reviewed publications—informants said these would be credited, but could not substitute for, nor be weighted as heavily as, peer-reviewed “interpretive” work, as reflected in well-crafted arguments.

Despite the reliance on imprimatur as a proxy for peer review, most scholars claim that a candidate’s advancement dossier should be read closely and carefully by peers at the scholar’s home institution. They also hold that the advancement process at their institution can and should be supportive of (and not prejudiced by) nontraditional publishing models, provided that peer review is strongly embedded and that there is evidence that the work is “ground-breaking” and “moves the field forward.”

Are committees seeing many examples that deviate from the established norms for a discipline, especially from young scholars? Not according to our informants, who represented all career stages. Evidence to suggest that “tech-savvy” young graduate students, postdoctoral scholars, and assistant professors are bypassing traditional publishing practices is not overwhelming. Young scholars are products of intense socialization into an increasingly competitive academic working environment, reflected in the remarkably consistent advice given to pretenure scholars across fields: focus on publishing in the right venues and avoid spending too much time on competing activities. One would expect them to mimic the norms of their discipline, and they do so by following the lead of their mentors, even those established scholars who themselves exercise more freedom in publication choices but counsel conservative behavior in pretenure scholars.

The pace of archival publication in peer-reviewed outlets is growing, not subsiding (10, 11). There is intense publication pressure on young scholars generally, as well as on scholars at aspirant institutions globally. Imperatives to rise in league tables (academic rankings), in combination with research assessment–type exercises that tie government funding to research output, are significant drivers of this growth (12). The practices of competitive research universities have trickled down to aspirant institutions worldwide and translate into a growing glut of publications and publication outlets, many of them low quality. This proliferation of outlets has placed a premium on distinguishing prestige outlets from those that are viewed as less stringently refereed, contradicting predictions that current models of publishing and of rating impact will be overturned in the foreseeable future.

How does this reality jibe with the hyperbole about Web 2.0 practices as transformative? We were particularly interested in questions of sharing research results. Although there was variation based on personality, our informants were clear that sharing early-stage research before it reaches a certain level of excellence is neither widespread nor desirable. Working papers from scholars in top-tier institutions are unheard of in highly competitive fields, such as chemistry or molecular biology, both characterized by large grant funding, the commercial potential of research, an already quick turnaround time to publication, a surfeit of publications and outlets, and an overload of (or risks associated with relying on) unvetted material. For astrophysicists and economists, outlets such as arXiv and the Social Science Research Network (SSRN), respectively, despite their utility as repositories for posting working papers and more ephemeral publications, do not displace formal archival publication, because working papers are not recognized as, and were not necessarily intended to be, equivalent currency in T&P evaluations [e.g., (13)].

It is questionable whether new forms of scholarship outside of the traditional peer-reviewed formats, including large data sets, will easily receive equivalent institutional credit. To receive such credit, these outputs must be peer reviewed and must be accompanied by an “interpretive” argument.

Who will conduct peer review of alternative forms of scholarship in an environment where publishing has become an “inflationary currency”? Most scholars expend near-continuous effort on peer review; the system is overloaded. As examples, two journals (14, 15) ceased publication of supplementary data because reviewers cannot spend the time necessary to review that material closely, and critical information on data or methods needed by readers can be lost in a giant, time-consuming “data dump.”

Will new forms of alt-metrics—which can “watch social media sites, newspapers, government policy documents and other sources for mentions of scholarly articles” (7)—provide reliable filters? It is too soon to tell how such tools might be valued in T&P and whether they can overcome the problem of being easily gamed. Could “open peer review,” in which commentary is openly solicited and shared by random readers, colleagues, and sometimes editor-invited reviewers, be a panacea? We are not aware of any compelling evidence that a critical mass of scholars is offering comments in open journal forums, such as PLoS publications, probably for most of the same reasons identified in 2006 (16). Scholars usually read something once to determine its usefulness; given a choice, the version they will want to read is the final, peer-reviewed one.

Research and Policy to Support Scholars

Although some might lament the conservative tendencies we describe, or offer only the promise of technical solutions to bring about cultural change, many of these tradition-driven behaviors serve important purposes in the life of scholarship. We should question whether jettisoning them wholesale could work against many of the values that most scholars agree are essential to good scholarship. Most scholars, however much they embrace the possibilities of new technologies in their research, are plagued by a lack of time; are in need of more, not fewer, filters and curated sources; and require a modicum of privacy and control over their in-progress research until it is ready for widespread public exposure. The drives to expose early ideas publicly, to rely on crowds for content creation and peer review, and to engage constantly with technology may not be conducive to the best scholarship.

How might we preserve the good and move away from some of the more negative parts of the current advancement system? Rather than demanding unrealistic publication output from their members and giving too much weight to impact factors and imprimatur, institutions across higher-education sectors globally should conduct thorough and context-appropriate internal peer review of their members’ work, at the center of which should be a close reading and evaluation of a scholar’s overall contributions to a field in the context of each institution’s primary mission (1, 17). Even though some institutions have adopted, or may be considering, policies that limit the number of publications that can be submitted in promotion cases, refuse to refer to a journal’s impact factor in assessing quality, or encourage members to publish less but more meaningfully, it is not clear beyond the anecdotal whether these policies have any significant influence on actual behaviors.

Rigorous empirical research (see Box 1), especially from the social sciences, could inform the development of good practices in the academic publishing environment, as well as counter a surplus of rhetoric concerning the future of peer review and scholarly communication in digital environments.

References and Notes

The core of research consisted of structured interviews with 160 faculty, administrators, librarians, and publishers across more than 45 “elite” research institutions, largely in North America (and some in Western Europe), in over 12 disciplines. These interviews covered a variety of topics, including tenure and promotion, sharing and publication, collaboration, data and resource use, and public engagement. Individuals were chosen through convenience sampling and a quota-informed system of snowball sampling to ensure that the informant pool represented a diversity of career stage and experience. The project resulted in multiple publications offering comprehensive descriptions and analyses across much of the scholarly communication spectrum, including an in-depth investigation of peer review. These publications are available at The Future of Scholarly Communication: http://cshe.berkeley.edu/research/scholarlycommunication/index.htm.

Acknowledgments: The work reported in this article was funded by the Andrew W. Mellon Foundation. D.H. thanks collaborators S. K. Acord and C. J. King for comments on an earlier draft, as well as other team members, advisors, and anonymous informants.