The Fox Guarding the Henhouse? Or, Why We Don’t Need Another Citation-Based Journal Quality Index

By: Ellen Finnie

Nature announced on December 8 that Elsevier has launched a new journal quality index, called CiteScore, which will be based on Elsevier’s Scopus citation database and will compete with the longstanding and influential Journal Impact Factor (IF).

Conflict of interest

One can hardly fault Elsevier for producing this metric, which is well positioned to compete with the Impact Factor. But for researchers and librarians, CiteScore raises serious concerns. Having a for-profit entity that is also a journal publisher in charge of a journal publication metric creates a conflict of interest, and is inherently problematic. The Eigenfactor team of Carl T. Bergstrom and Jevin West have done some early analysis of how Elsevier journals tend to rank under CiteScore versus the Impact Factor, and conclude that “Elsevier journals are getting just over a 25% boost relative to what we would expect given their Impact Factor scores.” Even setting aside Nature journals, which take quite a hit under CiteScore because of what Phil Davis refers to as CiteScore’s “overt biases against journals that publish a lot of front-matter,” Elsevier journals still get a 15% boost in comparison with the Impact Factor.
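
To make the front-matter bias concrete, here is a toy sketch with entirely hypothetical numbers. The Impact Factor divides citations by “citable items” (research articles and reviews), while CiteScore divides by all documents, front matter included, so a journal that publishes many editorials, news items, and letters scores lower under CiteScore even when the citations are identical. (The two metrics also use different citation windows, two years versus three, which the sketch ignores.)

```python
# Toy comparison of the two denominators; all numbers are hypothetical.
citations = 10_000      # citations received in the census year
citable_items = 1_000   # research articles and reviews
front_matter = 1_500    # editorials, news items, letters, etc.

# Impact Factor style: citations / citable items only.
if_style = citations / citable_items

# CiteScore style: citations / all documents, front matter included.
citescore_style = citations / (citable_items + front_matter)

print(f"IF-style ratio:        {if_style:.2f}")         # 10.00
print(f"CiteScore-style ratio: {citescore_style:.2f}")  # 4.00
```

A front-matter-heavy journal thus looks far weaker under CiteScore without any change in how often its research is actually cited.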

Perpetuating problems of journal prestige in promotion and tenure

But more broadly, the appearance of yet another measure of journal impact reinforces existing problems in the scholarly publishing market, where journal brand serves as a proxy for research quality and drives promotion and tenure decisions. Tying professional advancement, including grant awards, to publication in a small number of high-prestige publications contributes to monopoly power and to the resulting hyperinflation in the scholarly publishing market. Indeed, a large commercial journal publisher recently told me that a journal’s Impact Factor is a key consideration in setting the price increase for that title; it was the first reason mentioned to justify increases.

Let’s support an alternative

In an age when we can measure impact at the article level, journal quality measures should play a secondary role behind article-level quality measures. As Martin Fenner of PLoS notes, journal measures create “perverse incentives,” and “journal-based metrics are now considered a poor performance measure for individual articles.” While traditional tools perpetuate journal prestige, article-level metrics and alternative metrics look at the full impact of an article, including, for example, downloads; views; inclusion in reference managers and collaboration tools; recommendations (e.g., in Faculty of 1000); and social media sharing. As Fenner also reports, citation-based metrics take years to accumulate and don’t necessarily capture impact in fields with more pragmatic applications of research, such as clinical medicine. Alternative metrics engage with more recent technologies: a 2011 study showed that tweets correlate well with, and indeed predict, citation rates. These metrics can be applied to data sets and other scholarly outputs well beyond the article. Such alternative metrics provide something new: an ability to “measure the distinct concept of social impact,” which, in our era of climate change and global health and social problems, is arguably as important as measuring purely scholarly impact through citations alone.
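
For readers curious how accessible article-level measurement has become, here is a minimal sketch that pulls one such signal: the citation count Crossref reports for a DOI through its public REST API. The helper function name and the example DOI are my own choices for illustration; coverage of the count varies by field and publisher.

```python
# Minimal sketch: fetch an article-level citation count from Crossref's
# public REST API (https://api.crossref.org). The "is-referenced-by-count"
# field is Crossref's tally of works that cite the given DOI.
import requests

def crossref_citation_count(doi: str) -> int:
    """Return the number of works Crossref records as citing this DOI."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
    resp.raise_for_status()
    return resp.json()["message"]["is-referenced-by-count"]

if __name__ == "__main__":
    # Example DOI chosen arbitrarily for illustration.
    print(crossref_citation_count("10.1371/journal.pmed.0020124"))
```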

Scholars and librarians have choices about which measures they use when deciding where to publish and which journals to buy. Altmetric, EBSCO’s Plum Analytics, and the nonprofit Impactstory provide citation and impact analytics at the article level, and expand the notion of how to measure the impact of research. The team behind Impactstory, funded by the NSF and the Alfred P. Sloan Foundation, describes its aim this way: it is “helping to build a new scholarly reward system.” While there is convenience, and inertia, in the long-standing practice of using journal citation measures as the key vehicle for assessing journal quality, article-level and alternative metrics provide a needed complement to traditional citation analytics, and they support flexible, relevant, real-time approaches to evaluating the impact of research. Our dollars and our time would be well spent focusing on these innovations, and moving beyond journal citation-based quality measures as a proxy for article quality and impact.


This article was originally published on In the Open. Read the original article.

