A debate held at the annual Charleston Library Conference tackled the journal impact factor, with speakers examining the metric and weighing whether it does more harm than good. The debate was moderated by Rick Anderson, Associate Dean for Collections & Scholarly Communication, and argued by Sara Rouhi, director of business development at Altmetric, and Ann Beynon, manager at Clarivate Analytics.

A journal’s impact factor is a long-established metric intended to evaluate the relevance of a publication: it is calculated as the average number of times articles published in the journal over the prior two years were cited in a given year. However, the metric does not reflect journals whose articles continue to have impact long after that two-year window.
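In code form, the calculation is just a ratio. The sketch below is a minimal illustration; the journal figures in it are invented for the example, not real data:

```python
# Two-year journal impact factor: citations received in year Y
# to items published in years Y-1 and Y-2, divided by the number
# of citable items published in those two years.

def impact_factor(citations_to_prior_two_years: int,
                  citable_items_prior_two_years: int) -> float:
    return citations_to_prior_two_years / citable_items_prior_two_years

# Hypothetical journal: 1,200 citations in 2017 to articles published
# in 2015-2016, which comprised 400 citable items.
print(impact_factor(1200, 400))  # 3.0
```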

Opening polls showed that 54 percent of respondents believed the impact factor does more harm than good; by the end of the debate, that figure had grown to 57 percent. However, because the debate drew only a small number of attendees, the result is not statistically significant.

Read full transcripts here.

A journal’s impact factor looks at the number of citations within a particular year, but the significance of some research exceeds a one-year time frame. To highlight these papers, Google Scholar released its Classic Papers collection, which features highly cited papers that have stood the test of time.

“This release of classic papers consists of articles that were published in 2006 and is based on our index as it was in May 2017,” Sean Henderson, software engineer at Google Scholar, said in a release. “The list of classic papers includes articles that presented new research. It specifically excludes review articles, introductory articles, editorials, guidelines, commentaries, etc. It also excludes articles with fewer than 20 citations and, for now, is limited to articles written in English.”
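The criteria quoted above amount to a set of simple filters. As a rough sketch of how such a selection might be applied (the field names and records here are hypothetical, not Google Scholar’s actual schema or data):

```python
# Toy filter mirroring the stated Classic Papers criteria: original
# research only, at least 20 citations, English-language, published
# in 2006. All field names and records are invented for illustration.

EXCLUDED_TYPES = {"review", "introduction", "editorial", "guideline", "commentary"}

def is_classic_candidate(article: dict) -> bool:
    return (
        article["year"] == 2006
        and article["type"] not in EXCLUDED_TYPES
        and article["citations"] >= 20
        and article["language"] == "en"
    )

papers = [
    {"year": 2006, "type": "research", "citations": 250, "language": "en"},
    {"year": 2006, "type": "review", "citations": 900, "language": "en"},
]
print([is_classic_candidate(p) for p in papers])  # [True, False]
```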

In the category of electrochemistry, works by ECS members Gleb Yushin, Christopher Johnson, Yuri Gogotsi, and Bernard Tribollet made the list.

Additionally, Michael Graetzel’s 2006 paper published in the Journal of The Electrochemical Society (JES), “Highly Efficient Dye-Sensitized Solar Cells Based on Carbon Black Counter Electrodes,” claimed the number eight spot.

“A journal from a professional society like ECS will look at the value of the science as the value of the science and not necessarily what its pizzazz is at that particular time,” Robert Savinell, editor of JES, told ECS in a recent podcast. “I think that’s one of the reasons we have this 10 year impact factor that’s at the top of the list. We’re looking at quality of the science in the long term.”

By: Ellen Finnie

Nature announced on December 8 that Elsevier has launched a new journal quality index, called CiteScore, which will be based on Elsevier’s Scopus citation database and will compete with the longstanding and influential Journal Impact Factor (IF).

Conflict of interest

One can hardly fault Elsevier for producing this metric, which is well positioned to compete with the Impact Factor. But for researchers and librarians, CiteScore raises serious concerns. Having a for-profit entity that is also a journal publisher in charge of a journal publication metric creates a conflict of interest and is inherently problematic. The Eigenfactor team of Carl T. Bergstrom and Jevin West have done some early analysis of how Elsevier journals tend to rank via CiteScore versus the Impact Factor, and conclude that “Elsevier journals are getting just over a 25% boost relative to what we would expect given their Impact Factor scores.” Even excluding Nature journals, which take quite a hit under CiteScore because of what Phil Davis refers to as its “overt biases against journals that publish a lot of front-matter,” Elsevier journals still get a 15% boost in comparison with the Impact Factor.
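As described, the Bergstrom and West analysis compares each journal’s CiteScore against the value one would expect from its Impact Factor. A minimal sketch of that style of comparison, using invented numbers rather than the authors’ actual data or code, might look like this:

```python
# Sketch: regress CiteScore on Impact Factor across all journals,
# then ask whether one publisher's titles sit above the fitted line.
# The numbers below are invented; this is not Bergstrom and West's code.
import numpy as np

impact_factor = np.array([2.1, 3.4, 5.0, 1.2, 4.3, 2.8])
citescore     = np.array([2.5, 3.1, 4.6, 1.0, 5.9, 3.9])
is_elsevier   = np.array([False, False, False, False, True, True])

# Fit a line predicting CiteScore from Impact Factor over all journals.
slope, intercept = np.polyfit(impact_factor, citescore, 1)
expected = slope * impact_factor + intercept

# "Boost": how far one publisher's titles sit above that expectation.
boost = citescore[is_elsevier] / expected[is_elsevier] - 1
print(f"mean boost over expectation: {boost.mean():.0%}")
```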

Perpetuating problems of journal prestige in promotion and tenure

But more broadly, the appearance of another measure of journal impact reinforces existing problems with the scholarly publishing market, where journal brand as a proxy for research quality drives promotion and tenure decisions. This tying of professional advancement, including grant awards, to publication in a small number of high prestige publications contributes to monopoly power and resulting hyperinflation in the scholarly publishing market. Indeed, I was recently informed by a large commercial journal publisher that a journal’s Impact Factor is a key consideration in setting the price increase for that title—and was the first reason mentioned to justify increases.


A recently published article in Science discusses findings from a study of the Thomson Reuters Journal Impact Factor (JIF).

The study concluded that “the [JIF] citation distributions are so skewed that up to 75% of the articles in any given journal had lower citation counts than the journal’s average number.”

The impact factor, which authors and institutions have used as a measurement tool to help decide everything from tenure to the allocation of grant dollars, has come under much criticism in the past few years. One problem associated with impact factors, as discussed in the Science article, is how the number is calculated and how easily it can be misrepresented.

Essentially, the impact factor of a journal is the average number of times its articles were cited over the past two years. However, this average becomes skewed when a very small handful of papers draws huge citation numbers while the majority of published papers receive few or no citations. The study argues that, because of this, the impact factor is not necessarily a reliable predictor of citations.
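That skew is easy to reproduce with toy numbers. In the sketch below the citation counts are invented, but they show how one heavily cited paper pulls the mean well above what a typical paper receives:

```python
# Sketch: a handful of highly cited papers pulls the mean far above
# the typical paper, so most articles fall below the journal "average".
# The citation counts below are invented for illustration.
import statistics

citations = [0, 0, 1, 1, 2, 2, 3, 4, 5, 120]  # one blockbuster paper

mean = statistics.mean(citations)      # 13.8
median = statistics.median(citations)  # 2.0
below_mean = sum(c < mean for c in citations) / len(citations)

print(f"mean={mean}, median={median}, {below_mean:.0%} of papers below the mean")
```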

The second problem discussed in the study is the lack of transparency associated with the calculation methods deployed by Thomson Reuters.

But no matter what happens with the JIF, the true problem isn’t with the JIF itself. As David Smith, an academic publishing expert, says in the article, it’s “the way we think about scholarly progress that needs work. Efforts and activity in open science can lead the way in the work.”

Learn more about ECS’s commitment to open access and the Society’s Free the Science initiative: a business-model-changing effort that will make our research freely available to all readers, while remaining free for authors to publish.

UPDATE: Thomson Reuters announced on July 11 in a press release that the company will sell its Intellectual Property & Science business to Onex and Baring Asia for $3.55 billion. Learn more about this development.

Disingenuous Scientometrics

The following is an article from the latest issue of Interface by co-editor Vijay Ramani.

The precise definition of the “impact” of a research product (e.g., a publication) varies significantly among disciplines, and even among individuals within a given discipline. While some may regard scholarly impact as paramount, others may emphasize economic impact, broad societal impact, or some combination thereof. Given that the timeframe across which said impact is assessed can also vary substantially, it is safe to say that no formula exists that will yield a standardized and reproducible measure. The difficulties inherent in truly assessing research impact appear to be matched only by the convenience of the numerous flawed metrics currently in vogue among those doing the assessing.

Needless to say, many of these metrics are used outside the context for which they were originally developed. In using these measures, we are essentially sacrificing rigor and accuracy in favor of convenience (alas, a tradeoff that far too many in the community are willing to make!).
