2017 Key Metrics for ECS Journals


The journal impact factors (JIFs) for the ECS journals continue to grow, as evidenced by data recently released by Clarivate Analytics. For the 2017 reporting year, the ECS journals remain among the top-ranked titles in their categories: Journal of The Electrochemical Society (JES) is in the top two for Materials Science, Coatings, and Films, and in the top ten for Electrochemistry. The JIFs are published in Journal Citation Reports (JCR) and are just one of several metrics used to gauge the quality of scholarly journals.


A debate held at the annual Charleston Library Conference tackled the journal impact factor, with speakers examining the metric and asking whether it does more harm than good. The debate was moderated by Rick Anderson, Associate Dean for Collections & Scholarly Communication; and argued by Sara Rouhi, director of business development at Altmetric, and Ann Beynon, manager at Clarivate Analytics.

A journal’s impact factor is a long-established metric intended to gauge the relevance of a publication: it is the average number of times the articles a journal published in the previous two years were cited in the current year. However, the metric does not reflect journals whose articles continue to have impact long after that two-year window.
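To make that definition concrete, the sketch below walks through the standard two-year calculation: citations received in the reporting year to items published in the prior two years, divided by the number of citable items published in those years. The counts are made up for illustration only.

```python
# Minimal sketch of the standard two-year impact factor calculation.
# The citation and article counts below are hypothetical, for illustration only.

citations_in_2017_to_2015_2016_items = 12_000  # citations received in the reporting year
citable_items_2015_2016 = 4_000                # articles and reviews published in the prior two years

impact_factor = citations_in_2017_to_2015_2016_items / citable_items_2015_2016
print(f"2017 impact factor: {impact_factor:.1f}")  # -> 3.0
```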

Opening polls showed that 54 percent of respondents believed the impact factor does more harm than good. By the end of the debate, that number had grown to 57 percent. Because the debate drew only a small number of attendees, however, the shift is not statistically significant.

Read full transcripts here.

By: Ellen Finnie

Nature announced on December 8 that Elsevier has launched a new journal quality index, called CiteScore, which will be based on Elsevier’s Scopus citation database and will compete with the longstanding and influential Journal Impact Factor (IF).

Conflict of interest

One can hardly fault Elsevier for producing this metric, which is well positioned to compete with the Impact Factor. But for researchers and librarians, CiteScore raises serious concerns. Having a for-profit entity that is also a journal publisher in charge of a journal publication metric creates a conflict of interest and is inherently problematic. The Eigenfactor team of Carl T. Bergstrom and Jevin West have done some early analysis of how Elsevier journals tend to rank under CiteScore versus the Impact Factor, and conclude that “Elsevier journals are getting just over a 25% boost relative to what we would expect given their Impact Factor scores.” Even setting aside Nature journals, which take quite a hit under CiteScore because of what Phil Davis refers to as CiteScore’s “overt biases against journals that publish a lot of front-matter,” Elsevier journals still get a boost (15%) in comparison with the Impact Factor.
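The “boost” Bergstrom and West describe can be estimated by comparing each journal’s actual CiteScore with the value its Impact Factor would predict. The sketch below shows one simple way to do that with invented numbers; the journal data, the publisher labels, and the plain linear fit are illustrative assumptions, not the Eigenfactor team’s actual method.

```python
import numpy as np

# Hypothetical (Impact Factor, CiteScore, publisher) records; a real analysis
# would use the full JCR and Scopus journal lists.
journals = [
    (2.1, 2.3, "A"), (4.5, 4.4, "B"), (1.2, 1.1, "A"),
    (3.3, 4.1, "E"), (2.8, 3.6, "E"), (5.0, 5.2, "B"),
]

ifs = np.array([j[0] for j in journals])
cs = np.array([j[1] for j in journals])

# Expected CiteScore given Impact Factor, from a least-squares fit across all journals.
slope, intercept = np.polyfit(ifs, cs, 1)
expected = slope * ifs + intercept

# Average boost (actual vs. expected CiteScore) for one publisher's titles.
mask = np.array([j[2] == "E" for j in journals])
boost = (cs[mask] / expected[mask]).mean() - 1.0
print(f"Publisher E boost over expectation: {boost:+.0%}")
```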

Perpetuating problems of journal prestige in promotion and tenure

But more broadly, the appearance of another measure of journal impact reinforces existing problems in the scholarly publishing market, where journal brand serves as a proxy for research quality and drives promotion and tenure decisions. Tying professional advancement, including grant awards, to publication in a small number of high-prestige publications contributes to monopoly power and the resulting hyperinflation in the scholarly publishing market. Indeed, I was recently informed by a large commercial journal publisher that a journal’s Impact Factor is a key consideration in setting the price increase for that title, and it was the first reason mentioned to justify increases.


A recently published article in Science discusses findings from a study of the Thomson Reuters Journal Impact Factor (JIF).

The study concluded that “the [JIF] citation distributions are so skewed that up to 75% of the articles in any given journal had lower citation counts than the journal’s average number.”

The impact factor, which has been used as a measurement tool by authors and institutions to help decide everything from tenure to the allocation of grant dollars, has come under much criticism in the past few years. One problem associated with impact factors, as discussed in the Science article, is how the number is calculated and how easily it can be misrepresented.

Essentially, the impact factor of a journal is the average number of times its articles were cited over the past two years. This number becomes skewed when a small handful of papers attract huge citation counts while the majority of published papers receive few or no citations. The study argues that, because of this, the impact factor is not a reliable predictor of how often any given article will be cited.
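To see why the average is a poor summary here, consider a small, made-up set of citation counts in which one highly cited paper dominates. The mean, which is what the impact factor reports, lands well above what most articles in the set actually receive; the numbers below are hypothetical and chosen only to illustrate the skew.

```python
import statistics

# Hypothetical citation counts for one journal's articles over two years:
# a few highly cited papers pull the mean well above the typical article.
citations = [0, 0, 1, 1, 2, 2, 3, 3, 4, 120]

mean = statistics.mean(citations)      # what an impact-factor-style average reports
median = statistics.median(citations)  # what a "typical" article actually receives
below_mean = sum(c < mean for c in citations) / len(citations)

print(f"mean (IF-style average): {mean:.1f}")
print(f"median citations:        {median:.1f}")
print(f"share of articles cited less than the mean: {below_mean:.0%}")
```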

The second problem discussed in the study is the lack of transparency associated with the calculation methods deployed by Thomson Reuters.

But no matter what happens with the JIF, the true problem, as academic publishing expert David Smith says in the article, isn’t the JIF itself; it’s “the way we think about scholarly progress that needs work. Efforts and activity in open science can lead the way in the work.”

Learn more about ECS’s commitment to open access and the Society’s Free the Science initiative: a business-model-changing initiative that will make our research freely available to all readers while remaining free for authors to publish.

UPDATE: Thomson Reuters announced on July 11 in a press release that the company will sell its Intellectual Property & Science business to Onex and Baring Asia for $3.55 billion. Learn more about this development.

Disingenuous Scientometrics

The following is an article from the latest issue of Interface by co-editor Vijay Ramani.

The precise definition of the “impact” of a research product (e.g., a publication) varies significantly among disciplines, and even among individuals within a given discipline. While some may regard scholarly impact as paramount, others may emphasize economic impact, broad societal impact, or some combination thereof. Given that the timeframe over which said impact is assessed can also vary substantially, it is safe to say that no formula exists that will yield a standardized and reproducible measure. The difficulties inherent in truly assessing research impact appear to be matched only by the convenience of the numerous flawed metrics currently in vogue among those doing the assessing.

Needless to say, many of these metrics are used outside the context for which they were originally developed. In using these measures, we are essentially sacrificing rigor and accuracy in favor of convenience (alas, a tradeoff that far too many in the community are willing to make!).
