2017 Key Metrics for ECS Journals


The journal impact factors (JIFs) for the ECS journals continue to grow, as evidenced by the data recently released by Clarivate Analytics. For the 2017 reporting year, the ECS journals remain among the top-ranked journals: the Journal of The Electrochemical Society (JES) is in the top two for Materials Science, Coatings & Films, and in the top ten for Electrochemistry. The JIFs are published in Journal Citation Reports (JCR) and are just one metric used to gauge the quality of a large number of scholarly journals.


ECS Journal Impact Factors Rise 8%

The journal impact factors (JIFs) for 2016 have been released, and ECS is pleased to announce that the JIFs for the Journal of The Electrochemical Society (JES) and the ECS Journal of Solid State Science and Technology (JSS) have both risen by 8%.

The JIFs, published in Journal Citation Reports (formerly a Thomson Reuters product, now published by Clarivate Analytics), are a long-established metric intended to evaluate the relevancy and importance of journals. A journal’s JIF is the average number of citations received in the reporting year by the articles it published over the prior two years.
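The calculation described above can be sketched in a few lines. This is an illustrative simplification with hypothetical numbers, not Clarivate’s actual methodology or data:

```python
# Illustrative sketch of the JIF calculation (hypothetical inputs).
def impact_factor(citations_to_prior_two_years, citable_items_prior_two_years):
    """Citations received in the reporting year to articles published
    in the previous two years, divided by the number of citable items
    published in those two years."""
    return citations_to_prior_two_years / citable_items_prior_two_years

# Hypothetical example: 6,518 citations to 2,000 citable items
# would yield a JIF of 3.259.
print(round(impact_factor(6518, 2000), 3))
```

The numbers here are invented purely to show the arithmetic; the actual citation and article counts behind any journal’s JIF are compiled by Clarivate Analytics.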

From 2015 to 2016, the JIF of JES increased from 3.014 to 3.259, and the JIF of JSS climbed from 1.650 to 1.787. These increases mark a continuing trend of growth for both journals.


A recently published article in Science discusses findings from a study done on the Thomson Reuters Journal Impact Factor (JIF).

The study concluded that “the [JIF] citation distributions are so skewed that up to 75% of the articles in any given journal had lower citation counts than the journal’s average number.”

The impact factor, which has been used as a measurement tool by authors and institutions to help decide everything from tenure to the allocation of grant dollars, has come under much criticism in the past few years. One problem associated with impact factors, as discussed in the Science article, is how the number is calculated and how easily it can be misrepresented.

Essentially, the impact factor of a journal is the average number of times its articles were cited over the previous two years. However, this number becomes skewed when a very small handful of papers receive huge citation counts while the majority of published papers get few or no citations. The study argues that, because of this, the impact factor is not necessarily a reliable predictive measure of citations.
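The skew described above is easy to see with made-up numbers: a single highly cited paper can pull the average far above what most articles in the journal actually receive. The citation counts below are invented for illustration only:

```python
# Hypothetical citation counts for ten papers; one outlier dominates.
citations = [0, 0, 1, 1, 2, 2, 3, 3, 4, 120]

mean = sum(citations) / len(citations)  # the mean is what a JIF-style average reports
below_mean = sum(1 for c in citations if c < mean)

print(mean)                             # 13.6
print(below_mean / len(citations))      # 0.9 -> 90% of papers cited less than the average
```

Here nine of the ten papers sit below the journal-wide average, which mirrors the study’s finding that up to 75% of articles in a given journal fall below the journal’s average citation count.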

The second problem discussed in the study is the lack of transparency associated with the calculation methods deployed by Thomson Reuters.

But no matter what happens with the JIF, as academic publishing expert David Smith says in the article, the true problem isn’t with the JIF itself; it’s “the way we think about scholarly progress that needs work. Efforts and activity in open science can lead the way in the work.”

Learn more about ECS’s commitment to open access and the Society’s Free the Science initiative: a business-model-changing initiative that will make our research freely available to all readers, while remaining free for authors to publish.

UPDATE: Thomson Reuters announced on July 11 in a press release that the company will sell its Intellectual Property & Science business to Onex and Baring Asia for $3.55 billion. Learn more about this development.

Disingenuous Scientometrics

The following is an article from the latest issue of Interface by co-editor Vijay Ramani.

The precise definition of the “impact” of a research product (e.g., a publication) varies significantly among disciplines, and even among individuals within a given discipline. While some may regard scholarly impact as paramount, others may emphasize economic impact, broad societal impact, or some combination thereof. Given that the timeframe over which said impact is assessed can also vary substantially, it is safe to say that no formula exists that will yield a standardized and reproducible measure. The difficulties inherent in truly assessing research impact appear to be matched only by the convenience of the numerous flawed metrics currently in vogue among those doing the assessing.

Needless to say, many of these metrics are used outside the context for which they were originally developed. In using these measures, we are essentially sacrificing rigor and accuracy in favor of convenience (alas, a tradeoff that far too many in the community are willing to make!).