Impact Factors of Supply Chain Management Journals
Note: The following text refers to a previous version of the JCR impact factors. A more recent version is the 2018 JCR impact factors (see there).
A few days ago, Thomson Reuters published the 2015 impact factors of well-known management journals as part of its Journal Citation Reports. For SCM-related journals, the results are:
- Impact factor of 4 or higher: Journal of Supply Chain Management and Journal of Operations Management.
- Between 2.5 and 3: Supply Chain Management: An International Journal and Journal of Purchasing & Supply Management.
- Between 2 and 2.5: Journal of Business Logistics, Transportation Research Part E, International Journal of Operations & Production Management, and International Journal of Physical Distribution & Logistics Management.
- Between 1.5 and 2: Manufacturing & Service Operations Management and Production and Operations Management.
- Between 1 and 1.5: Decision Sciences, among others.
- Below 1: International Journal of Logistics: Research & Applications, International Journal of Logistics Management, and Interfaces.
However, keep in mind that journal rankings have downsides and should not be the dominant criterion for judging the value of our research. The Financial Times recently decided to include M&SOM rather than JSCM on the FT journal list, which suggests that the list is not a reliable basis for SCM faculty decisions. Qualitative rankings such as VHB-JOURQUAL can be a good supplement to quantitative impact factors.
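For readers unfamiliar with how these numbers come about: the two-year JCR impact factor is a simple ratio. A minimal sketch, using made-up figures since no actual citation data appear in this post:

```python
def impact_factor(citations, citable_items):
    """Two-year impact factor: citations received in year Y to items
    published in years Y-1 and Y-2, divided by the number of citable
    items the journal published in Y-1 and Y-2."""
    return citations / citable_items

# Hypothetical example: 240 citations in 2015 to articles from
# 2013-2014, across 60 citable items published in 2013-2014.
print(impact_factor(240, 60))  # 4.0
```

Note that the result is an average over all of a journal's recent articles, which is exactly the limitation discussed in the comments below.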
7 responses to “Impact Factors of Supply Chain Management Journals”
Thank you Professor for the update. Given the increase in the number of researchers in the area of supply chain management, this information is very useful.
Andreas, thank you for the very interesting update.
I agree with you that a (quantitative) evaluation of our work based on the list of our papers, weighted by the impact factors of the journals that published them, and perhaps also by the number of citations they receive, can provide only a partial view of our work’s value.
Yet, this is strictly necessary as a filtering criterion, in order to drop from the list those researchers who do not meet minimal publication standards.
Once you have a (short)list of researchers that fulfil those standards, you should look inside the publication list, and consider more qualitative and inherent criteria, such as:
– how well each person’s topics fit with the position that you want to fill
– which kind of research each candidate focuses on (e.g. quantitative, empirical, model-based, etc.)
– what each candidate’s role was within each publication (main investigator / co-worker, …)
Impact factors might measure the quality of an AVERAGE paper published in a journal. But do they reliably measure the quality of an INDIVIDUAL paper published in that journal? I believe the answer is no. If a journal published 30 articles last year, the contribution of an individual paper is 1/30, and we all know that quality can differ quite substantially from paper to paper.

Moreover, are impact factors really valid measures of quality? They measure how often an article has been cited, and I am not sure that citation counts are a good proxy for quality. More importantly, should we create incentives to publish in journals where everybody wants to publish? Paradigm shifts often start in low-ranked journals! Most importantly, CAN academic quality be measured at all, just like the “percentage of quality defects in a t-shirt plant”? Often we will know only after 20 years or so whether an idea was great and successful.

A selection committee should always look at the overall picture to assess whether a candidate fits well, and do so in a qualitative way. I would strongly oppose attempts to introduce a simplistic system based on a “list of our papers, weighted by the impact factors”. Academia deserves a system based on trust rather than control. Yes, there will be a few free-riders, but the people I regularly meet in academia possess an inner drive that makes control unnecessary.