There has been a recent trend in several management disciplines, including supply chain management, to create knowledge by systematically reviewing the available literature. Until now, however, our discipline has lacked a “gold standard” to guide researchers in this endeavor. The Journal of Supply Chain Management has now published our new article, Durach, Kembro & Wieland (2017): A New Paradigm for Systematic Literature Reviews in Supply Chain Management. Our systematic literature review process follows six steps: (1) develop an initial theoretical framework; (2) develop criteria for determining whether a publication can provide information regarding this framework; (3) identify literature through structured and rigorous searches; (4) conduct a theoretically driven selection of the literature, including a relevance test; (5) develop two data extraction structures, integrate the data to refine the theoretical framework, and develop narrative propositions; and (6) explain the refined framework and compare it to the initial assumptions. We believe that these best-practice guidelines, although developed for the SCM discipline, can also serve as a blueprint for adjacent management disciplines.
Durach, C.F., Kembro, J. & Wieland, A. (2017). A New Paradigm for Systematic Literature Reviews in Supply Chain Management. Journal of Supply Chain Management, 53 (4), 67-85. DOI: 10.1111/jscm.12145
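To make step (4) a little more concrete, here is a minimal sketch of how a relevance test over search results might be operationalized. The records, the inclusion criteria, and the function name are invented for illustration; the article itself prescribes criteria derived from the initial theoretical framework, not these particular ones.

```python
# Hypothetical illustration of steps (3)-(4): records returned by a
# structured search are screened against explicit inclusion criteria
# (a "relevance test"). All records and criteria are invented.

records = [
    {"title": "Supply chain resilience study", "peer_reviewed": True, "year": 2014},
    {"title": "Trade magazine note", "peer_reviewed": False, "year": 2016},
    {"title": "Early conceptual piece", "peer_reviewed": True, "year": 1995},
]

def relevance_test(record, min_year=2000):
    """Example inclusion criteria; in practice these must be derived
    from the initial theoretical framework (steps 1-2)."""
    return record["peer_reviewed"] and record["year"] >= min_year

selected = [r for r in records if relevance_test(r)]
print(len(selected))  # 1
```

The point of making the criteria explicit and executable is that the selection becomes transparent and repeatable, which is exactly what distinguishes a systematic review from an ad hoc one.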
I have used Fisher’s (1997) supply chain–product match/mismatch framework (What Is the Right Supply Chain for Your Product?) in my teaching for years! In it, the author argues that functional products require a physically efficient supply chain strategy, whereas innovative products require a market-responsive supply chain strategy. Fisher’s framework finds empirical support: Wagner et al. (2012) demonstrate that “the higher the supply chain fit, the higher the Return on Assets (ROA) of the firm”. Interestingly, a majority of the firms in their sample exhibit a negative misfit, i.e., they target high responsiveness for their supply chain although their products are functional. Extensions of the framework exist, for example by Lee (2002), who adds a “supply” dimension, and more recently Gligor (2017), who argues that “benefits generated by perfect supply chain fit might be offset by the resources deployed to achieve that fit”. Research presented by Perez-Franco et al. (2016) helps to “capture, evaluate and re-formulate the supply chain strategy of a business unit”.
Fisher, M.L. (1997). What Is the Right Supply Chain for Your Product? Harvard Business Review, 75 (2), 105-116.
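Fisher’s 2×2 logic can be written down as a simple decision rule. The function below is my own sketch, not Fisher’s; the labels follow the discussion above, and the “negative misfit” case is the one Wagner et al. (2012) found to be most common (a responsive supply chain serving functional products).

```python
# Illustrative encoding of Fisher's (1997) match/mismatch framework.
# The function and its return labels are my own sketch for teaching.

def supply_chain_fit(product_type, strategy):
    """Classify a product/supply-chain combination.

    product_type: 'functional' or 'innovative'
    strategy:     'efficient' or 'responsive'
    """
    match = {"functional": "efficient", "innovative": "responsive"}
    if strategy == match[product_type]:
        return "fit"
    # Negative misfit (cf. Wagner et al., 2012): targeting high
    # responsiveness although the product is functional.
    if product_type == "functional" and strategy == "responsive":
        return "negative misfit"
    return "positive misfit"

print(supply_chain_fit("functional", "responsive"))  # negative misfit
```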
A colleague recently recommended the following article to me: Mansfield (2003): Spatializing Globalization: A “Geography of Quality” in the Seafood Industry. In it, the author examines product quality in that industry. She challenges “recent perspectives that define quality as an alternative to global, industrial forms of production” and “finds that quality is also important for industrial food production and for the global geography of the surimi [a fish paste] seafood industry”. In general, the author takes an interpretive approach – an approach that is almost absent from SCM research and that might be inspirational for our otherwise empiricist discipline. In particular, she employs actor–network theory, which proposes that reality does not exist by nature but is instead constructed through socio-material networks. SCM researchers could learn from this type of research that (1) theory can be mobilized in many different creative ways; (2) technical supply chain issues are embedded in larger socio-political arrangements; (3) geography might inform SCM (theoretically as well as materially); and (4) “quality”, like other concepts, does not exist by nature but is stabilized through networks.
There seems to be a lot of confusion about what theory is. At least, this is a recurring question I get from students. Let us first discuss what theory is not: Sutton & Staw (1995) argue that “references, data, variables, diagrams, and hypotheses are not theory” and “explain how each of these five elements can be confused with theory” (p. 371). But we should also be aware of the difference between facts and theory! In his essay, which is part of a collection of six essays, Pagell (in: Boer et al., 2015) paints the picture of an ideal research world in which “most research will be building or testing facts, not theory”, while “theory building and testing [will be left] to a much smaller group of papers, where the theoretical argument would be critical” (p. 1244). So, what is theory? A definition I like comes from Suddaby (2015): “[T]heory is simply a way of imposing conceptual order on the empirical complexity of the phenomenal world” (p. 1).
“Scale purification” – the process of eliminating items from multi-item scales – is widespread in empirical research, but studies that critically examine the implications of this process are scarce. In our new article, titled Statistical and Judgmental Criteria for Scale Purification, we (1) discuss the methodological underpinning of scale purification, (2) critically analyze the current state of scale purification in supply chain management (SCM) research, and (3) provide suggestions for advancing the scale purification process. Our research highlights the need for rigorous scale purification decisions based on both statistical and judgmental criteria. We suggest several methodological improvements. In particular, we present a framework to demonstrate that the justification for scale purification needs to be driven by reliability, validity and parsimony considerations, and that this justification needs to be based on both statistical and judgmental criteria. We believe that our framework and additional suggestions will help advance knowledge about scale purification in SCM and adjacent disciplines.
Wieland, A., Durach, C.F., Kembro, J. & Treiblmaier, H. (2017). Statistical and Judgmental Criteria for Scale Purification. Supply Chain Management: An International Journal, 22 (4). DOI: 10.1108/SCM-07-2016-0230
My guest post today comes from Alan McKinnon, who for several years has been raising concerns about the academic obsession with journal rankings and the low rating of logistics/SCM journals. He has just published a new paper updating his earlier arguments.
In a paper that I wrote five years ago, I argued that the development of logistics/supply chain management (SCM) as a discipline was being impaired by the relatively low ranking of the specialist journals in this field. I was surprised and heartened by the favourable response I received both from logistics/SCM researchers and from academics in other disciplines experiencing a similar problem. I have now returned to the journal ranking debate with a sequel to my original article, which reviews recent literature on the subject, analyses new data on the validity of journal rankings as an indicator of research quality and discusses the recalibration of logistics/SCM journals since 2010/11. The literature challenging the principle, practice and application of journal ranking has been steadily expanding and becoming more critical. Regrettably, this is not deterring university managers from basing many recruitment, promotion and resource allocation decisions on the rating of journals. Data generated by the UK government’s assessment of university research (REF) have confirmed that, in the field of business and management, journal rank is an unreliable predictor of the quality and impact of an individual journal paper. In this analysis, papers published in lower-ranked journals tended to be under-valued, a finding of particular relevance to logistics/SCM journals, as they tend to be on the second or third tiers of the major journal lists. Since 2010/11, there has been some overall improvement in the relative standing of these journals, though a couple have been downgraded in the widely used ABS list. Fortunately, the backlash against journal rank “fetishism” has begun, with bottom-up campaigns such as DORA and top-down, government-led initiatives in countries such as the UK and Australia aiming to make research assessment fairer, more transparent and more rigorous.
Alan McKinnon is Professor of Logistics at Kühne Logistics University, Hamburg, and Professor Emeritus at Heriot-Watt University, Edinburgh. You can find out more about his research and publications at www.alanmckinnon.co.uk and follow him on Twitter @alancmckinnon.
McKinnon, A.C. (2017). Starry-eyed II: The Logistics Journal Ranking Debate Revisited. International Journal of Physical Distribution & Logistics Management, 47 (6). DOI: 10.1108/IJPDLM-02-2017-0097
It is common research practice in our field to build a statistical model with a limited set of variables in order to apply the lens of a theory – often one alien to our field – to a supply chain phenomenon, and to test this model on a sample of maybe 200 observations. Other researchers collect data from three or four case companies to build or extend a research model comprising a small set of propositions. So far so good. “So far so outdated”, I should say if I were to be malicious. Why? Researchers in fields like supply chain management might soon (or already?) be competing with “companies like Google, which have grown up in an era of massively abundant data, [that] don’t have to settle for wrong models”, as the editor in chief of Wired put it back in 2008, proclaiming The End of Theory. So, is the data deluge about to make our research obsolete? If so, how should our community adapt to this new reality?