Something that is long established in other management disciplines but sadly almost completely neglected in the SCM discipline is research related to sensemaking. In short, sensemaking “involves turning circumstances into a situation that is comprehended explicitly in words and that serves as a springboard into action” (Weick et al., 2005, p. 409). Such research is concerned with subjective interpretations rather than objective truth and is therefore better suited to the study of social science phenomena than much of the positivist research we see in contemporary SCM research. Sensemaking is closely associated with Karl E. Weick and his way of analyzing phenomena. Among Weick’s most famous studies is The Collapse of Sensemaking in Organizations: The Mann Gulch Disaster (1993), which could serve as a blueprint for analyzing SCM phenomena. Anyone considering a sensemaking study should read the book Sensemaking in Organizations (Weick, 1995). The article Organizing and the Process of Sensemaking (Weick et al., 2005) gives a very good overview of sensemaking.
Weick, K.E. (1993). The Collapse of Sensemaking in Organizations: The Mann Gulch Disaster. Administrative Science Quarterly, 38(4), 628–652. https://doi.org/10.2307/2393339
Weick, K.E. (1995). Sensemaking in Organizations. SAGE. ISBN 080397177X
Weick, K.E., Sutcliffe, K.M., & Obstfeld, D. (2005). Organizing and the Process of Sensemaking. Organization Science, 16(4), 409–421. https://doi.org/10.1287/orsc.1050.0133
Experiments have exerted a growing methodological influence on the SCM discipline in recent years. In their recently published article on this subject, entitled Experiments in Strategy Research: A Critical Review and Future Research Opportunities, Bolinger et al. (2022) examine and categorize experiments by “[identifying] topic areas in which experiments have been effectively deployed as well as several literature streams that have a limited amount of prior experimental research.” The authors also discuss challenges in using experiments, including the level of analysis. SCM researchers should pay particular attention to this aspect, as many of the phenomena they study are not located at the firm level, as in strategy research, but at the supply chain level. The authors argue that their work “documents experimental research and provides a methodological practicum, thereby offering a platform for future experiment-based research in strategic management”. Although the authors review extant experimental work in strategic management, their results are certainly also very useful for SCM researchers.
Bolinger, M. T., Josefy, M. A., Stevenson, R., & Hitt, M. A. (2022). Experiments in Strategy Research: A Critical Review and Future Research Opportunities. Journal of Management, 48(1), 77–113. https://doi.org/10.1177/01492063211044416
The paragraph is probably the most important unit of a well-written academic text. It follows a specific structure and set of standards that make it effective and enjoyable to read. This video demonstrates how to construct good paragraphs and how to improve the clarity and flow of your writing.
There are different types of case-based research methods that differ considerably in their basic assumptions and objectives. An example of such a method is the multi-case theory-building approach, which is based on the work of Kathleen M. Eisenhardt. Her 1989 article, which laid the foundation for this method, has been cited tens of thousands of times to date. Unfortunately, there are countless misconceptions about the method in terms of types of data, number of cases, and performance emphasis. The method is also often overinterpreted as a rigid template, although it was never intended to be such a template. In a new article entitled What Is the Eisenhardt Method, Really?, Eisenhardt now puts her method in a new light and argues that the method’s relatively few defining features enable a wide variety of research possibilities. It should be clear that this new article is important reading for anyone who wants to do research with Eisenhardt’s method and for anyone whose work aims at theory building.
Eisenhardt, K.M. (2021). What Is the Eisenhardt Method, Really? Strategic Organization, 19(1), 147–160. https://doi.org/10.1177/1476127020982866
What distinguishes a good paper? The idea should be creative, the methodological approach should be flawless, and there should be a theoretical contribution. Sure. However, good communication with the reader is at least as important as all of the rest. Unfortunately, I have often reviewed manuscripts that contain interesting theories, data, and results, but are simply not well written. As academics we are often busy, but there is one thing we all should do: read a book about academic writing. The reading time is well invested. I have two book recommendations. The first is Natalie Reid (2018), Getting Published in International Journals: Writing Strategies for European Social Scientists. An academic friend of mine once wrote on LinkedIn that this was the best book he had ever read. And it is really good. My favorite chapters deal with “paragraphing” and “constructing an argument, sentence by sentence”. My second recommendation is aimed at German-speaking academics: Gerlinde Mautner (2019), Wissenschaftliches Englisch: Stilsicher Schreiben in Studium und Wissenschaft. This is one of the best books I have ever read.
Reid, N. (2018). Getting Published in International Journals: Writing Strategies for European Social Scientists. Revised Edition. ISBN 0692929959
Mautner, G. (2019). Wissenschaftliches Englisch: Stilsicher Schreiben in Studium und Wissenschaft. 3rd Edition. ISBN 3825252191
The following tool was brought to my attention the other day: Connected Papers, “a visual tool to help researchers and applied scientists find and explore papers relevant to their field of work”. It analyzes thousands of papers, selects the ones with the strongest connections to an entered paper, and generates a graph. In this graph, the tool arranges papers according to their similarity in terms of co-citation and bibliographic coupling: “According to this measure, two papers that have highly overlapping citations and references are presumed to have a higher chance of treating a related subject matter.” Unlike in a citation tree (e.g., Web of Science), “even papers that do not directly cite each other can be strongly connected and very closely positioned”, which makes the tool a very useful alternative to other search strategies. With its help, I was able to identify very exciting papers that I would certainly not have found with other search engines. Connected Papers is self-funded and free.
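To make the two similarity notions behind the tool concrete, here is a toy sketch in Python. This is not Connected Papers’ actual algorithm, and all paper names and citation data below are invented; it only illustrates what bibliographic coupling (shared references) and co-citation (shared citing papers) measure.

```python
# Toy citation data (invented): which earlier works each paper cites.
references = {
    "paper_a": {"weick_1995", "eisenhardt_1989", "yin_2009"},
    "paper_b": {"weick_1995", "eisenhardt_1989", "gioia_2013"},
    "paper_c": {"porter_1980"},
}

# Which later works cite each paper.
cited_by = {
    "paper_a": {"review_2020", "review_2021"},
    "paper_b": {"review_2020"},
    "paper_c": {"review_2021"},
}

def bibliographic_coupling(a: str, b: str) -> int:
    """Two papers are coupled when their reference lists overlap."""
    return len(references[a] & references[b])

def cocitation(a: str, b: str) -> int:
    """Two papers are co-cited when the same later papers cite both."""
    return len(cited_by[a] & cited_by[b])

# paper_a and paper_b never cite each other, yet they share two
# references and one citing paper, so a tool using these measures
# would position them close together.
print(bibliographic_coupling("paper_a", "paper_b"))  # 2
print(cocitation("paper_a", "paper_b"))              # 1
```

Note how paper_a and paper_c share no references at all, so they would end up far apart in such a graph even though both exist in the same dataset.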
Our discipline is still almost exclusively shaped by positivism. This is very surprising in view of the very complex social phenomena with which the discipline deals. However, recently I have noticed a (slowly) growing trend toward interpretivism. For example, Darby and her coauthors (2019) have discussed the set of questions interpretive research can address in SCM. Many SCM researchers may still be unsure of how best to conduct an interpretive study. Used to the structured approaches of positivist studies (e.g., Yin), we often would like to have a template in hand that shows us how to conduct an interpretive study. A new article by Mees-Buss and her coauthors (2021) argues that the inductive route to theory that templates (e.g., Gioia) offer does not address the challenges of interpretation. They argue that “a return to a hermeneutic orientation opens the way to more plausible and insightful theories based on interpretive rather than procedural rigor” and they offer “a set of heuristics to guide both researchers and reviewers along this path”.
Mees-Buss, J., Welch, C., & Piekkari, R. (2021). From Templates to Heuristics: How and Why to Move Beyond the Gioia Methodology. Organizational Research Methods, in press. https://doi.org/10.1177/1094428120967716
I was thinking about whether guidelines on how to write a screenplay can teach us how to write an academic article. Here are three ideas I got from the following video: First, both a screenplay and an academic article should be based on a clear story. This story should lead to a finish line that the reader can envision. Second, before presenting the character’s flaws and inner conflicts (in academia: the research gap), a good screenplay must have a set-up that presents the character’s everyday life (in academia: what the discipline has thought so far). Third, the article should develop gradually. In other words: Don’t rush the story.
Emerald has recently announced the winners of their 2018 Emerald Literati Network Awards for Excellence. Numerous SCM-related articles have received outstanding paper or highly commended awards, and might thus serve as excellent articles to read during summer. Several winning articles relate to external (Abdallah, Abdullah & Saleh, Fawcett et al. and Ralston, Richey & Grawe) and internal (Guo et al., Makepeace, Tatham & Wu and Roh et al.) supply chain relationships. Other articles are about risk management (Jahre, Min, Park & Ahn and Oliveira & Handfield) and resilience (Ali, Mahfouz & Arisha and Tukamuhabwa, Stevenson & Busby). Other winning articles deal with sustainability (Busse et al., Dubey, Gunasekaran & Papadopoulos and Ghani et al.), complexity (Gerschberger, Manuj & Freinberger and Sayed, Hendry & Bell), the Internet of things (Haddud et al. and Yan), disruptive innovation (Pérez, Dos Santos & Cambra-Fierro) and the human factor in SCM (Schorsch, Wallenburg & Wieland). Finally, McKinnon’s article engages in the journal ranking debate, and our own methodological paper, Wieland et al., provides guidelines for scale purification.
Several journals have already reacted to the p value debate. For example, an ASQ essay provides suggestions that not only editors should read. Another example is the set of policies published by SMJ: SMJ “will no longer accept papers for publication that report or refer to cut-off levels of statistical significance (p-values)”. Instead, “authors should report either standard errors or exact p-values (without asterisks) or both, and should interpret these values appropriately in the text”. “[T]he discussion could report confidence intervals, explain the standard errors and/or the probability of observing the results in the particular sample, and assess the implications for the research questions or hypotheses tested.” SMJ will also require authors to “explicitly discuss and interpret effect sizes of relevant estimated coefficients”. It might well be that we are currently observing the beginning of the end of null-hypothesis statistical tests. And it might only be a matter of time before other journals, including SCM journals, require authors to remove references to statistical significance and statistical hypothesis testing and, ultimately, to remove p values from their manuscripts.
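To illustrate the kind of reporting SMJ asks for, here is a minimal Python sketch that reports an exact effect estimate together with its standard error, a confidence interval, and an effect size, rather than significance stars. The data are invented, and the 95% interval uses a simple normal approximation (z = 1.96) rather than the t distribution, purely to keep the sketch stdlib-only.

```python
import math
import statistics as st

def report_group_difference(a, b):
    """Report an exact estimate with standard error, a 95% confidence
    interval, and Cohen's d, instead of cut-off significance levels.
    Uses a normal approximation (z = 1.96) for simplicity."""
    na, nb = len(a), len(b)
    diff = st.mean(a) - st.mean(b)
    # Pooled standard deviation across both groups.
    pooled_var = ((na - 1) * st.variance(a) +
                  (nb - 1) * st.variance(b)) / (na + nb - 2)
    pooled_sd = math.sqrt(pooled_var)
    se = pooled_sd * math.sqrt(1 / na + 1 / nb)
    return {
        "difference": diff,
        "standard_error": se,
        "ci_95": (diff - 1.96 * se, diff + 1.96 * se),
        "cohens_d": diff / pooled_sd,  # standardized effect size
    }

# Invented example data: an outcome measured in two supplier groups.
treated = [4.1, 3.8, 4.5, 4.0, 4.3, 3.9]
control = [3.2, 3.5, 3.0, 3.6, 3.1, 3.4]
result = report_group_difference(treated, control)
print(f"difference = {result['difference']:.2f}, "
      f"SE = {result['standard_error']:.2f}, "
      f"95% CI = [{result['ci_95'][0]:.2f}, {result['ci_95'][1]:.2f}], "
      f"d = {result['cohens_d']:.2f}")
```

The point is the shape of the output: a reader sees the estimated difference, its uncertainty, and its magnitude, and can judge the substantive implications without any asterisks.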