Revolutionizing the Academic Peer-Review Process
Some time ago, an editorial in Nature Human Behaviour highlighted that “[the] quest for positive results encourages numerous questionable research practices […] such as HARKing (hypothesizing after the results are known) and P-hacking (collecting or selecting data or statistical analyses until non-significant results become significant)”. To counteract these very serious problems, which make theory-testing research almost useless, the journal has adopted the registered report format, which “shift[s] the emphasis from the results of research to the questions that guide the research and the methods used to answer them”. Similarly, the European Journal of Personality has recently announced that it will support the registered report format as well: “In a registered report, authors create a study proposal that includes theoretical and empirical background, research questions/hypotheses, and pilot data (if available). Upon submission, this proposal will then be reviewed prior to data collection, and if accepted, the paper resulting from this peer-reviewed procedure will be published, regardless of the study outcomes.” I can only hope that SCM journals will quickly catch up with this development in other fields.
Report Confidence Intervals and Effect Sizes, not p Values!
Several journals have already reacted to the p value debate. For example, an ASQ essay provides suggestions that deserve to be read not only by editors. Another example is provided by the policies published by SMJ: SMJ “will no longer accept papers for publication that report or refer to cut-off levels of statistical significance (p-values)”. Instead, “authors should report either standard errors or exact p-values (without asterisks) or both, and should interpret these values appropriately in the text”. “[T]he discussion could report confidence intervals, explain the standard errors and/or the probability of observing the results in the particular sample, and assess the implications for the research questions or hypotheses tested.” SMJ will also require authors to “explicitly discuss and interpret effect sizes of relevant estimated coefficients”. It might well be that we are currently observing the beginning of the end of null-hypothesis significance testing. And it might only be a matter of time before other journals, including SCM journals, require authors to remove references to statistical significance and statistical hypothesis testing and, ultimately, to remove p values from their manuscripts.
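To make this concrete, here is a minimal sketch (my own illustration, not taken from the SMJ policies) of what reporting a confidence interval and an effect size for a simple two-group comparison could look like; the groups and all numbers are invented:

```python
# Hedged sketch: report an estimate, its 95% confidence interval, and an
# effect size (Cohen's d) instead of a significance asterisk.
# The two groups and all numbers are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
treatment = rng.normal(loc=5.2, scale=1.0, size=80)  # hypothetical group A
control = rng.normal(loc=4.8, scale=1.0, size=80)    # hypothetical group B

diff = treatment.mean() - control.mean()
se = np.sqrt(treatment.var(ddof=1) / len(treatment) +
             control.var(ddof=1) / len(control))
dof = len(treatment) + len(control) - 2
ci_low, ci_high = diff + np.array([-1, 1]) * stats.t.ppf(0.975, dof) * se

# Cohen's d based on the pooled standard deviation
pooled_sd = np.sqrt(((len(treatment) - 1) * treatment.var(ddof=1) +
                     (len(control) - 1) * control.var(ddof=1)) / dof)
cohens_d = diff / pooled_sd

print(f"Mean difference = {diff:.2f}, "
      f"95% CI [{ci_low:.2f}, {ci_high:.2f}], Cohen's d = {cohens_d:.2f}")
```

The point is not the particular functions used but the reporting style: an estimate with its uncertainty and magnitude, interpreted in the text, rather than a verdict of “significant” or “not significant”.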
Five Ways to Fix Statistics in Supply Chain Research
The p value debate has revealed that hypothesis testing is in crisis – also in our discipline! But what should we do now? Nature recently asked influential statisticians to recommend one change to improve science. Here are five answers: (1) Adjust for human cognition: Data analysis is not purely computational – it is a human behavior. So, we need to guard against cognitive mistakes. (2) Abandon statistical significance: Academia seems to like “statistical significance”, but p value thresholds are too often abused to decide between “effect” (favored hypothesis) and “no effect” (null hypothesis). (3) State false-positive risk, too: What matters is the probability that a significant result turns out to be a false positive (see the sketch below). (4) Share analysis plans and results: Techniques to avoid false positives include pre-registering analysis plans and sharing all data, all analysis results, and any relevant syntax or code. (5) Change norms from within: Funders, journal editors and leading researchers need to act. Otherwise, researchers will continue to re-use outdated methods, and reviewers will demand what has been demanded of them.
Leek, J., McShane, B.B., Gelman, A., Colquhoun, D., Nuijten, M.B. & Goodman, S.N. (2017). Five Ways to Fix Statistics. Nature, 551 (7679), 557-559 https://doi.org/10.1038/d41586-017-07522-z
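Point (3) can be made tangible with a small back-of-the-envelope calculation. The sketch below is my own illustration of the Bayesian logic typically invoked in this debate (it is not code from the article, and the numbers are purely illustrative): the false-positive risk depends on the significance threshold, the statistical power, and the prior probability that the tested effect is real.

```python
# Hedged sketch of the false-positive risk idea behind point (3):
# given a significance threshold (alpha), statistical power, and a prior
# probability that a tested effect is real, how often is a "significant"
# result actually a false positive? All numbers are illustrative.
def false_positive_risk(alpha: float, power: float, prior_real: float) -> float:
    """P(no real effect | significant result), via Bayes' rule."""
    false_pos = alpha * (1.0 - prior_real)  # significant although there is no effect
    true_pos = power * prior_real           # significant and the effect is real
    return false_pos / (false_pos + true_pos)

# With alpha = 0.05, power = 0.80, and only 1 in 10 tested hypotheses true,
# roughly a third of "significant" results would be false positives:
print(round(false_positive_risk(0.05, 0.80, 0.10), 2))  # ~0.36
```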
Deductive, Inductive and Abductive Research in SCM
Like it or not: Our discipline is very much dominated by positivism and the application of the scientific method, which assumes that new knowledge can be created by developing and testing theory or, in other words, by induction or deduction. Another type of inference is abduction. Spens & Kovács (2006) present an overview of the deductive, inductive and abductive research processes.
Spens, K., & Kovács, G. (2006). A Content Analysis of Research Approaches in Logistics Research. International Journal of Physical Distribution & Logistics Management, 36 (5), 374-390 https://doi.org/10.1108/09600030610676259
Supply Chain vs. Supply Chain Competition
Many theory-testing efforts in our field are made by borrowing theories from other fields (e.g., transaction cost economics or resource-based theory), adapting them to a supply chain context and deriving hypotheses that are eventually tested statistically. By doing so, we have achieved a lot! But we also need our own theories. For example, several years ago, Lambert & Cooper (2000) noted: “One of the most significant paradigm shifts of modern business management is that individual businesses no longer compete as solely autonomous entities, but rather as supply chains”. So, part of our theoretical toolkit could be a theory of supply chain vs. supply chain competition, which could explain, for instance, how the supply chains of Apple and Samsung interact. However, surprisingly few attempts have been made towards such a theory. They include a thought piece by Rice & Hoppe (2001) and, more recently, a case study by Antai & Olson (2013). We need to continue this theory-building process.
Rice, J.B. & Hoppe, R.M. (2001). Supply Chain vs. Supply Chain: The Hype & the Reality. Supply Chain Management Review, 5 (5) http://web.mit.edu/supplychain/repository/scvssc.pdf
Antai, I. & Olson, H. (2013). Interaction: A New Focus for Supply Chain vs Supply Chain Competition. International Journal of Physical Distribution & Logistics Management, 43 (7), 511-528 https://doi.org/10.1108/IJPDLM-06-2012-0195
More Complexity = More Disruptions?
Trends in management towards a concentration on core competencies and the outsourcing of non-core activities have created complex networks, i.e., global supply chains. At the same time, it has been argued that this increased complexity has also made companies more vulnerable. An interesting paper, Structural Drivers of Upstream Supply Chain Complexity and the Frequency of Supply Chain Disruptions, co-authored by Bode and Wagner, is currently forthcoming in the Journal of Operations Management. Herein, the authors distinguish between three drivers of upstream supply chain complexity: (1) horizontal complexity (= the number of direct suppliers in a firm’s supply base), (2) vertical complexity (= the number of tiers in the supply chain), and (3) spatial complexity (= the geographical spread of the supply base). Based on survey data, the authors find that all three drivers increase the frequency of supply chain disruptions and, moreover, that the three drivers amplify each other’s effects in a synergistic fashion.
Bode, C., & Wagner, S. (2015). Structural Drivers of Upstream Supply Chain Complexity and the Frequency of Supply Chain Disruptions. Journal of Operations Management, 36, 215-228 https://doi.org/10.1016/j.jom.2014.12.004
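For readers who wonder what “synergistic” amplification looks like in statistical terms, here is a rough sketch with synthetic data (this is not the authors’ dataset, nor necessarily their exact estimator – see the paper for that): in a count regression of disruption frequency on the three complexity drivers, amplification would show up as positive interaction terms.

```python
# Illustrative sketch only: synthetic data and a simple Poisson count
# regression; variable names and effect sizes are made up.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "horizontal": rng.normal(size=n),  # standardized number of direct suppliers
    "vertical": rng.normal(size=n),    # standardized number of tiers
    "spatial": rng.normal(size=n),     # standardized geographic spread
})
rate = np.exp(0.2 * df["horizontal"] + 0.2 * df["vertical"] + 0.2 * df["spatial"]
              + 0.1 * df["horizontal"] * df["vertical"])  # built-in amplification
df["disruptions"] = rng.poisson(rate)

# "horizontal * vertical" expands to both main effects plus their interaction;
# a positive interaction coefficient means the two drivers amplify each other.
result = smf.poisson("disruptions ~ horizontal * vertical + spatial", data=df).fit()
print(result.summary())
```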
A Trail Guide to Publishing Success
Are you currently conducting conceptual, qualitative, or survey research? Are you also aiming to publish the results in a top journal? Then I have some tips for you that could bring you one step closer to your goal. These tips can be found in a recent JBL editorial: A Trail Guide to Publishing Success: Tips on Writing Influential Conceptual, Qualitative, and Survey Research. Herein, the authors identify and describe agreed-upon basics that can help to “(1) increase consistency in the review process, (2) reduce publication cycles, and (3) begin to roll back the length of articles”. For each of the three types of research (conceptual, qualitative, and survey research), the authors present best practices for crafting articles. I especially like a table with warning signs “that authors are wandering down a perilous path”, which can be used as a checklist for your own research. These warning signs might also help reviewers to evaluate the quality of a manuscript.
Fawcett, S., Waller, M., Miller, J., Schwieterman, M., Hazen, B., & Overstreet, R. (2014). A Trail Guide to Publishing Success: Tips on Writing Influential Conceptual, Qualitative, and Survey Research. Journal of Business Logistics, 35 (1), 1-16 https://doi.org/10.1111/jbl.12039
Eight Rules for Formal Conceptual Definitions
Theory-building empirical research needs formal conceptual definitions. In particular, such definitions are a necessary condition for construct validity. But what is a “good” formal conceptual definition? In his seminal JOM paper, A Theory of Formal Conceptual Definitions: Developing Theory-building Measurement Instruments, Wacker (2004) presents eight rules for formal conceptual definitions: (1) “Definitions should be formally defined using primitive and derived terms.” (2) “Each concept should be uniquely defined.” (3) “Definitions should include only unambiguous and clear terms.” (4) “Definitions should have as few as possible terms in the conceptual definition to avoid violating the parsimony virtue of ‘good’ theory.” (5) “Definitions should be consistent within the [general academic] field.” (6) “Definitions should not make any term broader.” (7) “New hypotheses cannot be introduced in the definitions.” (8) “Statistical tests for content validity must be performed after the terms are formally defined.” These rules are explained in detail in Wacker’s article. I am convinced that Wacker’s rules lead to better measurement instruments.
Wacker, J.G. (2004). A Theory of Formal Conceptual Definitions: Developing Theory-building Measurement Instruments. Journal of Operations Management, 22 (6), 629-650 https://doi.org/10.1016/j.jom.2004.08.002
Theory Testing from a Critical Realist Perspective
In a previous post, it was demonstrated that researchers can play two different roles: they can either build or test theories. An SMJ article by Miller and Tsang (2011), titled Testing management theories: Critical realist philosophy and research methods, focuses on the latter role. The authors claim: “Not only do we have a plurality of theories within management research, there is also no consensus about the criteria for evaluating theories.” Taking a critical realist perspective, they advance practical guidance for evaluating management theories by proposing a four-step approach to theory testing. This approach includes (1) identifying the hypothesized mechanisms, (2) testing for the presence of the mechanisms in the empirical setting, (3) testing isolated causal relations, and (4) testing the theoretical system. The authors underline that “steps 2 and 3 have been neglected for the most part”. In sum, a lot can be learnt about theory testing from this brilliant article.
Miller, K., & Tsang, E. (2011). Testing management theories: Critical realist philosophy and research methods. Strategic Management Journal, 32 (2), 139-158 https://doi.org/10.1002/smj.868
Theory Building and Theory Testing
Research revolves around theory. Here, the role of researchers is twofold: Researchers can either start with real-life observations and produce a set of propositions that summarize a new theory (inductive theory building), e.g., using grounded theory research, or start with an existing theory, formulate hypotheses, and use data to test them (deductive theory testing), e.g., using structural equation modeling.
For an extensive investigation of this dual role, see Colquitt and Zapata-Phelan (2007).
Colquitt, J. & Zapata-Phelan, C. (2007). Trends in theory building and theory testing: A five-decade study of the Academy of Management Journal. Academy of Management Journal, 50 (6), 1281-1303 https://doi.org/10.5465/amj.2007.28165855
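As a small illustration of the deductive route, the sketch below specifies and estimates a single hypothesized path in a structural equation model. It assumes the Python package semopy, and the constructs (“Integration”, “Performance”), indicators, and simulated data are invented for the example – it is not taken from any of the articles cited above.

```python
# Minimal sketch of deductive theory testing with a structural equation model.
# Assumes the semopy package; constructs, indicators, and data are illustrative.
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(1)
n = 250
integration = rng.normal(size=n)                                  # latent driver
performance = 0.5 * integration + rng.normal(scale=0.8, size=n)   # latent outcome

data = pd.DataFrame({
    "int1": integration + rng.normal(scale=0.5, size=n),
    "int2": integration + rng.normal(scale=0.5, size=n),
    "int3": integration + rng.normal(scale=0.5, size=n),
    "perf1": performance + rng.normal(scale=0.5, size=n),
    "perf2": performance + rng.normal(scale=0.5, size=n),
    "perf3": performance + rng.normal(scale=0.5, size=n),
})

# H1 (hypothetical): supply chain integration is positively related to performance.
desc = """
Integration =~ int1 + int2 + int3
Performance =~ perf1 + perf2 + perf3
Performance ~ Integration
"""
model = semopy.Model(desc)
model.fit(data)
print(model.inspect())  # estimate for the Performance ~ Integration path
```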