Five Ways to Fix Statistics in Supply Chain Research

The P value debate has revealed that hypothesis testing is in crisis – and our discipline is no exception! But what should we do now? Nature recently asked influential statisticians to recommend one change to improve science. Here are five answers:

(1) Adjust for human cognition: Data analysis is not purely computational – it is a human behavior. We therefore need to guard against our own cognitive mistakes.

(2) Abandon statistical significance: Academia seems to like “statistical significance”, but P value thresholds are too often abused to decide between “effect” (favored hypothesis) and “no effect” (null hypothesis).

(3) State the false-positive risk, too: What matters is the probability that a significant result turns out to be a false positive; a short numerical sketch follows below.

(4) Share analysis plans and results: Techniques to avoid false positives include pre-registering analysis plans and sharing all data, all analysis results, and any relevant syntax or code.

(5) Change norms from within: Funders, journal editors and leading researchers need to act. Otherwise, researchers will keep using outdated methods, and reviewers will keep demanding what has been demanded of them.
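To make point (3) concrete, here is a minimal sketch of the false-positive-risk idea; it is my own illustration rather than code from the Nature commentary, and the significance threshold, power, and prior probability of a true effect are all illustrative assumptions.

    # Minimal sketch of the false-positive-risk idea behind point (3).
    # The threshold, power, and prior below are illustrative assumptions,
    # not values taken from the Nature commentary.

    def false_positive_risk(alpha=0.05, power=0.8, prior=0.1):
        """Probability that a result declared 'significant' (p <= alpha)
        is actually a false positive, given the test's power and the
        prior probability that a tested effect is real."""
        true_positives = power * prior          # real effects that get detected
        false_positives = alpha * (1 - prior)   # null effects wrongly flagged
        return false_positives / (false_positives + true_positives)

    if __name__ == "__main__":
        # If only 1 in 10 tested hypotheses is true, then even with 80% power
        # roughly a third of all 'p < 0.05' findings are false positives.
        print(f"False-positive risk: {false_positive_risk():.0%}")  # ~36%

Run as written, the example prints a risk of about 36%, which is the kind of number the authors suggest reporting alongside the P value instead of a bare “significant/not significant” verdict.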

Leek, J., McShane, B.B., Gelman, A., Colquhoun, D., Nuijten, M.B., & Goodman, S.N. (2017). Five ways to fix statistics. Nature, 551, 557–559. DOI: 10.1038/d41586-017-07522-z


About Andreas Wieland

Andreas Wieland is an Associate Professor of Supply Chain Management at Copenhagen Business School. His current research interests include resilient and socially responsible supply chains.

2 responses to “Five Ways to Fix Statistics in Supply Chain Research”

  1. Dr. Roberto A. Martins says:

    In that discussion, we should consider both the debate on practical versus statistical significance and the question of how capable we are of precisely measuring SCM constructs, e.g. integration.
