
Deductive, Inductive and Abductive Research in SCM

Like it or not: our discipline is very much dominated by positivism and the application of the scientific method, which assumes that new knowledge is created by developing and testing theory, in other words, by induction and deduction. A third type of inference is abduction. Spens & Kovács (2006) present an overview of the deductive, inductive and abductive research processes.

Spens, K., & Kovács, G. (2006). A Content Analysis of Research Approaches in Logistics Research. International Journal of Physical Distribution & Logistics Management, 36 (5), 374-390 https://doi.org/10.1108/09600030610676259

The p-value Debate Has Reached SCM Research

We should not ignore that researchers – in supply chain management as elsewhere – are not always as well trained in data analysis as they should be. A highly visible debate is currently going on about the widespread misuse of p-values. For example, research has too often been judged to be “good” research simply because a p-value passed a specific threshold – also in the SCM discipline. But a p-value is not an interpretation; it needs interpretation! Some statisticians now even prefer to replace p-values with other approaches, and some journals have decided to ban p-values altogether. Against the background of this debate, the influential American Statistical Association has now issued a Statement on Statistical Significance and p-Values. It contains six principles underlying the proper use and interpretation of the p-value. As a discipline, we should take these principles seriously: in our own research, but also when we review the manuscripts of our colleagues.
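One of the six principles states that a p-value does not measure the size or the importance of an effect. The following sketch (in Python with NumPy and SciPy; the simulated data and numbers are my own illustration, not the ASA’s) makes this concrete: an identical, practically negligible effect can yield almost any p-value, depending only on sample size.

```python
# Minimal illustration: the same tiny effect crosses the 0.05 threshold
# at a large sample size but not at a small one (simulated data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
effect = 0.05  # a practically negligible mean difference

for n in (50, 100_000):
    a = rng.normal(0.0, 1.0, size=n)
    b = rng.normal(effect, 1.0, size=n)
    _, p = stats.ttest_ind(a, b)
    print(f"n = {n:>7}: p = {p:.4f}")
```

Whether the output reads p = 0.80 or p < 0.001, the substantive effect is exactly the same, which is precisely why the p-value needs interpretation.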

Wasserstein, R., & Lazar, N. (2016). The ASA’s Statement on p-values: Context, Process, and Purpose. The American Statistician, 70 (2), 129-133 https://doi.org/10.1080/00031305.2016.1154108

Creating Better Concept Definitions in SCM

I believe we have all experienced this already: the same concept can sometimes be defined in very different ways by different authors. Conceptual clarity would certainly be great, but how can we achieve it? Think, for example, about concepts such as trust, integration or dependence: what do we really mean when we talk about them? In their new article, Recommendations for Creating Better Concept Definitions in the Organizational, Behavioral, and Social Sciences, Podsakoff, MacKenzie & Podsakoff (2016) present four stages for developing good conceptual definitions: researchers need to (1) “identify potential attributes of the concept and/or collect a representative set of definitions”; (2) “organize the potential attributes by theme and identify any necessary and sufficient ones”; (3) “develop a preliminary definition of the concept”; and (4) “[refine] the conceptual definition of the concept”. For each of these stages, the authors provide comprehensive guidelines and examples that can help us as supply chain researchers improve the definitions of the concepts we use.

Podsakoff, P., MacKenzie, S., & Podsakoff, N. (2016). Recommendations for Creating Better Concept Definitions in the Organizational, Behavioral, and Social Sciences. Organizational Research Methods, 19 (2), 159-203 https://doi.org/10.1177/1094428115624965

Multi-Methodological Research in Supply Chain Management

Just like OM research, SCM research is dominated by three research methodologies: (1) analytical modelling research (e.g., optimization, computational, and simulation models), (2) quantitative empirical research (e.g., surveys), and (3) case study research. There has been a recent trend towards multi-methodological research that combines different methodologies. A new article by Choi, Cheng and Zhao, titled Multi-Methodological Research in Operations Management, investigates this trend. The authors “present some multi-methodological approaches germane to the pursuit of rigorous and scientific operations management research” and “discuss the strengths and weaknesses of such multi-methodological approaches”. They make clear that multi-methodological approaches can make our research “more scientifically sound, rigorous, and practically relevant” and “permit us to explore the problem in ‘multiple dimensions’”. However, such research can also be “risky as it requires high investments of effort and time but the final results might turn out to be not fruitful”. Anyway, as the authors conclude: “no pain, no gain”!

Choi, T., Cheng, T., & Zhao, X. (2015). Multi-Methodological Research in Operations Management. Production and Operations Management https://doi.org/10.1111/poms.12534

Discriminant Validity – An Update

The AVE–SV comparison (Fornell & Larcker, 1981) is certainly the most common technique for detecting discriminant validity violations at the construct level. An alternative technique, proposed by Henseler et al. (2015), is the heterotrait–monotrait (HTMT) ratio of correlations. Based on simulation data, these authors show for variance-based structural equation modeling (SEM), e.g. PLS, that AVE–SV does not reliably detect discriminant validity violations, whereas HTMT identifies a lack of discriminant validity effectively. Results of a related study by Voorhees et al. (2016) suggest that both AVE–SV and HTMT are recommended for detecting discriminant validity violations if covariance-based SEM, e.g. AMOS, is used. They show that the HTMT technique with a cutoff value of 0.85 – abbreviated as HTMT.85 – performs best overall. In other words, HTMT should be used in both variance-based and covariance-based SEM, whereas AVE–SV should be used only in covariance-based SEM. One might be tempted to prefer inferential tests over such heuristics; however, the constrained ϕ approach did not perform well in Voorhees et al.’s study.
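For readers who want to see the arithmetic, the HTMT ratio is simply the average correlation between items of different constructs divided by the geometric mean of the average correlations among items of the same construct. Here is a minimal sketch (in Python with NumPy; the function and the toy correlation matrix are my own illustration, not taken from the cited papers):

```python
import itertools
import numpy as np

def htmt(corr, items_a, items_b):
    """Heterotrait-monotrait ratio for two constructs, given an item
    correlation matrix and the item indices of each construct."""
    # heterotrait-heteromethod: correlations across the two constructs
    hetero = np.mean([abs(corr[i, j]) for i in items_a for j in items_b])

    # monotrait-heteromethod: correlations among items of one construct
    def mono(items):
        pairs = itertools.combinations(items, 2)
        return np.mean([abs(corr[i, j]) for i, j in pairs])

    return hetero / np.sqrt(mono(items_a) * mono(items_b))

# toy example: items 0-1 measure construct A, items 2-3 construct B
corr = np.array([
    [1.0, 0.7, 0.5, 0.5],
    [0.7, 1.0, 0.5, 0.5],
    [0.5, 0.5, 1.0, 0.7],
    [0.5, 0.5, 0.7, 1.0],
])
print(htmt(corr, [0, 1], [2, 3]))  # ~0.71
```

In this toy example the ratio is about 0.71, below the 0.85 cutoff, so HTMT.85 would not flag a discriminant validity problem between the two constructs.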

Fornell, C., & Larcker, D. (1981). Evaluating Structural Equation Models with Unobservable Variables and Measurement Error. Journal of Marketing Research, 18 (1), 39-50 https://doi.org/10.2307/3151312

Henseler, J., Ringle, C., & Sarstedt, M. (2015). A New Criterion for Assessing Discriminant Validity in Variance-based Structural Equation Modeling. Journal of the Academy of Marketing Science, 43 (1), 115-135 https://doi.org/10.1007/s11747-014-0403-8

Voorhees, C., Brady, M., Calantone, R., & Ramirez, E. (2016). Discriminant Validity Testing in Marketing: An Analysis, Causes for Concern, and Proposed Remedies. Journal of the Academy of Marketing Science, 44 (1), 119-134 https://doi.org/10.1007/s11747-015-0455-4

Redefining Some Methodological Criteria for Empirical Research

In their new editorial, the editors of the Journal of Operations Management highlight five important issues, “many of which continue to be reasons for rejections in the manuscript review process”. First, “it is time to take causality seriously”. In particular, authors have to take steps toward correcting for endogeneity or demonstrating exogeneity. Second, “know which rules are worth following”. For example, the yes–no rule that a measure is reliable if Cronbach’s α exceeds 0.7 is no longer recommended. Third, “always understand the tools you use”. Here, authors of PLS-based manuscripts routinely fail to discuss the weaknesses of the estimator. Fourth, “be cautious with claims about common method bias”. In particular, ex-post techniques (e.g., Harman, 1967) do not have much practical value (see, however, my post about the CFA marker technique). Finally, “stay current on methodological developments”. For example, Baron & Kenny’s (1986) mediation approach is still widely used, although updated approaches have since been published. It seems that the JOM editors no longer send manuscripts that ignore these issues out for review.
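As an aside on the second point: part of the stickiness of the 0.7 rule may be that α is so easy to compute. A minimal sketch (in Python with NumPy; the data matrix is made up for illustration):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a respondents x items data matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# five respondents rating three items on a 5-point scale (made-up data)
data = [[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2], [4, 4, 5]]
print(round(cronbach_alpha(data), 2))  # ~0.92
```

The editorial’s point is not that α is useless, but that reading it as a dichotomous pass/fail test at 0.7 is; reliability evidence, like any other statistic, needs interpretation.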

Guide, V., & Ketokivi, M. (2015). Notes from the Editors: Redefining Some Methodological Criteria for the Journal. Journal of Operations Management, 37, v-viii https://doi.org/10.1016/S0272-6963(15)00056-X

A Trail Guide to Publishing Success

Are you currently conducting conceptual, qualitative, or survey research? Are you also aiming to publish the results in a top journal? Then I have some tips for you that could bring you one step closer to your goal. These tips can be found in a recent JBL editorial: A Trail Guide to Publishing Success: Tips on Writing Influential Conceptual, Qualitative, and Survey Research. In it, the authors identify and describe agreed-upon basics that can help to “(1) increase consistency in the review process, (2) reduce publication cycles, and (3) begin to roll back the length of articles”. For each of the three types of research (conceptual, qualitative, and survey research), they present best practices for crafting articles. I especially like a table with warning signs “that authors are wandering down a perilous path”, which can be used as a checklist for your own research. These warning signs might also help reviewers to evaluate the quality of a manuscript.

Fawcett, S., Waller, M., Miller, J., Schwieterman, M., Hazen, B., & Overstreet, R. (2014). A Trail Guide to Publishing Success: Tips on Writing Influential Conceptual, Qualitative, and Survey Research. Journal of Business Logistics, 35 (1), 1-16 https://doi.org/10.1111/jbl.12039

Eight Rules for Formal Conceptual Definitions

Theory-building empirical research needs formal conceptual definitions. In particular, such definitions are necessary conditions for construct validity. But what makes a “good” formal conceptual definition? In his seminal JOM paper, A Theory of Formal Conceptual Definitions: Developing Theory-building Measurement Instruments, Wacker (2004) presents eight rules for formal conceptual definitions: (1) “Definitions should be formally defined using primitive and derived terms.” (2) “Each concept should be uniquely defined.” (3) “Definitions should include only unambiguous and clear terms.” (4) “Definitions should have as few as possible terms in the conceptual definition to avoid violating the parsimony virtue of ‘good’ theory.” (5) “Definitions should be consistent within the [general academic] field.” (6) “Definitions should not make any term broader.” (7) “New hypotheses cannot be introduced in the definitions.” (8) “Statistical tests for content validity must be performed after the terms are formally defined.” These rules are explained in detail in the article, and I am convinced that following them leads to better measurement instruments.

Wacker, J.G. (2004). A Theory of Formal Conceptual Definitions: Developing Theory-building Measurement Instruments. Journal of Operations Management, 22 (6), 629-650 https://doi.org/10.1016/j.jom.2004.08.002

Multi-item Scales Utilized in Logistics Research

A few months ago, I presented the Handbook of Management Scales, an online collection of previously used multi-item measurement scales (see post). Quite similarly, the Journal of Business Logistics has now published a compendium of multi-item scales utilized in logistics research – a good complement to my collection. The authors, Keller et al. (2013), found that no fewer than 980 scales were used in four logistics-related journals (IJLM, IJPDLM, JBL, TJ) between 2001 and 2010. It is to the authors’ credit that they identify and document these scales in an electronic appendix, which contains “a categorical listing of multi-item scales and the available information concerning the scales’ validity and reliability”. The appendix is available as a Word document; one can only guess how tedious it was to prepare the compendium. In addition, the authors offer a comparison of scale categories, a comparison with previous results, and a comparison between JBL and the Journal of Marketing.

Keller, S.B., Hochard, K., Rudolph, T., & Boden, M. (2013). A Compendium of Multi-Item Scales Utilized in Logistics Research (2001–10): Progress Achieved Since Publication of the 1973–2000 Compendium. Journal of Business Logistics, 34 (2) https://doi.org/10.1111/jbl.12011

Laboratory Experiments in Supply Chain Research

Nobel laureates rarely publish articles in journals within the supply chain arena. Vernon L. Smith was awarded the 2002 Nobel Memorial Prize in Economic Sciences “for having established laboratory experiments as a tool in empirical economic analysis, especially in the study of alternative market mechanisms”. In its recent issue, the Journal of Business Logistics had the great privilege of publishing an article by Cary Deck and him in which these ideas are applied to our field: Using Laboratory Experiments in Logistics and Supply Chain Research. This article is certainly a must-read for SCM researchers, particularly as the use of laboratory experiments has been attracting attention in our field. In their article, Deck and Smith highlight “several examples where experiments have been used to study issues relevant to logistics and supply chain management” and identify “several additional areas where laboratory experiments could be informative”.

Deck, C., & Smith, V. (2013). Using Laboratory Experiments in Logistics and Supply Chain Research. Journal of Business Logistics, 34 (1), 6-14 https://doi.org/10.1111/jbl.12006