Just like OM research, SCM research is dominated by three research methodologies: (1) analytical modelling research (e.g., optimization, computational, and simulation models), (2) quantitative empirical research (e.g., surveys), and (3) case study research. There has been a recent trend towards multi-methodological research that combines different methodologies. A new article by Choi, Cheng and Zhao, titled Multi-Methodological Research in Operations Management, investigates this trend. The authors “present some multi-methodological approaches germane to the pursuit of rigorous and scientific operations management research” and “discuss the strengths and weaknesses of such multi-methodological approaches”. The authors make clear that multi-methodological approaches can make our research “more scientifically sound, rigorous, and practically relevant” and “permit us to explore the problem in ‘multiple dimensions’”. However, such research can also be “risky as it requires high investments of effort and time but the final results might turn out to be not fruitful”. Still, as the authors conclude: “no pain, no gain”!
Choi, T., Cheng, T., & Zhao, X. (2015). Multi-Methodological Research in Operations Management. Production and Operations Management. https://doi.org/10.1111/poms.12534
The AVE–SV comparison (Fornell & Larcker, 1981) is certainly the most common technique for detecting discriminant validity violations on the construct level. An alternative technique, proposed by Henseler et al. (2015), is the heterotrait–monotrait (HTMT) ratio of correlations (see the video below). Based on simulation data, these authors show for variance-based structural equation modeling (SEM), e.g., PLS, that AVE–SV does not reliably detect discriminant validity violations, whereas HTMT identifies a lack of discriminant validity effectively. Results of a related study conducted by Voorhees et al. (2016) suggest that both AVE–SV and HTMT are recommended for detecting discriminant validity violations if covariance-based SEM, e.g., AMOS, is used. They show that the HTMT technique with a cutoff value of 0.85 – abbreviated as HTMT.85 – performs best overall. In other words, HTMT should be used in both variance-based and covariance-based SEM, whereas AVE–SV should be used only in covariance-based SEM. One might be tempted to prefer inferential tests over such heuristics. However, the constrained ϕ approach did not perform well in Voorhees et al.’s study.
Fornell, C., & Larcker, D. (1981). Evaluating Structural Equation Models with Unobservable Variables and Measurement Error. Journal of Marketing Research, 18 (1) https://doi.org/10.2307/3151312
Henseler, J., Ringle, C., & Sarstedt, M. (2015). A New Criterion for Assessing Discriminant Validity in Variance-based Structural Equation Modeling. Journal of the Academy of Marketing Science, 43 (1), 115-135 https://doi.org/10.1007/s11747-014-0403-8
Voorhees, C., Brady, M., Calantone, R., & Ramirez, E. (2016). Discriminant Validity Testing in Marketing: An Analysis, Causes for Concern, and Proposed Remedies. Journal of the Academy of Marketing Science, 44 (1), 119-134 https://doi.org/10.1007/s11747-015-0455-4
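To make the HTMT computation concrete, here is a minimal sketch in Python. The function name and the synthetic correlation matrix are my own illustration, not taken from the cited papers: the ratio divides the average heterotrait–heteromethod correlation (correlations between items of different constructs) by the geometric mean of the average monotrait–heteromethod correlations (correlations among items of the same construct); HTMT.85 flags a discriminant validity problem when the ratio exceeds 0.85.

```python
import numpy as np

def htmt(R, items_a, items_b):
    """HTMT ratio for two constructs, given an item correlation
    matrix R and the item indices of each construct."""
    # Mean heterotrait-heteromethod correlation (across constructs)
    hetero = np.mean([abs(R[i, j]) for i in items_a for j in items_b])
    # Mean monotrait-heteromethod correlation (within one construct)
    def mono(items):
        return np.mean([R[i, j] for k, i in enumerate(items)
                        for j in items[k + 1:]])
    return hetero / np.sqrt(mono(items_a) * mono(items_b))

# Illustrative item correlation matrix: items 0-1 measure construct A,
# items 2-3 measure construct B.
R = np.array([[1.0, 0.8, 0.3, 0.3],
              [0.8, 1.0, 0.3, 0.3],
              [0.3, 0.3, 1.0, 0.8],
              [0.3, 0.3, 0.8, 1.0]])

ratio = htmt(R, [0, 1], [2, 3])
print(ratio)           # 0.375
print(ratio < 0.85)    # HTMT.85: no violation indicated here
```

In this toy example the cross-construct correlations (0.3) are clearly lower than the within-construct correlations (0.8), so the ratio stays well below the 0.85 cutoff.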
In their new editorial, the editors of the Journal of Operations Management highlight five important issues, “many of which continue to be reasons for rejections in the manuscript review process”. First, “it is time to take causality seriously”. Particularly, authors have to take steps toward correcting for endogeneity or demonstrating exogeneity. Second, “know which rules are worth following”. For example, the yes–no rule that a measure is reliable if Cronbach’s α exceeds 0.7 is no longer recommended. Third, “always understand the tools you use”. Here, authors of PLS-based manuscripts routinely fail to discuss the weaknesses of the estimator. Fourth, “be cautious with claims about common method bias”. Particularly, ex-post techniques (e.g., Harman, 1967) do not have much practical value (see, however, my post about the CFA marker technique). Finally, “stay current on methodological developments”. For example, Baron & Kenny (1986) are widely used, although updated approaches have been published. It seems that the JOM editors no longer send manuscripts that ignore these issues out for review.
Guide, V., & Ketokivi, M. (2015). Notes from the Editors: Redefining Some Methodological Criteria for the Journal. Journal of Operations Management, 37 https://doi.org/10.1016/S0272-6963(15)00056-X
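The Cronbach’s α mentioned above is easy to compute, and seeing the formula helps explain why the editors discourage a mechanical 0.7 cutoff. The sketch below is my own illustration, not from the editorial: α = k/(k−1) · (1 − Σ item variances / variance of the sum score), which depends on the number of items k as much as on the inter-item correlations.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of the sum score
    return k / (k - 1) * (1 - item_vars / total_var)

# Two perfectly correlated items give the maximum alpha of 1.0
alpha = cronbach_alpha([[1, 2], [2, 3], [3, 4]])
print(alpha)  # 1.0
```

Because α rises with k even when inter-item correlations are modest, a long scale can clear 0.7 while measuring its construct poorly, which is exactly why the yes–no rule is problematic.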
Are you currently conducting conceptual, qualitative, or survey research? Are you also aiming to publish the results in a top journal? Then I have some tips for you that could bring you one step closer to your goal. These tips can be found in a recent JBL editorial: A Trail Guide to Publishing Success: Tips on Writing Influential Conceptual, Qualitative, and Survey Research. Herein, the authors identify and describe agreed-upon basics that can help to “(1) increase consistency in the review process, (2) reduce publication cycles, and (3) begin to roll back the length of articles”. For three types of research (conceptual, qualitative, and survey research), best practices are presented for crafting articles. I especially like a table with warning signs “that authors are wandering down a perilous path”, which can be used as a checklist for your own research. These warning signs might also help reviewers to evaluate the quality of a manuscript.
Fawcett, S., Waller, M., Miller, J., Schwieterman, M., Hazen, B., & Overstreet, R. (2014). A Trail Guide to Publishing Success: Tips on Writing Influential Conceptual, Qualitative, and Survey Research. Journal of Business Logistics, 35 (1), 1-16 https://doi.org/10.1111/jbl.12039
Theory-building empirical research needs formal conceptual definitions. Particularly, such definitions are necessary conditions for construct validity. But what is a “good” formal conceptual definition? In his seminal JOM paper, A Theory of Formal Conceptual Definitions: Developing Theory-building Measurement Instruments, Wacker (2004) presents eight rules for formal conceptual definitions: (1) “Definitions should be formally defined using primitive and derived terms.” (2) “Each concept should be uniquely defined.” (3) “Definitions should include only unambiguous and clear terms.” (4) “Definitions should have as few as possible terms in the conceptual definition to avoid violating the parsimony virtue of ‘good’ theory.” (5) “Definitions should be consistent within the [general academic] field.” (6) “Definitions should not make any term broader.” (7) “New hypotheses cannot be introduced in the definitions.” (8) “Statistical tests for content validity must be performed after the terms are formally defined.” These rules are explained in detail in Wacker’s article. I am convinced that Wacker’s rules lead to better measurement instruments.
Wacker, J.G. (2004). A Theory of Formal Conceptual Definitions: Developing Theory-building Measurement Instruments. Journal of Operations Management, 22 (6), 629-650 https://doi.org/10.1016/j.jom.2004.08.002
A few months ago, I presented the Handbook of Management Scales, an online collection of previously used multi-item measurement scales (see post). Quite similarly, the Journal of Business Logistics has now published a compendium of multi-item scales utilized in logistics research – a good complement to my collection. The authors, Keller et al. (2013), found that no fewer than 980 scales were used in four journals related to logistics (IJLM, IJPDLM, JBL, TJ) between 2001 and 2010. It is the merit of the authors to identify and document these scales in an electronic Appendix, which contains “a categorical listing of multi-item scales and the available information concerning the scales’ validity and reliability”. The Appendix is available as a Word document. One can only guess how tedious it was to prepare the compendium. In addition, the authors offer a comparison of scale categories, a comparison with previous results, and a comparison between JBL and the Journal of Marketing.
Keller, S.B., Hochard, K., Rudolph, T., & Boden, M. (2013). A Compendium of Multi-Item Scales Utilized in Logistics Research (2001–10): Progress Achieved Since Publication of the 1973–2000 Compendium. Journal of Business Logistics, 34 (2). https://doi.org/10.1111/jbl.12011
Nobel laureates rarely publish articles in journals within the supply chain arena. Vernon L. Smith was awarded the 2002 Nobel Memorial Prize in Economic Sciences “for having established laboratory experiments as a tool in empirical economic analysis, especially in the study of alternative market mechanisms”. In its recent issue, the Journal of Business Logistics had the great privilege to publish an article he co-authored with Cary Deck, in which these ideas were applied to our field: Using Laboratory Experiments in Logistics and Supply Chain Research. This article is certainly a must-read for SCM researchers, particularly as the use of laboratory experiments has attracted growing attention in our field. In their article, Deck and Smith highlight “several examples where experiments have been used to study issues relevant to logistics and supply chain management” and identify “several additional areas where laboratory experiments could be informative”.
Deck, C., & Smith, V. (2013). Using Laboratory Experiments in Logistics and Supply Chain Research. Journal of Business Logistics, 34 (1), 6-14. https://doi.org/10.1111/jbl.12006
Good research stands out through relevant research questions that are answered by applying rigorous research methods. Researchers routinely signal methodological rigor in a detailed methods section, and reviewers take great care to check whether all methodological steps are applied properly. Besides rigor, it is necessary to ensure that research is relevant in terms of both theoretical and practical contribution. The former is usually reflected in a comprehensive theory section. The latter, however, is often simply skipped. Reviewers are rarely practitioners! It has, indeed, been demonstrated that practitioners often disagree that operations management research papers are useful to their practice (de-Margerie and Jiang, 2011). Salvador (2011) suggests “to get in contact with practitioners and to try to understand how they react to the central theoretical ideas proposed”. Moreover, I believe that reviewers should routinely ask for at least one paragraph on how the researcher has ensured practical relevance, e.g., by involving a practitioner panel.
de-Margerie, V., & Jiang, B. (2011). How Relevant is OM Research to Managerial Practice? An Empirical Study of Top Executives’ Perceptions. International Journal of Operations & Production Management, 31 (2), 124-147. https://doi.org/10.1108/01443571111104737
Salvador, F. (2011). On the Importance of Good Questions and Empirically Grounded Theorizing. Journal of Supply Chain Management, 47 (4), 21-22. https://doi.org/10.1111/j.1745-493X.2011.03248.x
In a previous post, it was demonstrated that researchers can play two different roles, as they can either build or test theories. An SMJ article by Miller and Tsang (2011), titled Testing management theories: Critical realist philosophy and research methods, focuses on the latter role. The authors claim: “Not only do we have a plurality of theories within management research, there is also no consensus about the criteria for evaluating theories.” Taking a critical realist perspective, they advance practical guidance for evaluating management theories by proposing a four-step approach to theory testing. This approach includes (1) identifying the hypothesized mechanisms, (2) testing for the presence of the mechanisms in the empirical setting, (3) testing isolated causal relations, and (4) testing the theoretical system. The authors underline that “steps 2 and 3 have been neglected for the most part”. In sum, a lot can be learnt about theory testing from this brilliant article.
Miller, K., & Tsang, E. (2011). Testing Management Theories: Critical Realist Philosophy and Research Methods. Strategic Management Journal, 32 (2), 139-158. https://doi.org/10.1002/smj.868
From time to time, I present insightful methodological articles on this blog. Today’s post is dedicated to an article by Edwards (2011): The Fallacy of Formative Measurement (ORM, Vol. 14, No. 2). The article critically compares reflective and formative measurement, i.e., two alternative directions of the relationship between constructs and measures in empirical research. Reflective measurement treats “constructs as causes of measures, such that measures are reflective manifestations of underlying constructs”, whereas formative measurement specifies “measures as causes of constructs, such that measures form or induce an underlying latent variable”. The article “compares reflective and formative measurement on the basis of dimensionality, internal consistency, identification, measurement error, construct validity, and causality”. Edwards takes a negative stance towards formative measurement. Particularly, he argues that “formative measurement is not a viable alternative to reflective measurement”. Edwards’s article was among the best paper winners of Organizational Research Methods in 2011.
Edwards, J.R. (2011). The Fallacy of Formative Measurement. Organizational Research Methods, 14 (2), 370-388. https://doi.org/10.1177/1094428110378369