Tag Archive | Methodology

Redefining Some Methodological Criteria for Empirical Research

In their new editorial, the editors of the Journal of Operations Management highlight five important issues, “many of which continue to be reasons for rejections in the manuscript review process”. First, “it is time to take causality seriously”. In particular, authors have to take steps toward correcting for endogeneity or demonstrating exogeneity. Second, “know which rules are worth following”. For example, the yes–no rule that a measure is reliable if Cronbach’s α exceeds 0.7 is no longer recommended. Third, “always understand the tools you use”. Here, authors of PLS-based manuscripts routinely fail to discuss the weaknesses of the estimator. Fourth, “be cautious with claims about common method bias”. In particular, ex-post techniques (e.g., Harman, 1967) do not have much practical value (see, however, my post about the CFA marker technique). Finally, “stay current on methodological developments”. For example, the mediation approach of Baron and Kenny (1986) is still widely used, although updated approaches have since been published. It seems that the JOM editors no longer send manuscripts that ignore these issues out for review.
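For readers unfamiliar with the 0.7 rule: for a k-item scale, Cronbach’s α is k/(k−1) · (1 − sum of the item variances / variance of the summed scale). The following is my own minimal sketch of the computation, not something taken from the editorial; the function name and the data are hypothetical.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a multi-item scale.

    items: a list of k lists, one per scale item, each holding the
    responses of the same n respondents.
    """
    k = len(items)
    n = len(items[0])

    def variance(xs):
        # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # total score per respondent across all items
    totals = [sum(item[i] for item in items) for i in range(n)]

    item_var_sum = sum(variance(item) for item in items)
    return k / (k - 1) * (1 - item_var_sum / variance(totals))
```

One well-known caveat: α tends to increase mechanically with the number of items, which is one reason why treating 0.7 as a hard yes–no threshold can mislead.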

Guide, V., & Ketokivi, M. (2015). Notes from the Editors: Redefining Some Methodological Criteria for the Journal. Journal of Operations Management, 37. https://doi.org/10.1016/S0272-6963(15)00056-X

A Trail Guide to Publishing Success

Are you currently conducting conceptual, qualitative, or survey research? Are you also aiming to publish the results in a top journal? Then I have some tips for you that could bring you one step closer to your goal. These tips can be found in a recent JBL editorial: A Trail Guide to Publishing Success: Tips on Writing Influential Conceptual, Qualitative, and Survey Research. In it, the authors identify and describe agreed-upon basics that can help to “(1) increase consistency in the review process, (2) reduce publication cycles, and (3) begin to roll back the length of articles”. For each of the three types of research (conceptual, qualitative, and survey), best practices for crafting articles are presented. I especially like a table of warning signs “that authors are wandering down a perilous path”, which can be used as a checklist for your own research. These warning signs might also help reviewers to evaluate the quality of a manuscript.

Fawcett, S., Waller, M., Miller, J., Schwieterman, M., Hazen, B., & Overstreet, R. (2014). A Trail Guide to Publishing Success: Tips on Writing Influential Conceptual, Qualitative, and Survey Research. Journal of Business Logistics, 35 (1), 1-16. https://doi.org/10.1111/jbl.12039

Eight Rules for Formal Conceptual Definitions

Theory-building empirical research needs formal conceptual definitions. In particular, such definitions are a necessary condition for construct validity. But what makes a “good” formal conceptual definition? In his seminal JOM paper, A Theory of Formal Conceptual Definitions: Developing Theory-building Measurement Instruments, Wacker (2004) presents eight rules for formal conceptual definitions: (1) “Definitions should be formally defined using primitive and derived terms.” (2) “Each concept should be uniquely defined.” (3) “Definitions should include only unambiguous and clear terms.” (4) “Definitions should have as few as possible terms in the conceptual definition to avoid violating the parsimony virtue of ‘good’ theory.” (5) “Definitions should be consistent within the [general academic] field.” (6) “Definitions should not make any term broader.” (7) “New hypotheses cannot be introduced in the definitions.” (8) “Statistical tests for content validity must be performed after the terms are formally defined.” These rules are explained in detail in Wacker’s article. I am convinced that following them leads to better measurement instruments.

Wacker, J.G. (2004). A Theory of Formal Conceptual Definitions: Developing Theory-building Measurement Instruments. Journal of Operations Management, 22 (6), 629-650. https://doi.org/10.1016/j.jom.2004.08.002

Multi-item Scales Utilized in Logistics Research

A few months ago, I presented the Handbook of Management Scales, an online collection of previously used multi-item measurement scales (see post). Quite similarly, the Journal of Business Logistics has now published a compendium of multi-item scales utilized in logistics research – a good complement to my collection. The authors, Keller et al. (2013), found that no fewer than 980 scales were used in four logistics-related journals (IJLM, IJPDLM, JBL, TJ) between 2001 and 2010. It is to the authors’ credit that they identified and documented these scales in an electronic Appendix, which contains “a categorical listing of multi-item scales and the available information concerning the scales’ validity and reliability”. The Appendix is available as a Word document. One can only guess how tedious it was to prepare the compendium. In addition, the authors offer a comparison of scale categories, a comparison with previous results, and a comparison between JBL and the Journal of Marketing.

Keller, S.B., Hochard, K., Rudolph, T., & Boden, M. (2013). A Compendium of Multi-Item Scales Utilized in Logistics Research (2001–10): Progress Achieved Since Publication of the 1973–2000 Compendium. Journal of Business Logistics, 34 (2). https://doi.org/10.1111/jbl.12011

Laboratory Experiments in Supply Chain Research

Nobel laureates rarely publish articles in journals within the supply chain arena. Vernon L. Smith was awarded the 2002 Nobel Memorial Prize in Economic Sciences “for having established laboratory experiments as a tool in empirical economic analysis, especially in the study of alternative market mechanisms”. In its recent issue, the Journal of Business Logistics had the great privilege of publishing an article he co-authored with Cary Deck, in which these ideas are applied to our field: Using Laboratory Experiments in Logistics and Supply Chain Research. This article is a must-read for SCM researchers, particularly as the use of laboratory experiments has attracted some attention in our field. In their article, Deck and Smith highlight “several examples where experiments have been used to study issues relevant to logistics and supply chain management” and identify “several additional areas where laboratory experiments could be informative”.

Deck, C., & Smith, V. (2013). Using Laboratory Experiments in Logistics and Supply Chain Research. Journal of Business Logistics, 34 (1), 6-14. https://doi.org/10.1111/jbl.12006

Rigor and Relevance in SCM Research

Good research stands out through relevant research questions that are answered by applying rigorous research methods. Researchers routinely signal methodological rigor in a detailed methods section, and reviewers take great care to check whether all methodological steps have been applied properly. Besides rigor, it is necessary to ensure that research is relevant in terms of both its theoretical and its practical contribution. The former is usually reflected in a comprehensive theory section. The latter, however, is often simply skipped. Reviewers are rarely practitioners! Indeed, it has been demonstrated that practitioners often disagree that operations management research papers are useful to their practice (de-Margerie and Jiang, 2011). Salvador (2011) suggests “to get in contact with practitioners and to try to understand how they react to the central theoretical ideas proposed”. Moreover, I believe that reviewers should routinely ask for at least one paragraph on how the researcher has ensured practical relevance, e.g. by involving a practitioner panel.

de-Margerie, V., & Jiang, B. (2011). How relevant is OM research to managerial practice? An empirical study of top executives’ perceptions. International Journal of Operations & Production Management, 31 (2), 124-147. https://doi.org/10.1108/01443571111104737

Salvador, F. (2011). On the importance of good questions and empirically grounded theorizing. Journal of Supply Chain Management, 47 (4), 21-22. https://doi.org/10.1111/j.1745-493X.2011.03248.x

Theory Testing from a Critical Realist Perspective

In a previous post, it was demonstrated that researchers can play two different roles, as they can either build or test theories. An SMJ article by Miller and Tsang (2011), titled Testing management theories: Critical realist philosophy and research methods, focuses on the latter role. The authors claim: “Not only do we have a plurality of theories within management research, there is also no consensus about the criteria for evaluating theories.” Taking a critical realist perspective, they offer practical guidance for evaluating management theories by proposing a four-step approach to theory testing: (1) identifying the hypothesized mechanisms, (2) testing for the presence of these mechanisms in the empirical setting, (3) testing isolated causal relations, and (4) testing the theoretical system. The authors underline that “steps 2 and 3 have been neglected for the most part”. In sum, a lot can be learnt about theory testing from this brilliant article.

Miller, K., & Tsang, E. (2011). Testing management theories: Critical realist philosophy and research methods. Strategic Management Journal, 32 (2), 139-158. https://doi.org/10.1002/smj.868

Reflective and Formative Measurement Models

From time to time, I present insightful methodological articles on this blog. Today’s post is dedicated to an article by Edwards (2011): The fallacy of formative measurement (ORM, Vol. 14, No. 2). The article critically compares reflective and formative measurement, i.e., the two alternative directions of the relationship between constructs and measures in empirical research. Reflective measurement treats “constructs as causes of measures, such that measures are reflective manifestations of underlying constructs”, whereas formative measurement specifies “measures as causes of constructs, such that measures form or induce an underlying latent variable”. The article “compares reflective and formative measurement on the basis of dimensionality, internal consistency, identification, measurement error, construct validity, and causality”. Edwards takes a clearly negative stance towards formative measurement; in particular, he argues that “formative measurement is not a viable alternative to reflective measurement”. Edwards’s article was among the best paper winners of Organizational Research Methods in 2011.

Edwards, J.R. (2011). The Fallacy of Formative Measurement. Organizational Research Methods, 14 (2), 370-388. https://doi.org/10.1177/1094428110378369

Conducting and Presenting Grounded Theory Research

Qualitative research can be conducted to build theory from field data. The Discovery of Grounded Theory by Glaser and Strauss (1967) remains the fundamental handbook of this approach. SCM journals have recently seen a series of articles advocating the use of grounded theory, e.g., Mello and Flint (2009, JBL) and Kaufmann and Denk (2011, JSCM). Therefore, I would like to draw attention to three helpful papers. First, Suddaby (2006) offers “a reasonable assessment of common errors researchers make in conducting and presenting grounded theory research”; in it, he discusses six common misconceptions, clarifying what grounded theory is not. Second, O’Reilly et al. (2012) “demystify the key tenets of [grounded theory]”, “discuss the problematic impacts of adopting an a la carte approach to [grounded theory]”, “draw attention to [grounded theory] as a rigorous method”, and, again, “advocate for the increased use of [grounded theory]”. Third, Manuj and Pohlen (2012) “provide a framework to assist reviewers in evaluating grounded theory research”.

Social Science Research: Principles, Methods, and Practices

I like open access. The USF Tampa Library hosts a collection of open access textbooks. One of them, which might be useful for many SCM researchers, has been published by Anol Bhattacherjee, a professor of information systems. It is titled Social Science Research: Principles, Methods, and Practices. The book, which is succinct and compact, covers the entire research process and is designed “to introduce doctoral and graduate students to the process of scientific research”. Chapters 1 to 4 give an introduction to research, covering topics such as “thinking like a researcher” and “theories in scientific research”. Chapters 5 to 8 are about the basics of empirical research (i.e., research design, construct measurement, scale reliability/validity, and sampling). Chapters 9 to 12 are concerned with data collection (i.e., survey research, experimental research, case research, and interpretive research). Both qualitative and quantitative data analysis are explained in chapters 13 to 15. The last chapter is about research ethics.