Can We Trust Journal Rankings to Assess Article Quality?

Michael J. Cuellar, Duane P. Truex, Hirotoshi Takeda

Research output: Contribution to journal › Article › peer-review

Abstract

This study examines the effectiveness of using publication in ranked journals to evaluate the quality of scholarly output in the Information Systems field. Counting publications in ranked journals is the traditional method of evaluating scholarly output. This method has been criticized for its lack of theoretical basis and its performative effects, but it has never been empirically studied to determine how effectively it classifies scholarly output by quality. This study fills that gap by testing a set of four published journal lists to examine their ability to discern the quality of papers. We find that the journal lists substantially misclassify articles by quality and are therefore problematic as evaluative mechanisms for scholarly ability. This study argues that other methods, such as evaluation of a scholar’s capital (Cuellar, Takeda, Vidgen, & Truex III, 2016), should be pursued instead.

Original language: American English
Journal: Americas’ Conference on Information Systems Proceedings
State: Published - Nov 8 2016

Disciplines

  • Business Administration, Management, and Operations
  • Management Information Systems
