Abstract
This study examines the effectiveness of the traditional method of counting articles in ranked venues (CARV) for evaluating scholarly output. CARV has been criticized for its lack of theoretical basis and its performative effects, but it has never been empirically tested to determine whether it correctly classifies scholarly output by quality. This study fills that gap by testing six published journal lists for their ability to discern the quality of papers. We examine the consistency of quality across journals within each stratum, the ability of the lists to discriminate levels of quality between strata, and the ability of the method to correctly classify papers into strata based on quality. We find that the journal lists substantially misclassify articles as to quality and are therefore problematic as evaluative mechanisms for scholarly ability.
| Field | Value |
|---|---|
| Original language | English |
| Pages (from-to) | 622-635 |
| Number of pages | 14 |
| Journal | Journal of Computer Information Systems |
| Volume | 64 |
| Issue number | 5 |
| DOIs | |
| State | Published - 2024 |
| Externally published | Yes |
Keywords
- CARV
- Journal ranking effectiveness
- evaluation of research
- quality