An appropriate measurement of journal quality is essential for accreditation, funding allocation, hiring, merit pay, tenure, and promotion decisions in academia. The current best practice for rating journal quality is to combine journal bibliometrics with expert assessment. For example, the Association of Business Schools (ABS) Journal Guide, which is generated by this method, is widely used by business schools. However, different journal bibliometrics calculated over the citation network sometimes yield inconsistent rankings, and it is hard for domain experts to reconcile the conflicting information. Consequently, if the ABS Scientific Committee members are not familiar with a journal and its bibliometrics conflict, that journal is difficult to rate and will not be included in the ABS journal list. To address this issue and maintain a comprehensive list of journals in the management information systems (MIS) field, this paper proposes a data-driven method that predicts the ABS rating of any given MIS journal from six popular bibliometrics. To the best of our knowledge, our method is the first of its kind for predicting ABS ratings; it can serve as a more reliable rating reference and can more easily generate ratings for a comprehensive list of MIS journals. We conduct comprehensive experiments to evaluate the rating performance of our method from four perspectives: new journals, top journals, interdisciplinary journals, and the identification of journals that ABS overrates or underrates. The results show that our method provides reliable estimated ABS ratings for most MIS journals, with few exceptions. Since our method is not perfect, we encourage incorporating expert knowledge to correct the estimated ABS ratings. However, such correction must satisfy the following two constraints.
First, domain experts must have sufficient evidence to justify the correction. Second, a correction may raise or lower the estimated rating by at most 1.
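The two correction constraints above can be expressed as a simple guard. The sketch below is a minimal illustration, not part of the paper's method: the function name is hypothetical, and it assumes an integer ABS scale of 1–4 (the ABS guide also uses a 4* category, which this sketch ignores).

```python
def apply_expert_correction(predicted_rating: int, correction: int,
                            min_rating: int = 1, max_rating: int = 4) -> int:
    """Apply an expert correction to a predicted ABS rating.

    Hypothetical helper: the +/-1 limit comes from the paper's two
    constraints; the function name and the 1-4 rating bounds are
    illustrative assumptions.
    """
    # Constraint 2: corrections beyond +/-1 are not allowed.
    if abs(correction) > 1:
        raise ValueError("Expert corrections are limited to +/-1.")
    corrected = predicted_rating + correction
    # Keep the corrected rating inside the assumed rating scale.
    return max(min_rating, min(corrected, max_rating))
```

For example, an expert with sufficient evidence could raise a predicted rating of 2 to 3, but could not raise it directly to 4.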
INTRODUCTION AND RELATED WORK
High-quality publications advance knowledge and have academic impact. Therefore, they are evaluated and acknowledged in many scenarios, such as promotion and tenure decisions for scholars and accreditation for academic institutions. However, it is hard and time-consuming to evaluate the quality and impact of each individual publication. In practice, it is more convenient to use the quality of the journal in which a paper is published as an indicator of the paper's quality. Although not every paper in a high-quality journal is highly influential, journal quality is a reliable indicator of paper quality: most papers in such a journal must be highly influential, or the journal could not be high-quality in the first place. The problem of measuring paper quality can therefore be reduced to the problem of measuring journal quality. However, measuring journal quality is inherently contentious because it involves many different dimensions. For example, if Journal A focuses more on practical applications whereas Journal B focuses more on advancing theory, it is hard to conclude that Journal A is better than B or vice versa. In addition, due to the interdisciplinary nature of the management information systems (MIS) field, creating a measurement that is both accurate and comprehensive is even more challenging. As more accreditation agencies, funding agencies, and universities recognize the importance of interdisciplinary collaboration for solving complex problems that no single discipline can address alone (Reich & Reich, 2006), an up-to-date, accurate, and comprehensive measurement system is critically important, especially for new, high-quality journals targeting emerging and interdisciplinary topics. Otherwise, a problematic measurement system will force researchers to submit their high-quality papers to less-fitting journals and discourage them from doing interdisciplinary research.