Abstract
1. Introduction
2. Related works review
3. Example-dependent cost-sensitive AdaBoost algorithms
4. Experiments
5. Discussion
6. Conclusion and future works
Conflicts of interest
CRediT authorship contribution statement
References
Abstract
Intelligent computer systems aim to help humans make decisions. Many practical decision-making problems are classification problems by nature, but standard classification algorithms are often not applicable since they assume a balanced distribution of classes and constant misclassification costs. From this point of view, algorithms that consider the cost of decisions are essential, since they are more consistent with the requirements of real life. These algorithms generate decisions that directly optimize parameters valuable for business, for example, cost savings. Despite the practical value of cost-sensitive algorithms, only a few works study this problem, concentrating mainly on the case where the cost of a classifier error is constant and does not depend on a specific example. However, many real-world classification tasks are example-dependent cost-sensitive (ECS), where the costs of misclassification vary between examples, not only between classes. Existing methods of ECS learning include only modifications of the simplest machine learning models (naive Bayes, logistic regression, decision tree). These models produce promising results, but there is room for further improvement in performance, which can be achieved by using gradient-based ensemble methods. To bridge this gap, we present an ECS generalization of AdaBoost. We study three models that differ in how the cost is introduced into the loss function: inside the exponent, outside the exponent, and both inside and outside the exponent. The results of experiments on three synthetic and two real datasets (bank marketing and insurance fraud) show that example-dependent cost-sensitive modifications of AdaBoost outperform other known models.
Empirical results also show that the critical factors influencing the choice of the model include not only the distribution of features, which is typical for cost-insensitive and class-dependent cost-sensitive problems, but also the distribution of costs. Next, since the outputs of AdaBoost are not well-calibrated posterior probabilities, we examine three approaches to calibrating classifier scores: Platt scaling, isotonic regression, and ROC modification. The results show that calibration not only significantly improves the performance of specific ECS models but also makes better use of the capabilities of the original AdaBoost. The obtained results provide new insight into the behavior of cost-sensitive models from a theoretical point of view and show that the presented approach can significantly improve the practical design of intelligent systems.
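The three ways of introducing an example-dependent cost into the exponential loss, as described above, can be sketched as follows. The notation here is illustrative (a per-example cost $c_i$, label $y_i \in \{-1, +1\}$, and ensemble score $f(x_i)$); the paper's exact formulation may differ.

```latex
% Standard AdaBoost exponential loss (cost-insensitive)
L(y_i, f(x_i)) = \exp\bigl(-y_i f(x_i)\bigr)

% Variant 1: cost inside the exponent
L_1(y_i, f(x_i)) = \exp\bigl(-c_i \, y_i f(x_i)\bigr)

% Variant 2: cost outside the exponent
L_2(y_i, f(x_i)) = c_i \exp\bigl(-y_i f(x_i)\bigr)

% Variant 3: cost both inside and outside the exponent
L_3(y_i, f(x_i)) = c_i \exp\bigl(-c_i \, y_i f(x_i)\bigr)
```

Each variant induces a different example-weighting scheme in the boosting rounds: costs inside the exponent rescale the margin, while costs outside the exponent rescale the loss itself.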
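Two of the score-calibration approaches mentioned above, Platt scaling and isotonic regression, are available in scikit-learn. The following is a minimal sketch of calibrating AdaBoost scores on an imbalanced synthetic dataset; the dataset and all parameter values are illustrative, not taken from the paper's experiments.

```python
# Sketch: calibrating AdaBoost scores with Platt scaling ("sigmoid")
# and isotonic regression, using scikit-learn. Dataset and parameters
# are illustrative assumptions, not the paper's experimental setup.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.calibration import CalibratedClassifierCV
from sklearn.model_selection import train_test_split

# Imbalanced binary problem (~10% positives), echoing the paper's setting
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

base = AdaBoostClassifier(n_estimators=100, random_state=0)

# Platt scaling fits a sigmoid to the raw scores; isotonic regression
# fits a monotone step function (more flexible, but needs more data).
probas = {}
for method in ("sigmoid", "isotonic"):
    calibrated = CalibratedClassifierCV(base, method=method, cv=3)
    calibrated.fit(X_tr, y_tr)
    probas[method] = calibrated.predict_proba(X_te)[:, 1]  # calibrated P(y=1|x)
```

ROC modification, the third approach studied in the paper, has no off-the-shelf scikit-learn implementation and is omitted here.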
Introduction
In practice, decision making often comes down to a classification problem. The responsible person must determine to which known class a particular object belongs, and the assigned class label defines possible scenarios of action. For example, a loan manager first analyzes the borrower's data to determine the level of risk (for example, low, medium, or high) and then selects the terms of the loan agreement based on the assigned risk level. Similar tasks arise in all other areas of human activity. Since the number of factors influencing a decision can be huge and their relationships complicated, computer methods are widely used to solve the classification problem. But practitioners often face problems that cannot be solved by standard algorithms, since these methods assume a balanced class distribution and equal misclassification costs (He & Garcia, 2009). In real life, the most typical situation is when the number of examples of one class is much smaller (10 or more times) than the number of instances of another. Moreover, the minority class includes those objects whose identification is of particular interest (Liu & Zhou, 2006; Zadrozny & Elkan, 2001a): insurance fraudsters (Abdallah, Maarof, & Zainal, 2016), dishonest borrowers (Abellan & Castellano, 2017), fraudulent credit card transactions (Sahin, Bulkan, & Duman, 2013), patients with a specific diagnosis (Sun, Kamel, Wong, & Wang, 2007), etc. Besides, it is evident that potential financial losses depend on the type of classifier error. Approving a loan to a fraudster leads to higher losses than denying one to a bona fide borrower.