Cost-sensitive adaptive boosting


Persian article title: Example-dependent cost-sensitive adaptive boosting (AdaBoost)
English article title: Example-dependent cost-sensitive adaptive boosting
Journal/Conference: Expert Systems with Applications
Related fields of study: Computer Engineering
Related specializations: Algorithms and Computation, Artificial Intelligence
Persian keywords: cost-sensitive learning, example-dependent cost-sensitive classification, classifier calibration, adaptive boosting, fraud detection
English keywords: Cost-sensitive learning, Example-dependent cost-sensitive classifier, Classifier calibration, Adaptive boosting, AdaBoost, Fraud detection
Type of article: Research Article
DOI: https://doi.org/10.1016/j.eswa.2019.06.009
University: National Research University Higher School of Economics, 20 Myasnitskaya, Moscow 101000, Russian Federation
Number of pages (English article): 12
Publisher: Elsevier
Presentation type: Journal
Article category: ISI
Publication year: 2019
Impact factor: 5.891 (2018)
H-index: 162 (2019)
SJR: 1.190 (2018)
ISSN: 0957-4174
Quartile: Q1 (2018)
English article format: PDF
Translation status: not translated
English article price: free
Is this a base article: No
Does the article include a conceptual model: No
Does the article include a questionnaire: No
Does the article include variables: No
Product code: E13554
References: in-text citations and a reference list at the end of the article
Table of contents (English)

Abstract

1. Introduction

2. Related works review

3. Example-dependent cost-sensitive AdaBoost algorithms

4. Experiments

5. Discussion

6. Conclusion and future works

Conflicts of interest

CRediT authorship contribution statement

References

Excerpt from the article (English)

Abstract

Intelligent computer systems aim to help humans make decisions. Many practical decision-making problems are classification problems by nature, but standard classification algorithms are often not applicable since they assume a balanced distribution of classes and constant misclassification costs. From this point of view, algorithms that consider the cost of decisions are essential, since they are more consistent with the requirements of real life. These algorithms generate decisions that directly optimize parameters valuable for business, for example, cost savings. Despite the practical value of cost-sensitive algorithms, only a small number of works study this problem, concentrating mainly on the case where the cost of a classifier error is constant and does not depend on a specific example. However, many real-world classification tasks are example-dependent cost-sensitive (ECS), where the costs of misclassification vary between examples, not only between classes. Existing methods of ECS learning are limited to modifications of the simplest machine learning models (naive Bayes, logistic regression, decision trees). These models produce promising results, but further improvement in performance is needed, and it can be achieved by using gradient-based ensemble methods. To bridge this gap, we present an ECS generalization of AdaBoost. We study three models that differ in how cost is introduced into the loss function: inside the exponent, outside the exponent, and both inside and outside the exponent. The results of experiments on three synthetic and two real datasets (bank marketing and insurance fraud) show that example-dependent cost-sensitive modifications of AdaBoost outperform other known models.
Empirical results also show that the critical factors influencing the choice of model include not only the distribution of features, which is typical for cost-insensitive and class-dependent cost-sensitive problems, but also the distribution of costs. Next, since the outputs of AdaBoost are not well-calibrated posterior probabilities, we examine three approaches to calibrating classifier scores: Platt scaling, isotonic regression, and ROC modification. The results show that calibration not only significantly improves the performance of specific ECS models but also unlocks better capabilities of the original AdaBoost. The obtained results provide new insight into the behavior of cost-sensitive models from a theoretical point of view and prove that the presented approach can significantly improve the practical design of intelligent systems.
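The abstract's core idea, introducing a per-example cost into AdaBoost's exponential weight update, can be illustrated with a minimal sketch. The variant below places the cost inside the exponent, one of the three placements the abstract names; the function names, the decision-stump base learner, and the exact update rule are illustrative assumptions, not the paper's precise formulation:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def ecs_adaboost_fit(X, y, costs, n_rounds=50):
    """Toy ECS AdaBoost: per-example cost scales the exponential update.

    y must be in {-1, +1}; costs[i] >= 0 is the misclassification cost of
    example i. Costly examples regain weight faster when misclassified.
    """
    n = len(y)
    w = np.ones(n) / n                      # uniform initial weights
    stumps, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        miss = pred != y
        err = w[miss].sum()                 # weighted training error
        if err <= 0:                        # perfect stump: keep it and stop
            stumps.append(stump)
            alphas.append(1.0)
            break
        if err >= 0.5:                      # no better than chance: stop
            break
        alpha = 0.5 * np.log((1 - err) / err)
        # cost inside the exponent: multiply the margin term by costs[i]
        w = w * np.exp(alpha * costs * np.where(miss, 1.0, -1.0))
        w = w / w.sum()
        stumps.append(stump)
        alphas.append(alpha)
    return stumps, alphas

def ecs_adaboost_predict(stumps, alphas, X):
    agg = sum(a * s.predict(X) for s, a in zip(stumps, alphas))
    return np.sign(agg)
```

With `costs` set to all ones, the update reduces to ordinary AdaBoost, which is one way to sanity-check such a modification.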
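Since raw AdaBoost scores are not well-calibrated posterior probabilities, the abstract compares Platt scaling, isotonic regression, and ROC modification. A hedged sketch of the first of these using scikit-learn, which provides sigmoid (Platt) and isotonic calibration but no built-in ROC-based calibrator; the dataset and hyperparameters here are placeholders:

```python
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the paper's data (bank marketing / insurance fraud).
X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

base = AdaBoostClassifier(n_estimators=50, random_state=0)
# Platt scaling: fit a sigmoid to the boosting scores via cross-validation.
platt = CalibratedClassifierCV(base, method="sigmoid", cv=3)
platt.fit(X_tr, y_tr)
proba = platt.predict_proba(X_te)[:, 1]   # calibrated posterior estimates
```

Swapping `method="sigmoid"` for `method="isotonic"` gives the second calibration approach the abstract mentions; isotonic regression is non-parametric and typically needs more data to avoid overfitting.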

Introduction

In practice, decision making often comes down to a classification problem. The responsible person must determine to which known class a particular object belongs, and the assigned class label defines the possible courses of action. For example, a loan manager first analyzes the borrower's data to determine the level of risk (for example, low, medium, or high) and then selects the terms of the loan agreement based on the assigned risk level. Similar tasks arise in all other areas of human activity. Since the number of factors influencing a decision can be huge and their relationships complicated, computer methods are widely used to solve the classification problem. But practitioners often face problems that cannot be solved by standard algorithms, since these methods assume a balanced class distribution and equal misclassification costs (He & Garcia, 2009). In real life, the most typical situation is that the number of examples of one class is much smaller (10 or more times) than the number of instances of another. Moreover, the minor class includes those objects whose identification is of particular interest (Liu & Zhou, 2006; Zadrozny & Elkan, 2001a): insurance fraudsters (Abdallah, Maarof, & Zainal, 2016), dishonest borrowers (Abellan & Castellano, 2017), fraudulent credit card transactions (Sahin, Bulkan, & Duman, 2013), patients with a specific diagnosis (Sun, Kamel, Wong, & Wang, 2007), etc. Besides, it is evident that potential financial losses depend on the type of classifier error: approving a loan to a fraudster leads to higher losses than denying one to a bona fide borrower.
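The loan example above implies an example-dependent cost matrix: misclassifying each borrower carries a different monetary loss. A small sketch of how the total cost of a classifier's decisions could be evaluated under such a matrix; the loan amounts and the 5% interest rate are hypothetical numbers chosen only for illustration:

```python
import numpy as np

# Hypothetical per-example costs for a fraud classifier (1 = fraudster):
# a false negative (approving a fraudster) costs the full loan amount,
# a false positive (denying a bona fide borrower) costs the lost interest.
loan_amount = np.array([1000.0, 5000.0, 2000.0, 8000.0])
lost_interest = 0.05 * loan_amount

y_true = np.array([1, 0, 1, 0])   # ground truth
y_pred = np.array([0, 0, 1, 1])   # classifier decisions (1 = deny loan)

fn_cost = np.where((y_true == 1) & (y_pred == 0), loan_amount, 0.0)
fp_cost = np.where((y_true == 0) & (y_pred == 1), lost_interest, 0.0)
total_cost = (fn_cost + fp_cost).sum()
# one missed fraudster (1000) + one wrongly denied borrower (0.05 * 8000)
```

Minimizing such a total cost, rather than the plain error rate, is exactly what distinguishes example-dependent cost-sensitive learning from standard classification.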