Inter-class angular margin loss for face recognition

Article title: Inter-class angular margin loss for face recognition
Journal/Conference: Signal Processing: Image Communication
Related field of study: Computer Engineering
Related specializations: Artificial Intelligence, Software Engineering, Algorithms and Computation
Keywords: Face recognition, IAM loss, Inter-class variance, Intra-class distance, Softmax loss
Article type: Research Article
Indexing: Scopus, Master Journals List, JCR
DOI: https://doi.org/10.1016/j.image.2019.115636
Affiliation: Shenzhen Key Lab. of Info. Sci&Tech / Shenzhen Engineering Lab. of IS&DCP, Department of Electronic Engineering / Graduate School at Shenzhen, Tsinghua University, China
English article length: 6 pages
Publisher: Elsevier
Presentation type: Journal
Article category: ISI
Publication year: 2020
Impact Factor: 3.809 (2019)
H-index: 72 (2020)
SJR: 0.562 (2019)
ISSN: 0923-5965
Quartile: Q2 (2019)
English article format: PDF
Translation status: Not translated
English article price: Free
Base article (suitable as a thesis basis): No
Conceptual model: No
Questionnaire: No
Variables: No
Product code: E14827
References: cited in the text and listed at the end of the article
Table of contents (English)

Abstract

1- Introduction

2- Related work

3- The proposed method

4- Discussion

5- Experiments

6- Conclusions

References

Excerpt from the article (English)

Abstract

Increasing inter-class variance and shrinking intra-class distance are two main concerns in face recognition. In this paper, we propose a new loss function, termed the inter-class angular margin (IAM) loss, which aims to enlarge the inter-class variance. Instead of restricting the inter-class margin to a constant, as existing methods do, our IAM loss adaptively penalizes smaller inter-class angles more heavily and thereby enlarges the angular margin between classes, which can significantly enhance the discriminability of facial features. The IAM loss can readily be introduced as a regularization term for the widely used Softmax loss and its recent variants to further improve their performance. We also analyze and verify the appropriate range of the regularization hyper-parameter from the perspective of backpropagation. For illustration, our model is trained on CASIA-WebFace and tested on the LFW, CFP, YTF and MegaFace datasets; the experimental results show that the IAM loss is quite effective in improving state-of-the-art algorithms.
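As a rough illustration of how such a regularizer could be attached to the Softmax loss, the PyTorch sketch below adds an inter-class angular penalty that grows as the angles between class-weight vectors shrink. This is a minimal sketch under our own assumptions, not the paper's formulation: the function name iam_style_loss, the squared-cosine penalty, and the hyper-parameter lam are illustrative.

import torch
import torch.nn.functional as F

def iam_style_loss(logits, labels, weight, lam=0.1):
    # logits: (B, C) classifier outputs; labels: (B,) class indices;
    # weight: (C, D) last-layer class-weight vectors; lam: the
    # regularization hyper-parameter the abstract refers to.
    ce = F.cross_entropy(logits, labels)  # standard Softmax (cross-entropy) loss

    w = F.normalize(weight, dim=1)        # unit-norm class-weight vectors
    cos = w @ w.t()                       # pairwise cosine similarities, (C, C)
    mask = ~torch.eye(cos.size(0), dtype=torch.bool, device=cos.device)
    off_diag = cos[mask]                  # similarities between distinct classes

    # Cosines near 1 correspond to small inter-class angles; squaring the
    # positive part penalizes smaller angles disproportionately more,
    # mimicking the adaptive penalty described in the abstract.
    penalty = off_diag.clamp(min=0).pow(2).mean()
    return ce + lam * penalty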

Introduction

Convolutional neural networks (CNNs) are widely used for face recognition [1–15], and recent research has focused on increasing the inter-class variance and reducing the intra-class distance. A typical pipeline for training a network on WebFace is shown in Fig. 1: the network is trained with the loss function in the last layer, and the representation in the penultimate layer is used as the facial feature. The recent efforts and achievements in increasing the inter-class variance and reducing the intra-class distance fall into two categories. The first optimizes the Euclidean distance between facial features, mainly through regularization. For example, the Triplet loss [6] makes the intra-class Euclidean distance of features shorter than the inter-class distance. Wen et al. [16] reduce the intra-class Euclidean distance by adding an extra penalty. The Marginal loss of [17] and our past work [18] limit both intra-class and inter-class Euclidean distances to improve recognition accuracy. The Range loss [19] handles long-tailed data by equalizing the intra-class Euclidean distance and increasing the inter-class Euclidean distance. Except for the Triplet loss, all of the above methods add a regularization term to the Softmax loss, generally adjusted via a regularization hyper-parameter.
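For concreteness, here is a minimal PyTorch sketch of the Triplet-loss idea cited above [6]: it pushes the anchor-positive (intra-class) distance below the anchor-negative (inter-class) distance by at least a margin. The embedding shapes and the margin value are illustrative assumptions, not values from the paper.

import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    # anchor, positive, negative: (B, D) embeddings; anchor and positive
    # share an identity, anchor and negative do not.
    d_ap = (anchor - positive).pow(2).sum(dim=1)  # intra-class squared distance
    d_an = (anchor - negative).pow(2).sum(dim=1)  # inter-class squared distance
    # Hinge: the loss is zero once the inter-class distance exceeds the
    # intra-class distance by at least the margin.
    return F.relu(d_ap - d_an + margin).mean()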