Abstract
Increasing inter-class variance and shrinking intra-class distance are two central goals in face recognition. In this paper, we propose a new loss function, termed the inter-class angular margin (IAM) loss, that aims to enlarge the inter-class variance. Unlike existing methods that restrict the inter-class margin to a constant, our IAM loss adaptively penalizes smaller inter-class angles more heavily, thereby enlarging the angular margin between classes and significantly enhancing the discriminative power of facial features. The IAM loss can be readily introduced as a regularization term for the widely used Softmax loss and its recent variants to further improve their performance. We also analyze and verify the appropriate range of the regularization hyper-parameter from the perspective of backpropagation. For illustrative purposes, our model is trained on CASIA-WebFace and tested on the LFW, CFP, YTF and MegaFace datasets; the experimental results show that the IAM loss is effective in improving state-of-the-art algorithms.
Introduction
Convolutional neural networks (CNNs) are widely used for face recognition [1–15], and recent research has focused on increasing the inter-class variance and reducing the intra-class distance. A typical pipeline, in which a network is trained on WebFace, is shown in Fig. 1: the network is trained with the loss function in the last layer, and the representation in the penultimate layer is used as the facial feature. The recent efforts in increasing the inter-class variance and reducing the intra-class distance fall into two categories. The first is to optimize the Euclidean distance between facial features, mainly through regularization. For example, the Triplet loss [6] makes the intra-class Euclidean distance of features shorter than the inter-class distance. Wen et al. [16] reduce the intra-class Euclidean distance by adding an extra penalty. The Marginal loss of [17] and our past work [18] limit both intra-class and inter-class Euclidean distances to improve recognition accuracy. The Range loss [19] overcomes the problem of long-tailed data by equalizing the intra-class Euclidean distance and increasing the inter-class Euclidean distance. Except for the Triplet loss, all of the above methods add a regularization term to the Softmax loss, which is typically weighted by a regularization hyper-parameter.
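The shared pattern of these regularization-based methods is a Softmax loss plus a weighted penalty term. The following is a minimal sketch of this pattern in PyTorch; the function name, the particular inter-class angular penalty on the last-layer class weight vectors, and the hyper-parameter lam are illustrative assumptions for exposition, not the IAM formulation defined later in this paper.

    # Sketch only: Softmax loss plus a hypothetical inter-class angular
    # regularizer, weighted by the hyper-parameter `lam`.
    import torch
    import torch.nn.functional as F

    def softmax_with_angular_regularizer(logits, labels, weight, lam=0.1):
        """Softmax (cross-entropy) loss plus a penalty on small inter-class angles.

        `logits` are the last-layer outputs (features @ weight.t()), and
        `weight` is the (num_classes, feat_dim) classification matrix whose
        rows act as class centers in angular space.
        """
        ce = F.cross_entropy(logits, labels)           # standard Softmax loss
        w = F.normalize(weight, dim=1)                 # unit-norm class weight vectors
        cos = w @ w.t()                                # pairwise cosine similarities
        off_diag = cos - torch.eye(cos.size(0), device=cos.device)
        # Large positive cosines correspond to small inter-class angles;
        # penalizing them pushes class weight vectors apart.
        reg = off_diag.clamp(min=0).pow(2).mean()
        return ce + lam * reg

Here lam plays the role of the regularization hyper-parameter mentioned above, trading off the classification term against the separation penalty.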