Abstract
1. Introduction
2. Related works
3. Theory and methods
4. Model development stage
5. Prototype development stage
6. Results and discussion
7. Conclusions
CRediT authorship contribution statement
Declaration of competing interest
Data availability
References
Abstract
Face recognition systems that do not implement liveness detection are susceptible to face spoofing attacks. This vulnerability implies that an attacker could disguise themselves as another individual, and the system would falsely record that individual's attendance. To prevent these attacks, a liveness detection step can be implemented before recognizing subjects. Face recognition-based attendance system devices are typically installed at the entrance to an event or space, so a portable device that can be easily relocated is practical and efficient. Hence, face recognition systems should be lightweight enough to run on portable devices with limited computational power. Implementing liveness detection increases the system's processing time. Therefore, this study aims to develop a lightweight liveness detection method that can run on a Raspberry Pi. To achieve this, several pre-trained models were evaluated and MobileNetV2 was chosen based on the results. The MobileNetV2 model was then trained using transfer learning. The proposed attendance system achieved an average processing time below 0.6 s and 96 % accuracy for live subjects, 79 % accuracy for level A spoof attacks, 83.7 % accuracy for level B spoof attacks, and 70 % accuracy for level C spoof attacks.
Introduction
There are two types of attendance systems: manual and automated [1]. With a manual attendance system, the attendance of each individual present at an event is recorded by someone in charge. In an automated attendance system, this tedious manual recording is replaced with an automated process, for example by using an authentication method to verify and record each individual's attendance. Fingerprint, palm-vein, face, and iris recognition are often used because of their low false rejection rates and high acceptance rates [2]. Face recognition is preferred over other biometric authentication methods because of its inherent benefits, such as non-intrusive interaction and accessibility [1].
Biometric authentication methods are robust and convenient because they verify the identity of a subject using physiological and/or behavioral characteristics rather than the subject's knowledge or possessions. Password-based authentication relies on knowledge, so anyone who learns the secret can authenticate as someone else. Ownership-based authentication relies on possession of an item, so anyone who holds the object can authenticate as someone else. While password-based and ownership-based authentication have their own security concerns, biometric authentication such as face recognition has security concerns of its own.
Face recognition, one of the biometric identification techniques, analyzes an individual's distinctive facial features to confirm their identity. In most cases, this involves capturing an image or video of the person's face, from which algorithms extract and evaluate particular facial features, such as the distance between the eyes, the curves of the nose, and the jawline. These features are then compared against a database of known faces to determine whether there is a match. Face recognition systems are vulnerable to face spoofing attacks, where an attacker tries to gain illegitimate access by presenting a fake face of an authorized individual [2].
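To make the matching step concrete, the following is a minimal sketch that assumes face images have already been converted to fixed-length embeddings by some feature extractor; the identify function, the enrolled database structure, and the distance threshold are illustrative assumptions, not the implementation used in this paper.

```python
import numpy as np

# Illustrative sketch of the matching step only: compare a probe face
# embedding against enrolled embeddings using a Euclidean distance threshold.
# The embedding extractor, enrolled database, and threshold value are
# hypothetical placeholders, not the authors' implementation.
def identify(probe_embedding, enrolled_embeddings, threshold=1.0):
    """Return the closest enrolled identity, or None if no match is close enough."""
    best_name, best_dist = None, float("inf")
    for name, embedding in enrolled_embeddings.items():
        dist = np.linalg.norm(probe_embedding - embedding)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < threshold else None
```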
Conclusions
Face recognition systems, especially those that do not implement liveness detection, are vulnerable to face spoofing attacks, in which attackers try to gain access by presenting a fake face of another individual. In face recognition-based attendance systems, liveness detection prevents attendance from being recorded for individuals who are not actually present. Attendance systems typically run on a portable device, which is practical and efficient because the device can be relocated to any appropriate location as needed. Because portable devices have limited computing power, implementing liveness detection is a challenge: the liveness detection method must be lightweight enough for the attendance system to still run on such devices. Adding a liveness detection step also affects the processing time for each face presented to the system, as illustrated in the sketch below.
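The following is a minimal sketch of the pipeline ordering discussed above, with hypothetical detect_liveness and recognize callables standing in for the actual models: recognition runs only for faces judged live, and the combined step is timed. It illustrates the design constraint, not the authors' code.

```python
import time

# Illustrative pipeline ordering only: a liveness check gates recognition,
# and the combined step is timed. `detect_liveness` and `recognize` are
# hypothetical callables standing in for the actual models.
def process_face(face_image, detect_liveness, recognize):
    start = time.perf_counter()
    identity = recognize(face_image) if detect_liveness(face_image) else None
    elapsed = time.perf_counter() - start
    return identity, elapsed
```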
In this study, experiments were performed with several pre-trained CNN models, originally trained for face recognition and object recognition. Transfer learning with several variations of new layers was applied to MobileNetV2, FaceNet, and MobileFaceNet, and variation C obtained the best results. Although MobileFaceNet has a lower average processing time, MobileNetV2 was chosen because the difference in average processing time is small, it achieves better accuracy on the CelebA-Spoof dataset, which covers a larger range of variations, and it maintains decent accuracy on the NUAA dataset.
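As a minimal sketch of the transfer-learning setup described here, assuming a TensorFlow/Keras implementation and an ImageNet-pretrained MobileNetV2 base, the example below freezes the base and adds a small new binary head for the live/spoof decision. The exact layers of variation C are not given in this excerpt, so the head shown is illustrative only.

```python
import tensorflow as tf

# Illustrative transfer-learning setup (not the exact "variation C" head):
# a MobileNetV2 base pre-trained on ImageNet is frozen and a small new
# binary classification head is trained for the live/spoof decision.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # only the new layers are trained

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
```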