Abstract
In recent years, Internet of Things (IoT) technology has been adopted in many application areas such as healthcare, video surveillance, and transportation. The massive adoption and growth of IoT in these areas generate enormous amounts of data. For example, IoT devices such as cameras produce huge numbers of images when used in hospital surveillance scenarios. Here, face recognition is an important element that can be used for securing hospital facilities, emotion detection and sentiment analysis of patients, detecting patient fraud, and hospital traffic pattern analysis. Automatic and intelligent face recognition systems achieve high accuracy in controlled environments, but their accuracy drops in uncontrolled ones. In addition, many applications such as smart healthcare require these systems to operate in real time. This paper proposes a tree-based deep model for automatic face recognition in a cloud environment. The proposed model is computationally inexpensive without compromising accuracy. In the model, an input volume is split into several volumes, and a tree is constructed for each volume. A tree is defined by its branching factor and height. Each branch is represented by a residual function consisting of a convolutional layer, batch normalization, and a non-linear activation function. The proposed model is evaluated on several publicly available databases, and its performance is compared with state-of-the-art deep models for face recognition. The experimental results show that the proposed model achieves accuracies of 98.65%, 99.19%, and 95.84% on the FEI, ORL, and LFW databases, respectively.
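To make the block structure concrete, the following PyTorch sketch builds one residual branch (convolution, batch normalization, non-linearity) and a tree of such branches parameterized by branching factor and height. It is only a minimal illustration of the description above: the channel-wise split, the sum/concatenation merge rules, and all hyperparameter names are assumptions, not the paper's exact definition.

```python
import torch
import torch.nn as nn

class ResidualBranch(nn.Module):
    """One tree branch: a residual function built from a convolutional
    layer, batch normalization, and a non-linearity (ReLU assumed)."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3,
                              padding=1, bias=False)
        self.bn = nn.BatchNorm2d(channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return x + self.act(self.bn(self.conv(x)))

class TreeModule(nn.Module):
    """A tree over one split of the input volume, defined by its branching
    factor and height: at each level, the node fans out into `branching`
    residual branches whose outputs are merged (summation assumed)."""
    def __init__(self, channels, branching=2, height=2):
        super().__init__()
        self.levels = nn.ModuleList(
            nn.ModuleList(ResidualBranch(channels) for _ in range(branching))
            for _ in range(height))

    def forward(self, x):
        for branches in self.levels:
            x = sum(b(x) for b in branches)
        return x

# Splitting the input volume: one tree per channel split, merged by
# concatenation (an illustrative choice).
x = torch.randn(1, 64, 56, 56)
splits = torch.chunk(x, chunks=4, dim=1)            # four 16-channel volumes
trees = nn.ModuleList(TreeModule(16) for _ in splits)
y = torch.cat([t(s) for t, s in zip(trees, splits)], dim=1)
```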
Introduction
The introduction of many Internet of Things (IoT) devices and smart body sensors has increased the volume of data significantly in recent years. In most cases the data are heterogeneous and sparse, and processing such big data is a matter of concern for real-time applications [1–3]. Consider a scenario in which a person is to be recognized in an airport equipped with many camera sensors. These cameras continuously capture images of the many objects, including humans, in their fields of view. This huge volume of image data should be processed in a meaningful way in a cloud environment so that a specified person can easily be recognized.

Face recognition is one of the oldest yet still dynamic topics of research, and it is essential for security and biometric applications. Early face recognition systems relied on hand-crafted features and traditional classifiers. Hand-crafted features include local binary patterns (LBP), Weber local descriptors (WLD), principal component analysis (PCA), and histograms of oriented gradients (HOG); traditional classifiers include support vector machines (SVM), linear discriminant analysis (LDA), and minimum distance-based classifiers. These features and classifiers work well in controlled environments, where faces are mostly frontal, show neutral expressions, and exhibit little variation in illumination. However, in many applications, such as those related to surveillance, face images may be occluded, non-frontal, of low resolution, and subject to high variation in illumination. For such applications, traditional face recognition systems may not work properly.
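As a point of reference for the kind of hand-crafted pipeline described above, the sketch below trains a HOG-plus-linear-SVM baseline on the LFW dataset using scikit-image and scikit-learn. The specific parameter values (orientations, cell and block sizes, SVM settings) are illustrative choices only, not those used in any work compared here.

```python
# A minimal hand-crafted baseline: HOG features + linear SVM on LFW.
import numpy as np
from sklearn.datasets import fetch_lfw_people
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from skimage.feature import hog

# Load a small subset of LFW (grayscale face crops, downloaded on first use).
faces = fetch_lfw_people(min_faces_per_person=70, resize=0.4)

# Describe each face with a HOG descriptor (illustrative parameters).
X = np.array([hog(img, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2)) for img in faces.images])

X_train, X_test, y_train, y_test = train_test_split(
    X, faces.target, test_size=0.25, random_state=0, stratify=faces.target)

# Train and evaluate a linear SVM on the HOG descriptors.
clf = SVC(kernel="linear", C=1.0).fit(X_train, y_train)
print(f"HOG+SVM accuracy: {clf.score(X_test, y_test):.3f}")
```

Such a baseline behaves reasonably on mostly frontal, well-lit faces, which is exactly the controlled setting discussed above; its accuracy degrades under occlusion, pose, and illumination changes, motivating the deep models considered in this paper.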