Abstract
Face inpainting aims to repair images damaged by occlusion or covering. In recent years, deep-learning-based approaches have shown promising results for the challenging task of image inpainting. However, they are still limited in reconstructing reasonable structures and often produce over-smoothed and/or blurred results. The distorted structures or blurred textures are inconsistent with the surrounding areas and require further post-processing to blend the results. In this paper, we present a novel generative-model-based approach consisting of two nested Generative Adversarial Networks (GANs): a sub-confrontation GAN inside the generator and a parent-confrontation GAN. The sub-confrontation GAN, embedded in the image generator of the parent-confrontation GAN, locates the missing area and acts as a prior constraint that reduces mode collapse. To avoid generating vague details, a novel residual structure is designed in the sub-confrontation GAN to deliver richer original-image information to the deeper layers. The parent-confrontation GAN includes an image generation part and a discrimination part. The discrimination part contains a global and a local discriminator, which helps reconstruct the overall coherence of the repaired image while preserving local details. Experiments are conducted on the publicly available CelebA dataset, and the results show that our method outperforms current state-of-the-art techniques both quantitatively and qualitatively.
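To make the nested layout concrete, below is a minimal PyTorch sketch of a generator containing a mask-predicting sub-network with residual blocks, judged adversarially by separate global and local discriminators. All layer widths, the residual-block design, the mask head, and the crop size are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of the nested-GAN layout described in the abstract.
# Layer sizes and heads are illustrative assumptions only.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Residual block meant to pass original-image features to deeper layers."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))
    def forward(self, x):
        return torch.relu(x + self.body(x))

class SubGenerator(nn.Module):
    """Sub-confrontation generator: predicts the missing-region mask and a coarse fill."""
    def __init__(self, ch=64):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            ResidualBlock(ch), ResidualBlock(ch))
        self.mask_head = nn.Conv2d(ch, 1, 3, padding=1)   # soft location of the hole
        self.fill_head = nn.Conv2d(ch, 3, 3, padding=1)   # coarse completion
    def forward(self, x):
        f = self.encode(x)
        return torch.sigmoid(self.mask_head(f)), torch.tanh(self.fill_head(f))

class PatchDiscriminator(nn.Module):
    """Discriminator body reused for the global image and the local (cropped) patch."""
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ch * 2, 1, 4, stride=2, padding=1))
    def forward(self, x):
        return self.net(x).mean(dim=(1, 2, 3))  # one realness score per image

# Parent confrontation: the global discriminator sees the full repaired face,
# the local discriminator sees only the region around the predicted hole.
g = SubGenerator()
d_global, d_local = PatchDiscriminator(), PatchDiscriminator()
x = torch.randn(2, 3, 128, 128)             # corrupted input faces
mask, coarse = g(x)
repaired = mask * coarse + (1 - mask) * x   # keep undamaged pixels untouched
print(d_global(repaired).shape, d_local(repaired[:, :, 32:96, 32:96]).shape)
```

In this sketch the predicted mask blends the coarse fill with the input so that only the damaged region is replaced, while the two discriminators supply the global-coherence and local-detail signals mentioned above.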
Introduction
Face inpainting is the challenging task of recovering facial details from high-level image semantics. It can be applied in many face recognition scenarios, such as faces occluded by sunglasses, by a microphone during a performance, or by a mask. The purpose of inpainting is to repair the damaged part of an image using the known image information. The most important goal of this task is to avoid introducing noise into non-repaired areas while generating reliable content in the repaired areas. Based on this technique, noise, holes and scratches can be removed. Because of the strong correlation between pixels within an image, lost information can be restored as far as possible from the undamaged or unoccluded areas of the image and their pattern priors. During the inpainting process, the content of the whole image is considered, including low-level texture information and high-level semantic information.

Traditional inpainting methods rely on low-level cues to find the best-matching patches from uncorrupted regions of the same image [1]–[3]. These methods work well for background completion and repetitive texture patterns. However, low-level features are of limited use for face inpainting, because a face consists of many unique components and the inpainting process needs to be carried out at a high semantic level [4]–[6]. Traditional methods based on finding patches with similar appearance therefore do not always perform well; a sketch of this patch-copying idea is given below.
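The following toy NumPy sketch illustrates the low-level patch-matching idea behind these traditional methods: each missing pixel is filled from the undamaged patch that best matches its known surroundings. The patch size, distance metric, and brute-force search are illustrative choices, not any specific published algorithm.

```python
# Toy patch-copying inpainting: fill missing pixels from the best-matching
# undamaged patch of the same image. Illustrative only.
import numpy as np

def patch_fill(img, mask, patch=7):
    """img: HxW grayscale float array; mask: HxW bool, True where pixels are missing."""
    out = img.copy()
    h, w = img.shape
    r = patch // 2
    # Candidate source patches come only from fully undamaged regions of the same image.
    sources = [(y, x) for y in range(r, h - r) for x in range(r, w - r)
               if not mask[y - r:y + r + 1, x - r:x + r + 1].any()]
    for y, x in zip(*np.where(mask)):
        if y < r or y >= h - r or x < r or x >= w - r or not sources:
            continue  # skip image borders in this toy version
        known = ~mask[y - r:y + r + 1, x - r:x + r + 1]
        if not known.any():
            continue  # nothing known around this pixel yet
        target = out[y - r:y + r + 1, x - r:x + r + 1]
        # Pick the source patch whose known pixels look most like the target's.
        costs = [((img[sy - r:sy + r + 1, sx - r:sx + r + 1][known]
                   - target[known]) ** 2).mean()
                 for sy, sx in sources]
        sy, sx = sources[int(np.argmin(costs))]
        out[y, x] = img[sy, sx]  # copy the matching patch's centre pixel
    return out
```

Such texture copying can reproduce repetitive backgrounds, but it cannot invent a semantically plausible eye or mouth that does not already appear in the image, which motivates the learning-based approach developed in this paper.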