Abstract
Unsupervised near-duplicate detection has many practical applications ranging from social media analysis and web-scale retrieval, to digital image forensics. It entails running a threshold-limited query on a set of descriptors extracted from the images, with the goal of identifying all possible near-duplicates, while limiting the false positives due to visually similar images. Since the rate of false alarms grows with the dataset size, a very high specificity is required, up to 1 − 10⁻⁹ for realistic use cases; this important requirement, however, is often overlooked in the literature. In recent years, descriptors based on deep convolutional neural networks have matched or surpassed traditional feature extraction methods in content-based image retrieval tasks. To the best of our knowledge, ours is the first attempt to establish the performance range of deep learning-based descriptors for unsupervised near-duplicate detection on a range of datasets, encompassing a broad spectrum of near-duplicate definitions. We leverage both established and new benchmarks, such as the Mir-Flickr Near-Duplicate (MFND) dataset, in which a known ground truth is provided for all possible pairs over a general, large-scale image collection. To compare the specificity of different descriptors, we reduce the problem of unsupervised detection to that of binary classification of near-duplicate vs. not-near-duplicate images. The latter can be conveniently characterized using the Receiver Operating Characteristic (ROC) curve. Our findings in general favor the choice of fine-tuning deep convolutional networks, as opposed to using off-the-shelf features, but differences at high specificity settings depend on the dataset and are often small. The best performance was observed on the MFND benchmark, achieving 96% sensitivity at a false positive rate of 1.43 × 10⁻⁶.
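As an illustration of this formulation, the following Python sketch (not taken from the paper; the descriptors, candidate pairs, labels and operating point are hypothetical placeholders) thresholds pairwise descriptor distances and reads the sensitivity off the ROC curve at a fixed, very low false positive rate:

# Minimal sketch of near-duplicate detection as binary classification of image pairs.
# Descriptors, pairs and labels below are random placeholders, not real data.
import numpy as np
from sklearn.metrics import roc_curve

def pairwise_scores(descriptors, pairs):
    # Higher score = more likely near-duplicate (negative Euclidean distance).
    a, b = descriptors[pairs[:, 0]], descriptors[pairs[:, 1]]
    return -np.linalg.norm(a - b, axis=1)

rng = np.random.default_rng(0)
descriptors = rng.normal(size=(1000, 128))        # one 128-d descriptor per image
pairs = rng.integers(0, 1000, size=(5000, 2))     # candidate image pairs
labels = rng.integers(0, 2, size=5000)            # 1 = near-duplicate, 0 = distinct

fpr, tpr, thr = roc_curve(labels, pairwise_scores(descriptors, pairs))

target_fpr = 1e-6                                 # illustrative high-specificity operating point
idx = max(np.searchsorted(fpr, target_fpr, side="right") - 1, 0)
print(f"sensitivity at FPR <= {target_fpr:g}: {tpr[idx]:.3f}")

In practice, the chosen threshold corresponds to one point on this curve; the paper's comparisons focus on the high-specificity (low false positive rate) region of the ROC.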
Introduction
Near-duplicate (ND) image detection or discovery entails finding altered or alternative versions of the same image or scene in a large-scale collection. This technique has many practical applications, ranging from social media analysis and web-scale retrieval, to digital image forensics. Our work was motivated in particular by applications in the latter domain, as detecting the re-use of photographic material is a key component of several passive image forensics techniques. Examples include detection of copyright infringements (Chiu, Li, & Hsieh, 2012; Ke, Sukthankar, & Huston, 2004; Zhou, Wang, Wu, Yang, & Sun, 2017), digital forgery attacks such as cut-and-paste, copy-move and splicing (Chennamma, Rangarajan, & Rao, 2009; Hirano, Garcia, Sukthankar, & Hoogs, 2006), analysis of media devices seized during criminal investigations (Battiato, Farinella, Puglisi, & Ravì, 2014; Connor & Cardillo, 2016), tracing the online origin of sequestered content (Amerini, Uricchio, & Caldelli, 2017; de Oliveira et al., 2016), and fraud detection (Cicconet, Elliott, Richmond, Wainstock, & Walsh, 2018; Li, Shen, & Dong, 2018). In all the above-mentioned applications, we cannot resort to standard hashing techniques, given that even minimal alterations would make different copies untraceable. Similarly, it is not possible to rely on associated text, tags or taxonomies for retrieval, as done for instance in Gonçalves, Guilherme, and Pedronette (2018), since they would likely change across the different sites or devices where the content is used. Images may also be subject to digital forgery, with parts of one or more existing images combined to create fake ones.
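To make the point about hashing concrete, the short sketch below (purely illustrative; the synthetic image, the single-bit alteration and the histogram descriptor are assumptions, not the descriptors studied in this paper) shows that flipping one bit of one pixel produces an entirely different cryptographic digest, while a simple global descriptor is left essentially unchanged:

# Illustrative only: exact hashes break under minimal alterations,
# whereas descriptor distances stay small for near-duplicates.
import hashlib
import numpy as np

rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)  # stand-in grayscale image

altered = original.copy()
altered[0, 0] ^= 1                                                # flip one bit of one pixel

print(hashlib.sha256(original.tobytes()).hexdigest()[:16])        # digest of the original image
print(hashlib.sha256(altered.tobytes()).hexdigest()[:16])         # completely different digest

def histogram_descriptor(img):
    # Normalized grayscale histogram as a crude global descriptor.
    hist, _ = np.histogram(img, bins=64, range=(0, 256))
    return hist / hist.sum()

dist = np.linalg.norm(histogram_descriptor(original) - histogram_descriptor(altered))
print(f"descriptor distance: {dist:.6f}")                         # essentially zero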