Near-duplicate image detection benchmark

Persian article title: معیار تشخیص تصویر تقریباً تکراری بدون نظارت (Unsupervised near-duplicate image detection benchmark)
English article title: Benchmarking unsupervised near-duplicate image detection
Journal/Conference: Expert Systems with Applications
Related disciplines: Computer Engineering, Information Technology Engineering
Related specializations: Artificial Intelligence, Computer Networks
Persian keywords: near-duplicate detection, convolutional neural networks, instance-level retrieval, unsupervised detection, performance analysis, image forensics
English keywords: Near-duplicate detection, Convolutional neural networks, Instance-level retrieval, Unsupervised detection, Performance analysis, Image forensics
Article type: Research Article
Digital Object Identifier (DOI): https://doi.org/10.1016/j.eswa.2019.05.002
Affiliation: Department of Control and Computer Engineering, Politecnico di Torino, Torino, Italy
Length of the English article: 14 pages
Publisher: Elsevier
Presentation type: Journal
Indexing: ISI
Publication year: 2019
Impact Factor: 5.891 (2018)
H-index: 162 (2019)
SJR: 1.190 (2018)
ISSN: 0957-4174
Quartile: Q1 (2018)
English article format: PDF
Translation status: Not translated
Price of the English article: Free
Is this a base article: No
Does the article include a conceptual model: No
Does the article include a questionnaire: No
Does the article include variables: No
Product code: E13573
References: In-text citations and a reference list at the end of the article
Table of contents (English)

Abstract

Graphical abstract

1. Introduction

2. Related work

3. Datasets

4. Performance evaluation

5. Experimental setup

6. Results

7. Discussion

8. Conclusions

CRediT authorship contribution statement

Acknowledgments

Appendix A. Hard negative mining provides an upper bound for the AUC

Supplementary material

Appendix B. Supplementary materials

Research Data

References

Excerpt from the article (English)

Abstract

Unsupervised near-duplicate detection has many practical applications ranging from social media analysis and web-scale retrieval, to digital image forensics. It entails running a threshold-limited query on a set of descriptors extracted from the images, with the goal of identifying all possible near-duplicates, while limiting the false positives due to visually similar images. Since the rate of false alarms grows with the dataset size, a very high specificity is thus required, up to 1 − 10⁻⁹ for realistic use cases; this important requirement, however, is often overlooked in the literature. In recent years, descriptors based on deep convolutional neural networks have matched or surpassed traditional feature extraction methods in content-based image retrieval tasks. To the best of our knowledge, ours is the first attempt to establish the performance range of deep learning-based descriptors for unsupervised near-duplicate detection on a range of datasets, encompassing a broad spectrum of near-duplicate definitions. We leverage both established and new benchmarks, such as the Mir-Flickr Near-Duplicate (MFND) dataset, in which a known ground truth is provided for all possible pairs over a general, large-scale image collection. To compare the specificity of different descriptors, we reduce the problem of unsupervised detection to that of binary classification of near-duplicate vs. not-near-duplicate images. The latter can be conveniently characterized using the Receiver Operating Characteristic (ROC) curve. Our findings in general favor the choice of fine-tuning deep convolutional networks, as opposed to using off-the-shelf features, but differences at high specificity settings depend on the dataset and are often small. The best performance was observed on the MFND benchmark, achieving 96% sensitivity at a false positive rate of 1.43 × 10⁻⁶.
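
As a rough illustration of the reduction described in the abstract (this is not the authors' code), the sketch below scores every image pair by the cosine similarity of its descriptors, labels each pair against a ground-truth set of near-duplicates, and reads sensitivity off the ROC curve at a fixed, very low false positive rate. It assumes L2-normalized feature vectors (e.g., from an off-the-shelf or fine-tuned CNN) and uses scikit-learn's ROC utilities; the names descriptors, nd_pairs and evaluate_descriptor are illustrative, not taken from the paper.

import numpy as np
from itertools import combinations
from sklearn.metrics import roc_curve, auc

def evaluate_descriptor(descriptors, nd_pairs, target_fpr=1e-6):
    """Treat near-duplicate detection as binary classification of image pairs.

    descriptors: (N, D) array of L2-normalized feature vectors.
    nd_pairs: set of (i, j) index tuples labeled as near-duplicates.
    target_fpr: false positive rate at which sensitivity is reported.
    """
    n = descriptors.shape[0]
    scores, labels = [], []
    for i, j in combinations(range(n), 2):
        # Cosine similarity of L2-normalized vectors is their dot product;
        # it serves as the detection score for the pair (i, j).
        scores.append(float(descriptors[i] @ descriptors[j]))
        labels.append(1 if (i, j) in nd_pairs or (j, i) in nd_pairs else 0)

    fpr, tpr, thresholds = roc_curve(labels, scores)
    roc_auc = auc(fpr, tpr)

    # Sensitivity (TPR) at a very low false positive rate: the regime that
    # matters because the number of candidate pairs grows quadratically
    # with the size of the collection.
    idx = max(int(np.searchsorted(fpr, target_fpr, side="right")) - 1, 0)
    return roc_auc, tpr[idx], thresholds[idx]

# Toy usage with random, hypothetical data:
rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 128)).astype(np.float32)
feats /= np.linalg.norm(feats, axis=1, keepdims=True)
print(evaluate_descriptor(feats, {(0, 1), (2, 3)}))

Note that exhaustively scoring all pairs is quadratic in the collection size; in practice, as the abstract indicates, detection is run as a threshold-limited query over the descriptors (for instance via approximate nearest-neighbor search), and the pairwise formulation above is only a convenient way to characterize a descriptor's ROC behavior on a labeled benchmark.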

Introduction

Near-duplicate (ND) image detection or discovery entails finding altered or alternative versions of the same image or scene in a large scale collection. This technique has plenty of practical applications, ranging from social media analysis and web-scale retrieval, to digital image forensics. Our work was motivated in particular by applications in the latter domain, as detecting the re-use of photographic material is a key component of several passive image forensics techniques. Examples include detection of copyright infringements (Chiu, Li, & Hsieh, 2012; Ke, Sukthankar, & Huston, 2004; Zhou, Wang, Wu, Yang, & Sun, 2017), digital forgery attacks such as cut-and-paste, copy-move and splicing (Chennamma, Rangarajan, & Rao, 2009; Hirano, Garcia, Sukthankar, & Hoogs, 2006), analysis of media devices seized during criminal investigations (Battiato, Farinella, Puglisi, & Ravì, 2014; Connor & Cardillo, 2016), tracing the online origin of sequestered content (Amerini, Uricchio, & Caldelli, 2017; de Oliveira et al., 2016), and fraud detection (Cicconet, Elliott, Richmond, Wainstock, & Walsh, 2018; Li, Shen, & Dong, 2018). In all the above-mentioned applications, we cannot resort to standard hashing techniques, given that even minimal alterations would make different copies untraceable. Similarly, it is not possible to rely on associated text, tags or taxonomies for retrieval, as done for instance in Gonçalves, Guilherme, and Pedronette (2018), since they would likely change in different sites or devices where content is used. Images may be subject to digital forgery, with parts of one or more existing images combined to create fake ones.