Feature representation via unsupervised multi-structure filter learning


Article title (Persian, translated): Deep eigen-filters for face recognition: Feature representation via unsupervised multi-structure filter learning
Article title (English): Deep eigen-filters for face recognition: Feature representation via unsupervised multi-structure filter learning
Journal/Conference: Pattern Recognition
Related fields of study: Computer science
Related specializations: Algorithms and computation engineering, artificial intelligence, computer systems architecture
Keywords (Persian, translated): Deep eigen-filters, convolution kernels, face recognition, convolutional neural networks, feature representation
Keywords (English): Deep eigen-filters, Convolution kernels, Face recognition, Convolutional neural networks, Feature representation
Article type: Research Article
Indexed in: Scopus - Master Journals List - JCR
DOI: https://doi.org/10.1016/j.patcog.2019.107176
Affiliation: Department of Electrical Engineering, City University of Hong Kong, Hong Kong, China
Publisher: Elsevier
Presentation type: Journal
Paper type: ISI
Publication year: 2020
Impact factor: 7.346 (2019)
H-index: 180 (2020)
SJR: 1.363 (2019)
ISSN: 0031-3203
Quartile: Q1 (2019)
English article format: PDF
Number of pages (English article): 38
Translation status: Not translated
Price of the English article: Free
Is this a base article: Yes
Does the article include a conceptual model: Yes
Does the article include a questionnaire: No
Does the article include variables: No
Product code: E14724
References: in-text citations and an end-of-article reference list

English table of contents

Abstract


1- Introduction


2- Related work


3- Deep eigen-filters and DEFNet for feature representation


4- Experiments and results


5- Analysis on the strategy of proposed deep eigen-filters approach


6- Conclusion


References

Sample of the article's English text

Abstract


Training deep convolutional neural networks (CNNs) often requires high computational cost and a large number of learnable parameters. One way to overcome this limitation is to compute predefined convolution kernels from the training data. In this paper, we propose a novel three-stage approach as an alternative for filter learning. It learns filters in multiple structures, including standard filters, channel-wise filters, and point-wise filters, inspired by variations of the convolution operations in CNNs. By analyzing the linear combination between the learned filters and the original convolution kernels in pre-trained CNNs, the reconstruction error is minimized to determine the most representative filters in the filter bank. These filters are used to build a network, followed by HOG-based feature extraction, for feature representation. The proposed approach shows competitive performance on color face recognition compared with other deep CNN-based methods. In addition, it offers a perspective on interpreting CNNs by introducing the concepts of advanced convolutional layers into unsupervised filter learning.
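The core idea of unsupervised eigen-filter learning can be illustrated with a minimal sketch: take the leading eigenvectors of the covariance of zero-mean image patches and reshape them into convolution filters. This is a generic illustration of the technique, not the paper's exact three-stage procedure; the function name and the random patch-sampling scheme are ours:

```python
import numpy as np

def learn_eigen_filters(images, patch_size=5, num_filters=8, seed=0):
    """Learn convolution filters as the leading eigenvectors (principal
    components) of zero-mean image patches -- a simple unsupervised
    stand-in for data-driven filter learning."""
    rng = np.random.default_rng(seed)
    patches = []
    for img in images:
        h, w = img.shape
        # sample a few random patches per image
        for _ in range(100):
            y = rng.integers(0, h - patch_size + 1)
            x = rng.integers(0, w - patch_size + 1)
            patches.append(img[y:y + patch_size, x:x + patch_size].ravel())
    X = np.asarray(patches, dtype=np.float64)
    X -= X.mean(axis=0)                       # remove the patch mean
    cov = X.T @ X / len(X)                    # patch covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
    top = eigvecs[:, ::-1][:, :num_filters]   # keep the leading eigenvectors
    return top.T.reshape(num_filters, patch_size, patch_size)

# usage on random data
imgs = [np.random.default_rng(i).random((32, 32)) for i in range(10)]
filters = learn_eigen_filters(imgs, patch_size=5, num_filters=8, seed=0)
print(filters.shape)  # (8, 5, 5)
```

Because `np.linalg.eigh` returns orthonormal eigenvectors, the resulting filter bank is orthonormal when each filter is flattened to a vector.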


Introduction


With the development of deep learning in recent years, deep neural networks, especially deep convolutional neural networks (CNNs), have achieved state-of-the-art performance in many image-based applications [1], e.g., image classification [2, 3], face recognition [4, 5], fine-grained image categorization [6, 7], and depth estimation [8, 9]. Compared with traditional visual recognition methods, CNNs have the advantage of learning both low-level and high-level feature representations automatically instead of relying on hand-crafted feature descriptors [10, 11]. Owing to these powerful features, CNNs have revolutionized the computer vision community and become one of the most popular tools in many visual recognition tasks [7, 12, 13]. Generally, CNNs are made up of three types of layers, i.e., convolutional layers, pooling layers, and fully-connected layers. Features are extracted by stacking many convolutional layers on top of each other, and backpropagation proceeds from the loss function back to the input in order to learn the weights and biases contained in the layers. However, how this mechanism works on images remains an open question that still needs to be explored. Moreover, learning powerful feature representations requires a large amount of labeled training data; otherwise, performance may deteriorate [14, 15], whereas such training data are often not readily available in practical applications. To address these problems, some researchers propose constructing convolutional layers in alternative ways that are independent of training data. In [16], ScatNet was proposed, using wavelet transforms to represent convolutional filters. These predefined wavelet transforms are cascaded with nonlinear and pooling operations to build a multilayer convolutional network, so no learning is needed to compute the image representation.
Different from ScatNet, researchers in [17] introduced a structured receptive field network that combines the flexible learning property of CNNs with fixed basis filters.
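The learning-free layer design described above can be sketched as follows: predefined filters, a modulus nonlinearity, and average pooling, with no backpropagation involved. This is a heavily simplified numpy illustration of the general idea only (actual ScatNet cascades wavelet transforms; the helper names here are ours, not from the paper):

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Plain 'valid' 2-D correlation -- no learnable parameters."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(img[y:y + kh, x:x + kw] * kernel)
    return out

def fixed_filter_layer(img, filters, pool=2):
    """One layer of a learning-free network: predefined filters,
    a modulus nonlinearity, then average pooling."""
    maps = []
    for f in filters:
        r = np.abs(conv2d_valid(img, f))      # modulus nonlinearity
        h, w = r.shape
        r = r[:h - h % pool, :w - w % pool]   # crop so pooling divides evenly
        r = r.reshape(h // pool, pool, w // pool, pool).mean(axis=(1, 3))
        maps.append(r)
    return np.stack(maps)

# usage: two simple hand-picked 3x3 filters on a random image
bank = [np.array([[1, 0, -1]] * 3, float) / 3, np.eye(3) / 3]
img = np.random.default_rng(1).random((16, 16))
out = fixed_filter_layer(img, bank)
print(out.shape)  # (2, 7, 7)
```

Stacking several such layers yields a multilayer convolutional representation without any training, which is the property the paper's predefined-kernel approach builds on.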
