A Supervised Ensemble Architecture of Convolutional Networks

Article title: Multipath-DenseNet: A Supervised ensemble architecture of densely connected convolutional networks
Journal/Conference: Information Sciences
Related fields of study: Computer Engineering
Related specializations: Computer Systems Architecture, Software Engineering, Artificial Intelligence
Keywords: Image classification, Neural network, Deep learning
Article type: Research Article
Indexed in: Scopus, Master Journals List, JCR
DOI: https://doi.org/10.1016/j.ins.2019.01.012
Affiliation: Department of Computer Science and Engineering, Korea University, Seoul, South Korea
Length of English article: 10 pages
Publisher: Elsevier
Publication type: Journal
Article category: ISI
Year of publication: 2019
Impact factor: 6.774 (2018)
H-index: 154 (2019)
SJR: 1.620 (2018)
ISSN: 0020-0255
Quartile: Q1 (2018)
English article format: PDF
Translation status: Not translated
Price of English article: Free
Base article (for a thesis): No
Conceptual model: None
Questionnaire: None
Variables: None
Product code: E11559
References: Includes in-text citations and a reference list at the end of the article
Table of contents (English)

Abstract

1- Introduction

2- Related work

3- Methodology

4- Experiments

5- Results and discussion

6- Conclusion

References

Excerpt from the article (English)

Abstract

Deep networks with skip-connections such as ResNets have achieved great results in recent years. DenseNet exploits the ResNet skip-connections by connecting each layer of a convolutional neural network to all preceding layers, and achieves state-of-the-art accuracy. It is well known that deeper networks are more efficient than shallow or wide networks. Despite their high performance, however, very deep networks suffer from vanishing gradients, diminishing forward flow, and slower training. In this paper, we propose to combine the benefits of network depth and width. We train supervised, independent shallow networks on the same input in a block-wise fashion, and we use a state-of-the-art DenseNet block to increase the number of paths for gradient flow. Our proposed architecture, which we call Multipath-DenseNet, has several advantages over other deep networks, including DenseNet: it is both deeper and wider, reduces training time, and uses fewer parameters. We evaluate the proposed architecture on four object recognition datasets: CIFAR-10, CIFAR-100, SVHN, and ImageNet. The evaluation results show that Multipath-DenseNet achieves a significant improvement in performance over DenseNet on these benchmark datasets.
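
To make the two ideas in the abstract concrete, here is a minimal PyTorch sketch (not the authors' code): a dense block in which every layer receives the concatenation of all preceding feature maps, plus a hypothetical multipath wrapper that runs several independent shallow dense blocks on the same input. The BN-ReLU-3x3-conv layer composition, the growth rate, and the concatenation-based fusion are illustrative assumptions, and the names DenseLayer, DenseBlock, and MultipathBlock are ours.

```python
# Minimal sketch of dense connectivity and a multipath wrapper (assumed
# configuration, not the paper's exact architecture).
import torch
import torch.nn as nn


class DenseLayer(nn.Module):
    """BN -> ReLU -> 3x3 conv, producing `growth_rate` new feature maps."""

    def __init__(self, in_channels: int, growth_rate: int):
        super().__init__()
        self.bn = nn.BatchNorm2d(in_channels)
        self.conv = nn.Conv2d(in_channels, growth_rate,
                              kernel_size=3, padding=1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.conv(torch.relu(self.bn(x)))


class DenseBlock(nn.Module):
    """Each layer sees the concatenation of the input and all earlier outputs."""

    def __init__(self, in_channels: int, growth_rate: int, num_layers: int):
        super().__init__()
        self.layers = nn.ModuleList(
            DenseLayer(in_channels + i * growth_rate, growth_rate)
            for i in range(num_layers)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)


class MultipathBlock(nn.Module):
    """Several independent shallow dense blocks applied to the same input;
    concatenating their outputs is an assumed fusion, for illustration."""

    def __init__(self, in_channels: int, growth_rate: int,
                 num_layers: int, num_paths: int):
        super().__init__()
        self.paths = nn.ModuleList(
            DenseBlock(in_channels, growth_rate, num_layers)
            for _ in range(num_paths)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.cat([path(x) for path in self.paths], dim=1)


# Example: two parallel 4-layer dense blocks on a CIFAR-sized input.
x = torch.randn(1, 16, 32, 32)
block = MultipathBlock(in_channels=16, growth_rate=12, num_layers=4, num_paths=2)
print(block(x).shape)  # torch.Size([1, 128, 32, 32]) = 2 * (16 + 4 * 12) channels
```

The point of the sketch is that widening happens by adding parallel shallow paths rather than deeper stacks, so each path stays short enough for gradients to flow; how the paths are fused and transitioned in the actual Multipath-DenseNet may differ.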

Introduction

Convolutional networks have been employed in research for many years. They have been successfully applied to image processing [1], natural language processing (NLP) [2], and recommender systems [3]. Research on convolutional neural networks has produced outstanding architectures such as LeNet [4], AlexNet [5], VGGNet [6], ResNet [7], and GoogLeNet [8]. Highway networks [9] and ResNet [7] are considered the pioneering networks that extract features from more than 100 layers. Training very deep neural networks is challenging because of the vanishing gradient problem; ResNet introduced skip-connections across layers to make very deep networks trainable. The stochastic depth algorithm of [10] showed that depth is not the only factor behind the success of residual networks (ResNets): the authors shorten the effective network depth by randomly dropping layers during training. Wide residual networks [11] follow a similar hypothesis, namely that depth is not the only important parameter; they also reduce network depth but increase the number of feature maps at each layer, which yields a wider neural network.
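
For readers unfamiliar with the skip-connections discussed above, below is a minimal PyTorch sketch of a basic residual block: it computes a residual transformation of the input and adds it to an identity shortcut, which gives gradients a direct path to earlier layers. The channel counts and the BN/ReLU ordering are illustrative assumptions; real ResNets also use projection shortcuts when the spatial size or width changes.

```python
# Minimal sketch of a ResNet-style residual block with an identity shortcut.
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual branch: conv -> BN -> ReLU -> conv -> BN.
        out = self.bn2(self.conv2(torch.relu(self.bn1(self.conv1(x)))))
        # Identity skip-connection: the input is added back unchanged.
        return torch.relu(out + x)


x = torch.randn(1, 32, 32, 32)
print(ResidualBlock(32)(x).shape)  # torch.Size([1, 32, 32, 32])
```

Because the shortcut is an identity mapping, the gradient of the loss reaches the block's input with an additive term of 1, which is the property that lets such networks grow past 100 layers without the signal vanishing entirely.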