Fuzzy c-means-based architecture reduction of a probabilistic neural network

Persian article title (translated): Fuzzy c-means-based architecture reduction of a probabilistic neural network
English article title: Fuzzy c-means-based architecture reduction of a probabilistic neural network
Journal/Conference: Neural Networks
Related fields of study: Computer Engineering, Information Technology
Related specializations: Computer Architecture, Artificial Intelligence, Computer Networks
Persian keywords (translated): probabilistic neural network, fuzzy c-means, architecture reduction, classification
English keywords: probabilistic neural network, fuzzy c-means, architecture reduction, classification
Article type: Research Article
DOI: https://doi.org/10.1016/j.neunet.2018.07.012
Affiliation: Faculty of Electrical and Computer Engineering, Rzeszow University of Technology, Poland
Pages (English article): 31
Publisher: Elsevier
Publication type: Journal
Article category: ISI
Publication year: 2018
Impact factor: 8.446 (2017)
H-index: 121 (2019)
SJR: 2.359 (2017)
ISSN: 0893-6080
Quartile: Q1 (2017)
English article format: PDF
Translation status: not translated
Price of English article: free
Suitable as a base (thesis-base) article: Yes
Product code: E10733
Table of contents (English)

Abstract

1- Introduction

2- Probabilistic neural network

3- Fuzzy c-means algorithm

4- Proposed algorithm

5- Input data sets

6- Parameter settings

7- Simulation experiments

8- Conclusions

References

Excerpt from the article (English)

Abstract

The efficiency of the probabilistic neural network (PNN) is very sensitive to the cardinality of the input data set under consideration. This sensitivity results from the design of the network's pattern layer, in which a neuron is activated for every input record. This makes the PNN architecture complex, especially for big-data classification tasks. In this paper, a new algorithm for the structure reduction of the PNN is put forward. The solution relies on performing fuzzy c-means clustering of the data and selecting the PNN's pattern neurons on the basis of the obtained centroids. To activate the pattern neurons, the algorithm then chooses the input vectors for which the highest values of the membership coefficients are determined. The proposed approach is applied to classification tasks on repository data sets. The PNN is trained by three different procedures: conjugate gradients, reinforcement learning and the plug-in method. Two types of kernel estimators are used to activate the neurons of the network. The 10-fold cross-validation errors of the original and the reduced PNNs are compared. The obtained results confirm the validity of the introduced algorithm.
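The selection rule described in the abstract — cluster the data with fuzzy c-means, then keep, for each centroid, the input vector with the highest membership coefficient — can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the per-class clustering, the fixed iteration count, and the fuzzifier `m = 2` are assumptions for the sake of a self-contained example.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, seed=0):
    """Basic fuzzy c-means: returns centroids V (c x d) and memberships U (n x c)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)               # each row sums to 1
    for _ in range(n_iter):
        W = U ** m                                  # fuzzified memberships
        V = (W.T @ X) / W.sum(axis=0)[:, None]      # weighted centroids
        d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))               # standard FCM update
        U = inv / inv.sum(axis=1, keepdims=True)
    return V, U

def reduce_pattern_set(X, y, c):
    """Per class (an assumption: PNN pattern neurons are grouped by class),
    cluster with FCM and keep the input vector with the highest membership
    coefficient for each centroid -- the selection rule from the abstract."""
    keep = []
    for label in np.unique(y):
        idx = np.where(y == label)[0]
        _, U = fuzzy_c_means(X[idx], c)
        keep.extend(idx[np.argmax(U, axis=0)])      # best-matching vector per cluster
    return np.unique(np.array(keep))
```

The reduced index set then replaces the full training set when building the pattern layer, shrinking it from n neurons to at most c neurons per class.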

Introduction

It is known that the complexity of the PNN architecture proposed by Specht (1990) is high. This complexity is an effect of using all of the input vectors to activate the neurons in the network's pattern layer. Therefore, considerable research attention has been paid to date to the structure optimization of the PNN. For example, Burrascano (1990) applies the learning vector quantization procedure to find representative patterns that can be used to build the neurons of a PNN. This procedure determines a number of reference vectors that approximate the probability density functions of the input classes. Chtioui et al. (1996) reduce the cardinality of the input data for a PNN by hierarchical clustering. The solution utilizes the technique of reciprocal neighbours, which merges the examples that are closest to each other. Zaknich (1997) introduces a quantization method for PNN structure simplification. The input space is split into a hypergrid of fixed size, and a representative cluster center is determined for each hypercube; the number of training vectors in each hypercube is thereby reduced to one. Chang et al. (2008) propose an expectation-maximization (EM) method as the training algorithm for a PNN. The idea relies on predefining a deterministic number of clusters for the input data set; a global k-means algorithm is used for this purpose. Kusy and Kluska (2013, 2017) apply k-means clustering and support vector machines to simplify the PNN's architecture: appropriately selected centroids and support vectors are chosen to construct the pattern neurons of the network.
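The cost the introduction refers to is easy to see in code: a Specht-style PNN stores one pattern neuron per training vector, so every prediction evaluates a kernel against the whole training set. The sketch below is a minimal illustration with a Gaussian kernel and a single shared smoothing parameter `sigma` (an assumption; the paper tunes the kernels by separate training procedures).

```python
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=0.5):
    """Minimal Specht-style PNN: one Gaussian pattern neuron per training
    vector; the summation layer averages activations per class and the
    decision layer picks the class with the largest score."""
    classes = np.unique(y_train)
    # pattern layer: kernel activation of every test point on every stored vector
    d2 = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    act = np.exp(-d2 / (2.0 * sigma ** 2))          # shape (n_test, n_train)
    # summation layer: class-conditional mean activation
    scores = np.stack([act[:, y_train == k].mean(axis=1) for k in classes], axis=1)
    return classes[np.argmax(scores, axis=1)]       # decision layer
```

Because `act` has one column per stored training vector, both memory and per-query time grow linearly with the training set size, which is exactly what the reduction methods surveyed above try to curb.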