Fixed-point generalized maximum correntropy

Article title: Fixed-point generalized maximum correntropy: Convergence analysis and convex combination algorithms
Journal/Conference: Signal Processing
Related academic disciplines: Computer Engineering
Related specializations: Algorithms and Computation, Software Engineering, Artificial Intelligence
Keywords: Convergence, Fixed-point algorithm, Convex combination, Adaptive filter, Non-Gaussian noise, Generalized maximum correntropy (GMC) criterion
Type of article: Research Article
DOI: https://doi.org/10.1016/j.sigpro.2018.06.012
Affiliation: School of Electronic Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
English article pages: 27
Publisher: Elsevier
Presentation type: Journal
Indexing: ISI
Publication year: 2019
Impact factor: 3.933 (2017)
H-index: 105 (2019)
SJR: 0.940 (2017)
ISSN: 0165-1684
Quartile: Q1 (2017)
English article format: PDF
Translation status: Not translated
English article price: Free
Is this a base article: Yes
Product code: E11145
Table of contents (English)

Abstract

1- Introduction

2- Fixed-point algorithm for GMC estimation and its convergence analysis

3- The sliding-window and recursive GMC algorithms

4- Adaptive convex combination of RGMC algorithms

5- Simulation results

6- Conclusions

References

Excerpt from the article (English)

Abstract

Compared with the MSE criterion, the generalized maximum correntropy (GMC) criterion shows better robustness against impulsive noise. Several gradient-based GMC adaptive algorithms have been derived and are available for practice, but the fixed-point algorithm for GMC has not yet been well studied in the literature. In this paper, we study a fixed-point GMC (FP-GMC) algorithm for linear regression and derive a sufficient condition that guarantees the convergence of the FP-GMC. We also apply sliding-window and recursive methods to the FP-GMC to derive online algorithms for practice, called the sliding-window GMC (SW-GMC) and recursive GMC (RGMC) algorithms, respectively. Since the solution of the RGMC is not analytically available, we derive some approximations that fundamentally result in the poor convergence rate of the RGMC in non-stationary situations. To overcome this issue, we propose a novel robust filtering algorithm, termed the adaptive convex combination of RGMC algorithms (AC-RGMC), which relies on the convex combination of two RGMC algorithms with different memories. Moreover, an efficient weight control method further improves the tracking performance of the AC-RGMC; this variant is called the AC-RGMC-C algorithm. The good performance of the proposed algorithms is demonstrated in plant identification scenarios with an abrupt change under impulsive noise environments.
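
As a rough illustration of the fixed-point idea for GMC estimation in linear regression, the sketch below follows the standard generalized-correntropy fixed-point form: residuals are reweighted by the generalized Gaussian kernel exp(-lam*|e|^alpha) and the coefficients are then re-solved as a weighted least-squares problem. The function name fp_gmc, the parameters alpha and lam, the initialization, and the stopping rule are illustrative assumptions, not the authors' exact FP-GMC algorithm or its convergence condition.

import numpy as np

def fp_gmc(X, d, alpha=2.0, lam=1.0, iters=50, eps=1e-8, tol=1e-10):
    # Fixed-point iteration sketch under the generalized maximum correntropy
    # (GMC) criterion for the linear model d ~ X @ w.  Each residual gets the
    # kernel weight exp(-lam*|e|^alpha)*|e|^(alpha-2); large (impulsive)
    # errors receive small weights, so the weighted re-solve is robust.
    w = np.linalg.lstsq(X, d, rcond=None)[0]        # ordinary least-squares start
    for _ in range(iters):
        e = d - X @ w
        q = np.exp(-lam * np.abs(e) ** alpha) * (np.abs(e) + eps) ** (alpha - 2.0)
        XQ = X.T * q                                # same as X.T @ diag(q)
        w_new = np.linalg.solve(XQ @ X, XQ @ d)
        if np.linalg.norm(w_new - w) < tol:         # stop when the fixed point is reached
            break
        w = w_new
    return w

# Toy usage: identify a 4-tap plant under sparse impulsive noise.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 4))
w_true = np.array([1.0, -0.5, 0.3, 2.0])
noise = 0.1 * rng.standard_normal(500)
noise[::50] += 20.0                                 # occasional large outliers
d = X @ w_true + noise
print(fp_gmc(X, d, alpha=2.0, lam=0.5))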

Introduction

Adaptive filtering algorithms (AFAs) have been successfully applied in various fields such as system identification, channel equalization, acoustic echo cancellation, active noise control, and so forth [1, 2, 3]. Most existing AFAs are based on Gaussian scenarios justified by the central limit theorem, such as the family of least mean-square (LMS) algorithms, the class of affine projection algorithms (APA), and the family of recursive least squares (RLS) algorithms [3, 4]. Among these algorithms, the LMS is the most widely used filtering algorithm because of its simplicity, the RLS can accelerate the convergence rate of the LMS in the presence of colored input signals, and the APA offers an intermediate complexity between the LMS and the RLS. Note that the performance of almost all the aforementioned algorithms may degrade dramatically under non-Gaussian distributions, such as light-tailed (e.g., binary, uniform, etc.) and fat-tailed (e.g., Laplace, Cauchy, mixed Gaussian, alpha-stable, etc.) distributions [5, 6, 7, 8, 9, 10]. Generally, different p-powers of the error signal, i.e., |e|^p, are used as cost functions to obtain robust algorithms [5, 11, 12, 13]. Specifically, when the desired signals are contaminated by non-Gaussian interference with a light-tailed distribution, a higher-order power of the error is usually more desirable to achieve a better tradeoff between the transient and steady-state performance; examples include the least mean absolute third (LMAT) algorithm with p = 3 [14] and the least mean fourth (LMF) algorithm with p = 4 [15].
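
To make the |e|^p cost family concrete, the following minimal sketch (illustrative only; the function name lmp_step and the step size are assumptions, not taken from the paper) shows the generic stochastic-gradient update shared by this family, which reduces to LMS for p = 2, LMAT for p = 3, and LMF for p = 4.

import numpy as np

def lmp_step(w, x, d, mu=0.01, p=2):
    # One stochastic-gradient step for the least mean p-power cost |e|^p:
    # p = 2 gives LMS, p = 3 gives LMAT, p = 4 gives LMF (the constant
    # factor p is often absorbed into the step size mu in the literature).
    e = d - np.dot(w, x)                            # a priori error
    w = w + mu * p * np.abs(e) ** (p - 1) * np.sign(e) * x
    return w, e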