Abstract
1- Introduction
2- Fixed-point algorithm for GMC estimation and its convergence analysis
3- The sliding-window and recursive GMC algorithms
4- Adaptive convex combination of RGMC algorithms
5- Simulation results
6- Conclusions
References
Abstract
Compared with the MSE criterion, the generalized maximum correntropy (GMC) criterion shows better robustness against impulsive noise. Several gradient-based GMC adaptive algorithms have been derived and are available for practice. However, the fixed-point algorithm for GMC has not yet been well studied in the literature. In this paper, we study a fixed-point GMC (FP-GMC) algorithm for linear regression, and derive a sufficient condition that guarantees the convergence of the FP-GMC. We also apply sliding-window and recursive methods to the FP-GMC to derive online algorithms for practice, termed the sliding-window GMC (SW-GMC) and recursive GMC (RGMC) algorithms, respectively. Since the solution of the RGMC is not analytically tractable, we adopt some approximations, which fundamentally lead to the poor convergence rate of the RGMC in non-stationary situations. To overcome this issue, we propose a novel robust filtering algorithm, termed the adaptive convex combination of RGMC algorithms (AC-RGMC), which relies on the convex combination of two RGMC algorithms with different memories. Moreover, an efficient weight-control method further improves the tracking performance of the AC-RGMC; this variant is called the AC-RGMC-C algorithm. The good performance of the proposed algorithms is demonstrated in plant identification scenarios with abrupt changes under impulsive noise environments.
Introduction
Adaptive filtering algorithms (AFAs) have been successfully applied in various fields such as system identification, channel equalization, acoustic echo cancellation, active noise control, and so forth [1, 2, 3]. Most existing AFAs are based on Gaussian scenarios justified by the central limit theorem, such as the family of least mean-square (LMS) algorithms, the class of affine projection algorithms (APA), and the family of recursive least squares (RLS) algorithms [3, 4]. Among these algorithms, the LMS is the most widely used filtering algorithm because of its simplicity, the RLS can accelerate the convergence rate of the LMS in the presence of colored input signals, and the APA offers intermediate complexity between the LMS and the RLS. Note that the performance of almost all the aforementioned algorithms may degrade dramatically under non-Gaussian distributions, such as light-tailed (e.g., binary, uniform, etc.) and fat-tailed (e.g., Laplace, Cauchy, mixed Gaussian, alpha-stable, etc.) distributions [5, 6, 7, 8, 9, 10].

Generally, different p-powers of the error signal, i.e., |e|^p, are used as cost functions to obtain robust algorithms [5, 11, 12, 13]. Specifically, when the desired signals are contaminated by non-Gaussian interferences with light-tailed distributions, a higher-order power of the error is usually more desirable to achieve a better tradeoff between the transient and steady-state performance. Examples include the least mean absolute third (LMAT) algorithm with p = 3 [14] and the least mean fourth (LMF) algorithm with p = 4 [15].
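To make the |e|^p family concrete, the sketch below implements a generic p-power stochastic-gradient filter: minimizing E[|e|^p] gives the update w ← w + μ·p·|e|^(p−1)·sign(e)·u, which reduces to LMS for p = 2, LMAT for p = 3, and LMF for p = 4. This is an illustrative toy, not code from the paper; the plant coefficients, step size, and signal lengths are arbitrary assumptions.

```python
import numpy as np

def p_power_lms(x, d, p=4, mu=5e-4, n_taps=4):
    """Adaptive FIR filter minimizing E[|e|^p] by stochastic gradient.

    Update: w <- w + mu * p * |e|^(p-1) * sign(e) * u,
    i.e., LMS (p=2), LMAT (p=3), LMF (p=4).
    """
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]   # regressor, most recent sample first
        e = d[n] - w @ u                    # a priori error
        w += mu * p * np.abs(e) ** (p - 1) * np.sign(e) * u
    return w

# Toy plant-identification example (hypothetical plant h).
rng = np.random.default_rng(0)
h = np.array([0.6, -0.3, 0.1, 0.05])        # unknown plant
x = rng.standard_normal(5000)               # white Gaussian input
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w = p_power_lms(x, d, p=4)                  # LMF adaptation
```

With light-tailed (here Gaussian) noise, the LMF run above drives the weight vector toward the plant coefficients; with the same step size, p = 2 would converge more slowly at this noise level, which is the tradeoff the paragraph describes.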