Nowadays, huge volumes of data are produced in the form of fast streams, which are further affected by non-stationary phenomena. The resulting lack of stationarity in the distribution of the produced data calls for efficient and scalable algorithms for online analysis capable of adapting to such changes (concept drift). The online learning field has lately turned its focus to this challenging scenario, designing incremental learning algorithms that avoid becoming obsolete after a concept drift occurs. Despite the noted activity in the literature, the need for new efficient and scalable algorithms that adapt to drift still prevails as a research topic deserving further effort. Surprisingly, Spiking Neural Networks, one of the major exponents of the third generation of artificial neural networks, have not been thoroughly studied as an online learning approach, even though they are naturally suited to adapting easily and quickly to changing environments. This work addresses this research gap by adapting Spiking Neural Networks to meet the processing requirements that online learning scenarios impose. In particular, the work focuses on limiting the size of the neuron repository and making the most of this limited size by resorting to data reduction techniques. Experiments with synthetic and real data sets are discussed, leading to the empirically validated assertion that, by virtue of a tailored exploitation of the neuron repository, Spiking Neural Networks adapt better to drifts, obtaining higher accuracy scores than naive versions of Spiking Neural Networks for online learning environments.
With the increasing number of applications based on fast-arriving information flows and applied to real scenarios (Zhou, Chawla, Jin & Williams, 2014; Alippi, 2014), concept drift has become a paramount issue for online learning environments. The distribution of the data captured by sensor networks, mobile phones, intelligent user interfaces, industrial machinery and the like is usually assumed to be stationary over time. However, in many real cases such an assumption does not hold, as the data source itself is subject to dynamic externalities that affect the stationarity of its produced data stream(s), e.g. seasonality, periodicity or sensor errors, among many others. As a result, the patterns behind the produced data may change over time, either in the feature domain (new features are captured, part of the existing predictors disappear, or their value range evolves), or in the class domain (new classes emerge from the data streams, or some of the existing ones fade over time). This paradigm is what the literature has coined as concept drift, where the term concept refers to a stationary distribution relating a group of features to a set of classes. When the goal is to infer the aforementioned class patterns from data (online classification), incremental models trained over drifting streams become obsolete when transitioning from one concept to another. Consequently, they do not adapt appropriately to the newly emerged data distribution, unless they are modified to handle this unwanted effect efficiently.
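To make the notion of concept drift concrete, the following is a minimal sketch of a synthetic stream with a single abrupt drift: the class boundary flips at a given instant, so a model fitted to the first concept misclassifies systematically afterwards. The generator, its name and its parameters are illustrative choices of ours, not taken from this work.

```python
import numpy as np

def drifting_stream(n_samples=2000, drift_point=1000, seed=0):
    """Generate a binary-classification stream whose concept changes
    abruptly at `drift_point`: the linear decision boundary flips sign.
    This is an illustrative toy generator, not a construct of the paper."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-1.0, 1.0, size=(n_samples, 2))
    y = np.empty(n_samples, dtype=int)
    # Concept A: class 1 iff x0 + x1 > 0.
    y[:drift_point] = (X[:drift_point, 0] + X[:drift_point, 1] > 0).astype(int)
    # Concept B: the boundary flips, so the same region now maps to class 0.
    y[drift_point:] = (X[drift_point:, 0] + X[drift_point:, 1] < 0).astype(int)
    return X, y

X, y = drifting_stream()
```

A classifier trained incrementally on such a stream keeps its accuracy high during the first concept and degrades sharply after the drift point unless some adaptation mechanism intervenes.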
In order to minimize the impact of concept drift on the performance of predictive models, recent studies (Ditzler, Roveri, Alippi & Polikar, 2015; Webb, Hyde, Cao, Nguyen & Petitjean, 2016; Khamassi, Sayed-Mouchaweh, Hammami & Ghedira, 2018) have focused on the development of efficient techniques for continuous adaptation in evolving environments or, alternatively, on the incorporation of drift detectors and concept-forgetting mechanisms (Žliobaitė, Pechenizkiy & Gama, 2016).
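As an illustration of the drift-detector family mentioned above, the following is a minimal sketch in the spirit of the classical DDM detector (Gama et al., 2004): it monitors the stream of a model's 0/1 prediction errors and signals drift when the error rate rises significantly above its historical minimum. The class name, threshold values and reset policy are our illustrative assumptions, not part of this work.

```python
import math

class SimpleDDM:
    """Minimal DDM-style drift detector sketch (after Gama et al., 2004).
    Consumes a stream of 0/1 prediction errors and flags a drift when the
    error rate exceeds its historical minimum by a significance margin."""

    def __init__(self, warn_level=2.0, drift_level=3.0, min_samples=30):
        self.warn_level = warn_level
        self.drift_level = drift_level
        self.min_samples = min_samples
        self.reset()

    def reset(self):
        self.n = 0
        self.p = 0.0                  # running error rate
        self.s = 0.0                  # its standard deviation
        self.p_min = float("inf")
        self.s_min = float("inf")

    def update(self, error):
        """error: 1 if the model misclassified the sample, else 0.
        Returns 'drift', 'warning' or 'stable'."""
        self.n += 1
        # incremental estimate of the Bernoulli error rate and its deviation
        self.p += (error - self.p) / self.n
        self.s = math.sqrt(self.p * (1.0 - self.p) / self.n)
        if self.n < self.min_samples:
            return "stable"
        if self.p + self.s < self.p_min + self.s_min:
            self.p_min, self.s_min = self.p, self.s
        if self.p + self.s > self.p_min + self.drift_level * self.s_min:
            self.reset()              # forget the old concept's statistics
            return "drift"
        if self.p + self.s > self.p_min + self.warn_level * self.s_min:
            return "warning"
        return "stable"
```

The warning state can be used to start buffering recent samples, so that when the drift state fires the model is retrained only on post-drift data, one common realization of the concept-forgetting mechanisms cited above.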