Abstract
Spam continues to inflict increasing damage. Various approaches, including Support Vector Machine (SVM) based techniques, have been proposed for spam classification. However, SVM training is a computationally intensive process. This paper presents a parallel SVM algorithm for scalable spam filtering. By distributing, processing and optimizing subsets of the training data across multiple participating nodes, the distributed SVM reduces training time significantly. Ontology-based concepts are also employed to minimize the accuracy degradation incurred when distributing the training data amongst the SVM classifiers.
INTRODUCTION
Support Vector Machine (SVM) based approaches have steadily gained popularity for text classification and machine learning [1], [2]. Classification in SVM based approaches is founded on the notion of hyperplanes [3], which act as class separators in binary classification, such as spam versus ham in the context of spam filtering. SVM training, however, is a computationally intensive process. Numerous SVM formulations, solvers and architectures for improving SVM performance have been explored and proposed [4], [5], including distributed and parallel computing techniques.

SVM decomposition is another widespread technique for improving SVM training performance [6], [7]. Decomposition approaches identify a small number of optimization variables and solve a sequence of fixed-size subproblems. Another common and effective practice is to split the training data into smaller fragments and use a number of SVMs to process the individual data chunks, which in turn reduces the overall training time. Various forms of summarization and aggregation are then performed to produce the final set of global support vectors [8].

Decomposition methods based on a data-splitting strategy can, however, suffer from convergence and accuracy issues. Challenges such as chunk aliasing and outlier accumulation tend to intensify in a distributed SVM context, and splitting the training data set commonly amplifies problems related to data imbalance and unstable data distribution.
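The data-splitting strategy described above can be sketched in a few lines: partition the training data, train an SVM per chunk, pool the resulting local support vectors, and retrain a global SVM on the pooled set. The following is a minimal single-machine simulation of that idea, not the distributed implementation presented in this paper; the chunk count, kernel choice and use of scikit-learn's `SVC` are illustrative assumptions.

```python
# Illustrative sketch of chunked SVM training with support-vector
# aggregation (assumptions: 4 chunks, linear kernel, scikit-learn).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

def train_chunked_svm(X, y, n_chunks=4, seed=0):
    """Train one SVC per data chunk, pool the local support
    vectors, and fit a final SVC on the pooled set."""
    rng = np.random.default_rng(seed)
    chunks = np.array_split(rng.permutation(len(X)), n_chunks)
    sv_X, sv_y = [], []
    for chunk in chunks:
        clf = SVC(kernel="linear").fit(X[chunk], y[chunk])
        sv_X.append(clf.support_vectors_)     # local support vectors
        sv_y.append(y[chunk][clf.support_])   # their labels
    # Aggregate local support vectors into a reduced training set
    # and train the global model on it.
    X_sv, y_sv = np.vstack(sv_X), np.concatenate(sv_y)
    return SVC(kernel="linear").fit(X_sv, y_sv)

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
model = train_chunked_svm(X, y)
print(model.score(X, y))  # training-set accuracy of the aggregated model
```

Because only the support vectors of each chunk reach the final training step, the global problem is much smaller than the original one; this is also where the accuracy degradation discussed later can creep in, since non-support-vector points discarded by a chunk may have been support vectors of the full problem.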