Abstract
Owing to its scalability and high fault tolerance, even in a distributed environment built from personal computers, MapReduce has been introduced to parallelise the training of Back Propagation Neural Networks (BPNNs) on high-dimensional big datasets. Based on the evolution of local BPNNs produced by distributed Map tasks over different data splits, this paper proposes a novel approach to the distributed data-parallel training of BPNNs in MapReduce. The approach provides a principled way to obtain globally convergent BPNN candidates from local BPNNs that converge only on their specific data splits. It not only reduces the number of iterations needed to reach the globally convergent BPNN, but also shows clear advantages in preventing the training from becoming trapped in a local optimum on high-dimensional big datasets. To further improve training efficiency, local BPNNs from the same computing node are merged by averaging their weight matrices before they act as individuals of the population for the global evolution. Our approach also leverages Random Projection based sampling techniques to evaluate the fitness of each individual, lowering the computation cost of the evolution stage. Experiments show that the proposed approach greatly improves training efficiency compared with stand-alone or traditional MapReduce BPNN training, and improves model accuracy on larger datasets. A comparison with 23 other popular classification approaches also shows that the proposed approach has a clear advantage in accuracy.
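To make the node-level merging step concrete, the following minimal sketch (in Python with NumPy; the function name and the data layout are illustrative assumptions, not the paper's implementation) averages the layer-wise weight matrices of the local BPNNs produced on one computing node, yielding a single merged network that would join the population for the global evolution.

```python
import numpy as np

def merge_local_bpnns(local_weight_sets):
    """Merge the local BPNNs trained on one computing node by averaging
    their weight matrices layer by layer (hypothetical helper).

    local_weight_sets: list of [W1, W2, ...] lists, one per local BPNN,
    where Wi is the weight matrix between layer i and layer i+1.
    """
    n_layers = len(local_weight_sets[0])
    merged = []
    for layer in range(n_layers):
        # Stack the corresponding layer matrices from every local BPNN
        # and take their element-wise mean.
        layer_weights = [weights[layer] for weights in local_weight_sets]
        merged.append(np.mean(layer_weights, axis=0))
    return merged  # acts as one individual in the global evolution population
```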
Introduction
An Artificial Neural Network (ANN) is a computational model that essentially mimics the knowledge acquisition and organisational skills of the human brain; it consists of a number of interconnected processing elements called neurons [1]. The neurons of an ANN are usually arranged logically into two or more layers and interact with each other via weighted connections. These scalar weights determine the nature and strength of the influence between the interconnected neurons. Each neuron can be connected to all the neurons in the next layer. There is an input layer where data are presented to the neural network, and an output layer that holds the response of the network to the input. It is the intermediate layers, also known as hidden layers, that enable these networks to represent and compute complicated associations between patterns. Neural networks essentially learn through the adaptation of their connection weights according to the input data [2]. The Back Propagation Neural Network (BPNN), one of the most popular ANNs, employs the back-propagation algorithm for its connection weight adaptation and can approximate any continuous nonlinear function to arbitrary precision given a sufficient number of neurons [3]. This weight-adaptation process is called the training of a neural network, and the input data containing the potential patterns are called training samples. In the past decades, ANNs have been widely used to model uncertain nonlinear functions [4], [5], and have shown great advantages in pattern recognition, classification and the modelling of nonlinear relationships involving a multitude of variables [6].
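As a brief illustration of the weight adaptation described above, the sketch below (Python/NumPy; the single hidden layer, sigmoid activations, squared-error loss and learning rate are our own assumptions for exposition, not details taken from the paper) performs one back-propagation update of the connection weights.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_step(x, y, W1, W2, lr=0.1):
    """One back-propagation weight update for a single-hidden-layer BPNN.

    x: input vector, y: target output vector,
    W1: input-to-hidden weights, W2: hidden-to-output weights.
    """
    # Forward pass: input -> hidden -> output, with sigmoid activations.
    h = sigmoid(W1 @ x)   # hidden-layer activations
    o = sigmoid(W2 @ h)   # network output
    # Backward pass: propagate the output error back through the layers
    # (error terms for a squared-error loss with sigmoid units).
    delta_o = (o - y) * o * (1 - o)
    delta_h = (W2.T @ delta_o) * h * (1 - h)
    # Gradient-descent adaptation of the connection weights.
    W2 -= lr * np.outer(delta_o, h)
    W1 -= lr * np.outer(delta_h, x)
    return W1, W2
```

Repeating this update over all training samples until the output error stops decreasing is, in essence, the traditional (stand-alone) BPNN training that the later sections parallelise.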