Abstract
Spectral clustering is widely used in data mining, machine learning, and other fields. It can identify clusters of arbitrary shape in a sample space and converge to the globally optimal solution. Compared with the traditional k-means algorithm, spectral clustering adapts better to the data and produces better clustering results. However, the computation of the algorithm is quite expensive. In this paper, an efficient parallel spectral clustering algorithm on multi-core processors, implemented in the Julia language, is proposed; we refer to it as juPSC. Julia is a high-performance, open-source programming language. The juPSC is composed of three procedures: (1) calculating the affinity matrix, (2) calculating the eigenvectors, and (3) conducting k-means clustering. Procedures (1) and (3) are computed by efficient parallel algorithms, and the COO format is used to compress the affinity matrix. Two groups of experiments are conducted to verify the accuracy and efficiency of the juPSC. Experimental results indicate that (1) the juPSC achieves speedups of approximately 14×~18× on a 24-core CPU and that (2) the serial version of the juPSC is faster than the Python version in scikit-learn. Moreover, the structure and functions of the juPSC are designed with modularity in mind, which makes it convenient to combine with, and further optimize for, other parallel computing platforms.
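The three procedures above can be sketched as follows. This is a minimal serial illustration in Python, not the authors' parallel Julia implementation; it assumes a Gaussian affinity, the normalized Laplacian, and a simple farthest-first k-means initialization, none of which are specified in the abstract.

```python
import numpy as np

def spectral_clustering(X, k, sigma=1.0):
    # (1) affinity matrix via a Gaussian kernel
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    A = np.exp(-sq / (2 * sigma ** 2))
    np.fill_diagonal(A, 0.0)

    # normalized Laplacian L = I - D^{-1/2} A D^{-1/2}
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
    L = np.eye(len(X)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

    # (2) first k eigenvectors (smallest eigenvalues), row-normalized
    _, vecs = np.linalg.eigh(L)
    U = vecs[:, :k]
    U = U / np.linalg.norm(U, axis=1, keepdims=True)

    # (3) k-means on the spectral embedding, farthest-first initialization
    centers = U[[0]]
    for _ in range(k - 1):
        d2 = ((U[:, None] - centers[None]) ** 2).sum(-1).min(axis=1)
        centers = np.vstack([centers, U[np.argmax(d2)]])
    for _ in range(50):  # Lloyd iterations
        labels = ((U[:, None] - centers[None]) ** 2).sum(-1).argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = U[labels == j].mean(axis=0)
    return labels
```

In juPSC, steps (1) and (3) are the parallelized procedures, and step (2) dominates the runtime for large inputs, which is the cost the rest of the paper addresses.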
Introduction
In recent years, machine learning has made great progress and has become the preferred approach for developing practical software in areas such as computer vision, speech recognition, and natural language processing [37,45,50,55]. Machine learning mainly comprises supervised learning and unsupervised learning, and clustering is a central task in unsupervised learning. Among the many clustering algorithms, spectral clustering has become one of the most popular [32,53]. Spectral clustering is a technique originating from graph theory [17,29]: it represents samples as nodes of a graph, identifies clusters through the edges connecting them, and thereby allows non-graph data to be clustered. Unsupervised clustering analysis can explore the internal group structure of data and has been widely used in various data analysis settings, including computer vision, statistical analysis, image processing, medical information processing, biological science, social science, and psychology [19,44,51]. The basic principle of clustering analysis is to divide the data into different clusters such that members of the same cluster have similar characteristics while members of different clusters have different characteristics. The main types of clustering algorithms include partitioning methods, hierarchical clustering, fuzzy clustering, density-based clustering, and model-based clustering [38]. The most widely used clustering algorithms are k-means [61], DBSCAN [39], Ward hierarchical clustering [47], spectral clustering [53], and BIRCH [66]. Spectral clustering has been shown to be more effective than other traditional clustering algorithms [46,56]. However, spectral clustering requires constructing the affinity matrix between nodes, whose storage demands a large amount of memory, and computing the first k eigenvectors of the Laplacian matrix, which takes a long time.
Thus, the spectral clustering algorithm is difficult to apply to large-scale data processing.
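The memory pressure described above is what motivates the COO compression mentioned in the abstract: if the affinity matrix is sparsified, only the nonzero entries need to be kept, as three parallel arrays of row indices, column indices, and values. The following toy sketch assumes a t-nearest-neighbour sparsification; the function name and parameters are illustrative, and a scalable version would compute distances blockwise instead of forming the full dense distance matrix as done here.

```python
import numpy as np

def knn_affinity_coo(X, t=3, sigma=1.0):
    """Keep only each point's t nearest neighbours and store the resulting
    sparse affinity matrix in COO format (row, col, value triplets), so
    n*t entries are kept instead of the n*n dense matrix."""
    n = len(X)
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(sq, np.inf)  # exclude self-affinity
    rows, cols, vals = [], [], []
    for i in range(n):
        nbrs = np.argsort(sq[i])[:t]  # indices of the t nearest neighbours
        rows.extend([i] * t)
        cols.extend(nbrs.tolist())
        vals.extend(np.exp(-sq[i, nbrs] / (2 * sigma ** 2)).tolist())
    return np.array(rows), np.array(cols), np.array(vals)
```

For n points this stores 3·n·t numbers rather than n² affinities, which is the kind of saving that makes the affinity step feasible at larger scales.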