Abstract
This paper proposes a multi-swarm particle swarm optimization (MSPSO) algorithm that employs three novel strategies to balance exploration and exploitation. MSPSO is built on a multi-swarm framework that cooperates with a dynamic sub-swarm number strategy (DNS), a sub-swarm regrouping strategy (SRS), and a purposeful detecting strategy (PDS). First, the DNS divides the entire population into many sub-swarms in the early stage and periodically reduces the number of sub-swarms (i.e., increases the size of each sub-swarm) as the evolutionary process proceeds, which favors exploration early and exploitation later. Second, within each DNS period with a fixed number of sub-swarms, the SRS regroups these sub-swarms according to the stagnation information of the global best position, which diffuses and shares search information among different sub-swarms and thereby enhances the exploitation ability. Third, the PDS relies on historical information of the search process to detect whether the population has been trapped in a potential local optimum and helps the population escape from it, restoring the exploration ability. Comparisons between MSPSO and 13 peer algorithms on the CEC2013 test suite and 4 real-world applications suggest that MSPSO is a reliable and highly competitive optimization algorithm for solving different types of functions. Furthermore, extensive experimental results illustrate the effectiveness and efficiency of the three proposed strategies used in MSPSO.
Introduction
Particle swarm optimization (PSO) is a widely known evolutionary algorithm proposed by Kennedy and Eberhart in 1995 [1, 2]. During the optimization process, each particle adjusts its flight direction and step size relying on information extracted from the past experience of itself and its neighbors. Although the search pattern of each individual particle is quite simple, the search behavior of the entire population is very complex, and the population shows great intelligence owing to the cooperative behavior among particles. Due to its simplicity of implementation, PSO has been applied to many academic and real-world applications [3, 4, 5].

Extensive studies reveal that PSO's performance mainly depends on two characteristics [6, 7]: exploration and exploitation. However, there is a conflict between these two capabilities. To be successful, PSO needs to strike a good balance between exploration and exploitation. A common belief is that PSO should start with exploration and then gradually shift to exploitation. Hence, many time-varying strategies have been proposed to regulate the parameters and neighborhood topology of PSO. For example, in the most ubiquitous parameter update rules introduced in [8, 9], the three parameters of PSO are adjusted according to the iteration number so as to meet the different search requirements of different evolutionary stages. The underlying idea of these modifications is to tune particles' learning weights toward their exemplars.
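To make the standard update rules and the time-varying parameter idea concrete, the following is a minimal sketch of a canonical global-best PSO step with a linearly decreasing inertia weight, run on a simple sphere function. It is an illustrative baseline only, not the MSPSO algorithm proposed in this paper; the swarm size, bounds, and the schedules for w, c1, and c2 are assumed values chosen for the example.

```python
import numpy as np

def sphere(x):
    # Simple benchmark objective: f(x) = sum(x_i^2), minimum 0 at the origin.
    return np.sum(x * x, axis=-1)

def pso(f, dim=10, swarm=30, iters=500, lo=-100.0, hi=100.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (swarm, dim))      # particle positions
    v = np.zeros((swarm, dim))                 # particle velocities
    pbest, pbest_f = x.copy(), f(x)            # personal best positions and values
    g = pbest[np.argmin(pbest_f)].copy()       # global best position

    for t in range(iters):
        # Time-varying inertia weight: decreases linearly from 0.9 to 0.4,
        # favoring exploration early and exploitation later (assumed schedule).
        w = 0.9 - 0.5 * t / iters
        c1 = c2 = 2.0                          # acceleration coefficients (assumed)
        r1 = rng.random((swarm, dim))
        r2 = rng.random((swarm, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)

        fx = f(x)
        better = fx < pbest_f                  # update personal bests
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)].copy()   # update global best
    return g, f(g)

best_x, best_f = pso(sphere)
print(best_f)
```

MSPSO departs from this single-swarm pattern: rather than letting all particles follow one global best, it maintains multiple sub-swarms whose number and composition are adapted during the run by the DNS, SRS, and PDS strategies described in the following sections.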