The sine cosine algorithm (SCA) is a relatively novel population-based optimization technique that has proven competitive with other algorithms and has received significant interest from researchers in different fields. However, like other population-based algorithms, the SCA tends to become trapped in local optima and suffers from unbalanced exploitation. Additionally, to the best of our knowledge, the existing SCA and its variants have not been applied to high-dimensional global optimization problems. This paper presents an improved version of the SCA (ISCA) for solving high-dimensional problems. A modified position-updating equation that introduces an inertia weight is proposed to accelerate convergence and avoid falling into local optima. In addition, to balance the exploration and exploitation of the SCA, we present a new nonlinear decreasing strategy for the conversion parameter based on the Gaussian function. The effectiveness of the proposed ISCA is evaluated on 24 high-dimensional benchmark functions (D = 30, 100, 500, 1000, and 5000), large-scale global optimization problems from the IEEE CEC2010 competition, and several real-world engineering applications. The comparisons show that the proposed ISCA escapes from local optima more reliably and converges faster than both the traditional SCA and other population-based algorithms.
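To make the two modifications concrete, the sketch below shows one iteration of an SCA-style position update extended with an inertia weight and a Gaussian-shaped nonlinear decrease of the conversion parameter r1. The weight schedule, the Gaussian constant `sigma`, and the base value `a` are illustrative assumptions for this sketch, not the paper's exact formulas.

```python
import math
import random

def isca_step(positions, best, t, T, a=2.0, sigma=0.35):
    """One ISCA-style iteration (illustrative sketch).

    positions : list of candidate solutions (each a list of floats)
    best      : best solution found so far (the destination point P)
    t, T      : current iteration index and total iteration budget
    a, sigma  : assumed constants for the r1 schedule (hypothetical values)
    """
    # Gaussian-shaped nonlinear decrease of the conversion parameter r1
    # (assumed form; the standard SCA uses a linear decrease r1 = a - t*a/T).
    r1 = a * math.exp(-((t / T) ** 2) / (2 * sigma ** 2))
    # Linearly decreasing inertia weight w, a common choice borrowed from
    # PSO-style schedules (assumed bounds 0.9 -> 0.4).
    w = 0.9 - 0.5 * t / T
    new_positions = []
    for x in positions:
        new_x = []
        for j, xj in enumerate(x):
            r2 = 2 * math.pi * random.random()   # controls step direction
            r3 = 2 * random.random()             # weights the destination
            r4 = random.random()                 # switches sine/cosine branch
            step = r1 * (math.sin(r2) if r4 < 0.5 else math.cos(r2))
            # Inertia-weighted update: w*x_j + step * |r3*P_j - x_j|
            new_x.append(w * xj + step * abs(r3 * best[j] - xj))
        new_positions.append(new_x)
    return new_positions
```

Because the Gaussian schedule keeps r1 large for a longer fraction of the run before decaying, such a sketch spends more iterations exploring (|step| > 1 is likelier) before the inertia weight and r1 jointly shrink the moves for exploitation.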
During the past decade, high-dimensional optimization problems have become a popular study area, and various population-based algorithms have been applied to solve them. Wang and Gao (2014) presented a DE algorithm with a cooperative co-evolutionary selection operator for solving high-dimensional optimization problems. Their algorithm designed a local selection operator by decomposing the high-dimensional problem into a number of subproblems and assigning a local fitness function to evaluate each subcomponent. In Long, Jiao, Liang, and Tang (2018), an exploration-enhanced grey wolf optimizer (EEGWO) was proposed to solve high-dimensional numerical optimization problems. The EEGWO designed a modified position-updating equation to enhance exploration and presented a nonlinear control parameter strategy to balance exploration and exploitation. In Trivedi, Srinivasan, Biswas, and Reindl (2015), a novel hybrid of GA and differential evolution (hGADE) was proposed to solve high-dimensional unit commitment scheduling problems. In hGADE, a problem-specific heuristic initial population generation method and a replacement strategy based on preserving infeasible solutions in the population were incorporated to enhance the search ability of the hybridized variant. Tuo et al. (2015) presented a novel harmony search algorithm based on a dynamic dimensionality-reduction adjustment and a dynamic fret width strategy for high-dimensional multimodal optimization problems. Li, Wang, Yan, and Li (2015) proposed a hybrid algorithm based on particle swarm and artificial bee colony (PS-ABC) for high-dimensional optimization problems. The proposed PS-ABC combined the local search phase of PSO with two global search phases of ABC to locate the global optimum.
In Chu, Gao, and Sorooshian (2011), an innovative evolutionary algorithm called the shuffled complex evolution with principal components analysis-University of California at Irvine (SP-UCI) was introduced to solve high-dimensional global optimization problems. Rodriguez, Lozano, Blum, and Garcia-Martinez (2013) presented an iterated greedy algorithm to solve large-scale unrelated parallel machine scheduling problems. In Xue, Zhong, Zhuang, and Xu (2014), an ensemble evolution algorithm with self-adaptive learning population search techniques (EEA-SLPS) was proposed to solve high-dimensional optimization problems. In EEA-SLPS, the population was divided into three subpopulations, and three sub-algorithms were adopted to evolve them, respectively. Liu and Zhou (2014) presented an improved PSO algorithm that combines simulated annealing (SA), coevolution theory, quantum behavior theory, and a diversity-guided mutation strategy to solve high-dimensional complex problems. In Wang, Rahnamayan, and Wu (2013), a parallel DE with self-adapting control parameters and generalized opposition-based learning was proposed to solve high-dimensional optimization problems.