Particle swarm optimization using multi-information characteristics of all personal-best information
- Song Huang^{1},
- Na Tian^{3},
- Yan Wang^{1} and
- Zhicheng Ji^{1, 2}
Received: 9 May 2016
Accepted: 6 September 2016
Published: 21 September 2016
Abstract
Convergence stagnation is the chief difficulty that most particle swarm optimization variants face on hard optimization problems. To address this issue, a novel particle swarm optimization using multi-information characteristics of all personal-best information is developed in this research. In the modified algorithm, two positions are defined from the personal-best positions, and an improved cognition term built from three positions carrying all personal-best information is used in the velocity update equation to enhance the search capability. This strategy steers particles in a better direction by extracting useful information from all the personal-best positions. The validity of the proposed algorithm is assessed on twenty benchmark problems, including unimodal, multimodal, rotated and shifted functions, and the results are compared with those obtained by several published variants of particle swarm optimization. Computational results demonstrate that the proposed algorithm finds the global optimum on several functions and high-quality solutions in most cases, with a fast convergence speed.
Background
Particle swarm optimization (PSO) is a bio-inspired optimization algorithm introduced by Eberhart and Kennedy (1995), inspired by the interaction and communication within bird flocks and fish schools. PSO has attracted a great deal of attention as a treatment for high-dimensional nonlinear optimization problems due to its computational efficiency and simple implementation. With the development of intelligent manufacturing and complex systems, many engineering problems are becoming increasingly difficult to optimize, so time-consuming computation and premature convergence often occur in the optimization process. Therefore, many PSO variants with new techniques have been proposed to address these problems.
Some researchers focused on the three control parameters, namely the two acceleration coefficients and the inertia weight, to develop PSO variants (Beielstein et al. 2002; Zhang et al. 2014; Shi and Eberhart 1998a, b). In Shi and Eberhart (1998b), a linearly decreasing inertia weight particle swarm optimization (LPSO) was developed, and the introduction of this dynamic inertia weight greatly strengthened the performance of the PSO algorithm. In recent research, multi-swarm and multi-layer strategies have proved effective in improving the performance of PSO (Sun and Li 2014; Yadav and Deep 2014; Lim and Isa 2014a; Wang et al. 2014). Sun and Li presented a cooperative particle swarm optimization (TCPSO) with two swarms (a slave swarm and a master swarm) for optimization problems in large-scale search spaces (Sun and Li 2014), and two subswarms using shrinking hypersphere PSO (SHPSO) and DE were used in a new co-swarm PSO for constrained optimization problems (Yadav and Deep 2014). Multi-layer strategies, such as the adaptive two-layer particle swarm optimization algorithm with elitist learning strategy (ATLPSO-ELS) (Lim and Isa 2014a) and multi-layer particle swarm optimization (MLPSO) (Wang et al. 2014), were also used to solve complex problems. PSO with different topologies has different exploration/exploitation abilities and performance (Bonyadi et al. 2014; Lim and Isa 2014b, c). Many new topology strategies [time-adaptive topology (Bonyadi et al. 2014), adaptive time-varying topology connectivity (Lim and Isa 2014b), increasing topology connectivity (Lim and Isa 2014c)] have also been applied to PSO. Compared with a fully-connected or regular topology, these topologies can lead to a different optimization process. In recent years, new techniques such as Levy flight (Haklı and Uğuz 2014), the parallel cell coordinate system (Hu and Yen 2015), competition and cooperation (Li et al. 2015) and orthogonal design (Qin et al. 2015) have also been adopted in PSO.
Many learning strategies have been introduced into PSO to enhance its adaptability to complex optimization problems, since learning behavior in social animals plays a key role in their adaptation to changing environments (Cheng and Jin 2015; Rao and Patel 2013; Lim and Isa 2014d, e; Shi and Eberhart 1999). Cheng and Jin presented a modified particle swarm optimization using a social learning mechanism (SL-PSO) (Cheng and Jin 2015), and the concepts of teachers, tutorial training and self-motivated learning were introduced in teaching–learning-based PSO by Rao and Patel for performance enhancement (Rao and Patel 2013). Using teaching and peer-learning behaviors, a bidirectional teaching and peer-learning PSO (BTPLPSO) (Lim and Isa 2014d) and a two-learning-phase PSO (TPLPSO) (Lim and Isa 2014e) were proposed by Lim and Isa.
Communication and learning are distinguishing features of social animals and improve social efficiency, and an information-sharing mechanism plays a key role in this behavior. To share personal-best information fairly, a particle swarm optimizer using multi-information characteristics of all personal-best information is developed in this paper. In the proposed PSO, two representative positions, which capture the features of all personal-best positions, are defined to acquire the information of all personal-best positions. The cognition term in the velocity update equation is then formed from three positions. Because all personal-best fitnesses are taken into account, each particle can update its velocity and position according to the distribution of personal-best fitnesses. This strategy makes full use of all personal-best information and can correct misguided directions arising from individual personal-best positions.
The rest of this paper is structured as follows. Section “Particle swarm optimizer” presents the theory and formulation of the PSO algorithm and the linearly decreasing inertia weight. Section “Particle swarm optimization using all personal-best information” describes the two representative positions in detail and presents the proposed PSO using multi-information characteristics of all personal-best positions. Numerical results and statistical analysis are shown in section “Experiments and results”. Section “Conclusions” concludes the paper.
Particle swarm optimizer
Velocity and position formulation
Linearly decreasing inertia weight
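The linearly decreasing inertia weight of Shi and Eberhart (1998b) can be sketched as follows; the bounds 0.9 and 0.4 match the ω range used in the comparison experiments later in the paper:

```python
def linear_inertia_weight(t, t_max, w_max=0.9, w_min=0.4):
    # Inertia weight decreases linearly from w_max to w_min over the run,
    # favoring global exploration early and local exploitation late.
    return w_max - (w_max - w_min) * t / t_max
```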
Particle swarm optimization using all personal-best information
Analysis of personal-best information
Learning is a special skill of social animals, allowing members of a group to share information. Cooperative behavior in a swarm is more efficient than individual action because of this rich information exchange. In PSO, each particle provides its personal-best position to guide its flight direction, and the personal-best positions of the whole swarm together imply the distribution of good-fitness regions. Taking full advantage of the multi-information characteristics of all personal-best information helps to discount the erroneous information of particles trapped in local optima. In standard PSO, a personal-best position is used only by its own particle during evolution and does not reflect the fitness distribution across the landscape; misguided personal-best positions, which have no opportunity to be corrected, can make PSO premature. Therefore, two positions that incorporate the personal-best fitness distribution are defined to strengthen each particle’s ability to learn from other particles’ experience, and a cognition term combining three personal-best-derived positions is formed in the velocity update equation to reduce the chance of misguidance. The details of the improved cognition term and the proposed PSO algorithm are as follows.
Detail of improved PSO algorithm
Step 2 Normalization method of personal-best fitness.
The centroid position is defined as the weighted sum of \({\mathbf{p}}_{\text{best}}\) with \(\varvec{\theta}\) to reflect the influence of the personal-best fitnesses. By analogy with the relation between density and mass in physics, regarding personal-best fitness as ‘the density of an object’ and personal-best position as ‘the location in the object’, the position \(\varvec{p}_{\text{centr}}\) can be seen as ‘the centroid of the object’. The centroid of an object is an important indicator of its mass distribution, so \(\varvec{p}_{\text{centr}}\) reflects the distribution of high-quality fitness: it is always close to the area where most good fitnesses are located.
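As a sketch, the fitness-weighted centroid described above might be computed as follows. The exact normalization of Step 2 is not reproduced in this excerpt, so the weighting scheme here, which assigns larger weights to better (lower) personal-best fitnesses, is an assumption:

```python
import numpy as np

def centroid_position(pbest_pos, pbest_fit):
    """Fitness-weighted centroid of all personal-best positions (sketch).

    pbest_pos: (n_particles, dim) array of personal-best positions.
    pbest_fit: (n_particles,) array of personal-best fitnesses (minimized).
    """
    f = np.asarray(pbest_fit, dtype=float)
    # Hypothetical normalization: lower (better) fitness -> larger weight.
    raw = f.max() - f + 1e-12
    theta = raw / raw.sum()                 # weights sum to one
    return theta @ np.asarray(pbest_pos, dtype=float)
```

With this weighting the centroid is pulled toward the region where the better personal bests concentrate, matching the intuition of the density/mass analogy.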
Step 5 Calculate median position \(\varvec{p}_{\text{med}}\) of all personal-best positions.
Step 6 Cognitive guiding position \({\mathbf{p}}_{\text{best}}^{\prime }\).
The cognitive guiding position combines three positions: the personal-best position \({\mathbf{p}}_{\text{best}}\), the centroid position \(\varvec{p}_{\text{centr}}\) and the median position \(\varvec{p}_{\text{med}}\). \({\mathbf{p}}_{\text{best}}\) and \(\varvec{p}_{\text{centr}} - \varvec{p}_{\text{med}}\) are used to ‘pull’ the particle out of local optima, because erroneous information in \({\mathbf{p}}_{\text{best}}\) and \(\varvec{g}_{\text{best}}\) may accelerate premature convergence. \(\varvec{p}_{\text{centr}}\) and \(\varvec{p}_{\text{med}}\) carry all personal-best information and can guide particles in a better direction. The experimental coefficient of 1/2 makes the cognitive guiding position suitable for the improved cognition term.
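Under one plausible reading of Steps 5 and 6 the median and cognitive guiding positions could be sketched as below. The paper's exact combination formula is not reproduced in this excerpt, so the form `pbest + (p_centr - p_med)/2` is only an assumption consistent with the stated coefficient of 1/2:

```python
import numpy as np

def median_position(pbest_pos):
    # Component-wise median of all personal-best positions (Step 5).
    return np.median(np.asarray(pbest_pos, dtype=float), axis=0)

def cognitive_guiding_position(pbest_i, p_centr, p_med):
    # Hypothetical form: nudge pbest toward the region where good fitness
    # concentrates, scaled by the stated experimental coefficient of 1/2.
    return pbest_i + 0.5 * (p_centr - p_med)
```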
The improved cognition term \({\mathbf{a}}_{\text{cog}}\) will make full use of all personal-best fitnesses.
Step 8 Modified velocity update equation.
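One iteration of the modified update can be sketched as follows. Since the equation itself is not shown in this excerpt, the sketch assumes the standard PSO form with the personal-best term replaced by the improved cognition term built on the cognitive guiding positions \({\mathbf{p}}_{\text{best}}^{\prime}\), and a single acceleration coefficient c as in the experimental settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_api_velocity_update(x, v, pbest_prime, gbest, w=0.7, c=2.0,
                            v_max=None):
    """Hypothetical PSO-API velocity/position update (sketch).

    x, v:        (n, d) current positions and velocities
    pbest_prime: (n, d) cognitive guiding positions
    gbest:       (d,)   global-best position
    """
    n, d = x.shape
    r1, r2 = rng.random((n, d)), rng.random((n, d))
    # Improved cognition term uses p'best in place of the plain pbest.
    v_new = w * v + c * r1 * (pbest_prime - x) + c * r2 * (gbest - x)
    if v_max is not None:
        v_new = np.clip(v_new, -v_max, v_max)
    return x + v_new, v_new
```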
Experiments and results
Test benchmark functions
Twenty benchmark problems
Function name | Dimensions | Search range | Global optimum |
---|---|---|---|
f _{1}(x) | 20/30/50 | [−100, 100]^{D} | 0 |
f _{2}(x) | 20/30/50 | [−10, 10]^{D} | 0 |
f _{3}(x) | 20/30/50 | [−100, 100]^{D} | 0 |
f _{4}(x) | 20/30/50 | [−100, 100]^{D} | 0 |
f _{5}(x) | 20/30/50 | [−100, 100]^{D} | 0 |
f _{6}(x) | 20/30/50 | [−1.28, 1.28]^{D} | 0 |
f _{7}(x) | 20/30/50 | [−5.12, 5.12]^{D} | 0 |
f _{8}(x) | 20/30/50 | [−5.12, 5.12]^{D} | 0 |
f _{9}(x) | 20/30/50 | [−32, 32]^{D} | 0 |
f _{10}(x) | 20/30/50 | [−600, 600]^{D} | 0 |
f _{11}(x) | 20/30/50 | [−0.5, 0.5]^{D} | 0 |
f _{12}(x) | 20/30/50 | [−50, 50]^{D} | 0 |
f _{13}(x) | 20/30/50 | [−1, 1]^{D} | 0 |
f _{14}(x) | 20/30/50 | [−5.12, 5.12]^{D} | 0 |
f _{15}(x) | 20/30/50 | [−100, 100]^{D} | 0 |
f _{16}(x) | 20/30/50 | [−100, 100]^{D} | 0 |
f _{17}(x) | 20/30/50 | [−1.28, 1.28]^{D} | 0 |
f _{18}(x) | 20/30/50 | [−100, 100]^{D} | −450 |
f _{19}(x) | 20/30/50 | [−32, 32]^{D} | −140 |
f _{20}(x) | 20/30/50 | [−0.5, 0.5]^{D} | 90 |
- 1.
Sphere Function (unimodal function)
$$f_{1} (x) = \sum\limits_{i = 1}^{n} {x_{i}^{2} }$$ - 2.
Schwefel’s Problem 2.22 (unimodal function)
$$f_{2} (x) = \sum\limits_{i = 1}^{n} {\left| {x_{i} } \right| + \mathop \prod \limits_{i = 1}^{n} \left| {x_{i} } \right|}$$ - 3.
Schwefel’s Problem 1.2 (unimodal function)
$$f_{3} (x) = \sum\limits_{i = 1}^{n} {\left( {\sum\limits_{j = 1}^{i} {x_{j} } } \right)^{2} }$$ - 4.
Schwefel’s Problem 2.21 (unimodal function)
$$f_{4} (x) = \mathop {\hbox{max} }\limits_{i} \{ \left. {\left| {x_{i} } \right|,1 \le i \le n} \right\}$$ - 5.
Step Function (unimodal function)
$$f_{5} (x) = \sum\limits_{i = 1}^{n} {\left( {\left| {x_{i} + 0.5} \right|} \right)}^{2}$$ - 6.
Quartic Function, i.e. Noise (unimodal function)
$$f_{6} (x) = \sum\limits_{i = 1}^{n} {ix_{i}^{4} } + random[0,1)$$ - 7.
Generalized Rastrigin’s Function (multimodal function)
$$f_{7} (x) = \sum\limits_{i = 1}^{n} {\left[ {x_{i}^{2} - 10\cos (2\pi x_{i} ) + 10} \right]}$$ - 8.
Non-continuous Rastrigin’s Function (multimodal function)
$$\begin{aligned} f_{8} (x) = \sum\limits_{i = 1}^{n} {\left[ {y_{i}^{2} - 10\cos (2\pi y_{i} ) + 10} \right]} \hfill \\ {\text{where}}\quad {\kern 1pt} y_{i} = \left\{ {\begin{array}{*{20}c} {x_{i} } \\ {\frac{{round(2x_{i} )}}{2}} \\ \end{array} } \right.\quad {\kern 1pt} \begin{array}{*{20}c} {\left| {x_{i} } \right| \le 0.5} \\ {\left| {x_{i} } \right| \ge 0.5} \\ \end{array} \hfill \\ \end{aligned}$$ - 9.
Ackley’s Function (multimodal function)
$$f_{9} (x) = - 20\exp \left( { - 0.2\sqrt {\frac{1}{n}\sum\limits_{i = 1}^{n} {x_{i}^{2} } } } \right) - \exp \left( {\frac{1}{n}\sum\limits_{i = 1}^{n} {\cos 2\pi x_{i} } } \right) + 20 + e$$ - 10.
Generalized Griewank Function (multimodal function)
$$f_{10} (x) = \frac{1}{4000}\sum\limits_{i = 1}^{n} {x_{i}^{2} } - \prod\limits_{i = 1}^{n} {\cos \left(\frac{{x_{i} }}{\sqrt i }\right)} + 1$$ - 11.
Weierstrass Function (multimodal function)
$$\begin{aligned} & f_{11} (x) = \sum\limits_{i = 1}^{n} {\left( {\sum\limits_{K = 0}^{k\rm{max} } {\left[ {a^{k} \cos \left( {2\pi b^{k} \left( {x_{i} + 0.5} \right)} \right)} \right]} } \right) - n} \sum\limits_{K = 0}^{k\rm{max} } {\left[ {a^{k} \cos \left( {2\pi b^{k} \times 0.5} \right)} \right]} \hfill \\&\quad {\text{where}}\quad {\kern 1pt} a = 0.5,\quad b = 3,\quad k\hbox{max} = 20 \hfill \\ \end{aligned}$$ - 12.
Generalized Penalized Function (multimodal function)
$$\begin{aligned}& f_{12} (x) = \tfrac{\pi }{n}\left\{ {10\sin \left( {\pi y_{1} } \right) + \sum\limits_{i = 1}^{n - 1} {\left( {y_{i} - 1} \right)^{2} \left[ {1 + 10\sin^{2} \left( {\pi y_{i + 1} } \right)} \right] + \left( {y_{n} - 1} \right)^{2} } } \right\} + \sum\limits_{i = 1}^{n} u (x_{i} ,10,100,4) \hfill \\&\quad y_{i} = 1 + \frac{{x_{i} + 1}}{4},u(x_{i} ,a,k,m) = \left\{ {\begin{array}{*{20}l} {k\left( {x_{i} - a} \right)^{m} } \\ 0 \\ {k\left( { - x_{i} - a} \right)^{m} } \\ \end{array} } \right.\begin{array}{*{20}c} {x_{i} > a} \\ { - a \le x_{i} \le a} \\ {x_{i} < a} \\ \end{array} \hfill \\ \end{aligned}$$ - 13.
Cosine mixture Problem (multimodal function)
$$f_{13} (x) = \sum\limits_{i = 1}^{n} {x_{i}^{2} } - 0.1\sum\limits_{i = 1}^{n} {\cos \left( {5\pi x_{i} } \right)}$$ - 14.
Rotated Rastrigin Function (multimodal function)
$$f_{14} (x) = \sum\limits_{i = 1}^{n} {\left[ {y_{i}^{2} - 10\cos (2\pi y_{i} ) + 10} \right], \quad y = M \times x}$$ - 15.
Rotated Salomon Function (multimodal function)
$$f_{15} (x) = 1 - \cos \left( {2\pi \sqrt {\sum\limits_{i = 1}^{n} {y_{i}^{2} } } } \right) + 0.1\sqrt {\sum\limits_{i = 1}^{n} {y_{i}^{2} } }, \quad y = M \times x$$ - 16.
Rotated Rosenbrock Function (multimodal function)
$$f_{16} (x) = \sum\limits_{i = 1}^{n - 1} {\left[ {100\left( {y_{i}^{2} - y_{i + 1} } \right)^{2} + \left( {y_{i} - 1} \right)^{2} } \right], \quad y = M \times x}$$ - 17.
Rotated Elliptic Function (unimodal function)
$$f_{17} (x) = \sum\limits_{i = 1}^{n} {\left( {10^{6} } \right)^{{{{\left( {i - 1} \right)} \mathord{\left/ {\vphantom {{\left( {i - 1} \right)} {\left( {n - 1} \right)}}} \right. \kern-0pt} {\left( {n - 1} \right)}}}} y_{i}^{2}, \quad y = M \times x}$$ - 18.
Shifted Schwefel’s Problem 2.21 (unimodal function)
$$\begin{aligned} &f_{18} (x) = \mathop {\hbox{max} }\limits_{i} \left\{ {\left| {y_{i} } \right|,1 \le i \le n} \right\} + fbias_{18}, \quad y = x - o \hfill \\&\quad {\text{where}}\quad {\kern 1pt} fbias_{18} = - 450. \hfill \\ \end{aligned}$$ - 19.
Shifted Rotated Ackley’s Function (multimodal function)
$$\begin{aligned} & f_{19} (x) = - 20\exp \left( { - 0.2\sqrt {\frac{1}{n}\sum\limits_{i = 1}^{n} {z_{i}^{2} } } } \right) - \exp \left( {\frac{1}{n}\sum\limits_{i = 1}^{n} {\cos 2\pi z_{i} } } \right) + 20 + e + fbias_{19} \hfill \\& \quad{\text{where}}\quad {\kern 1pt} fbias_{19} = - 140, \quad z = \left( {x - o} \right) \times M^{\prime } \hfill \\ \end{aligned}$$ - 20.
Shifted Rotated Weierstrass Function (multimodal function)
$$\begin{aligned} &f_{20} (x) = \sum\limits_{i = 1}^{n} {\left( {\sum\limits_{K = 0}^{k\hbox{max} } {\left[ {a^{k} \cos \left( {2\pi b^{k} \left( {z_{i} + 0.5} \right)} \right)} \right]} } \right) - n} \sum\limits_{K = 0}^{k\hbox{max} } {\left[ {a^{k} \cos \left( {2\pi b^{k} \times 0.5} \right)} \right]} + fbias_{20} \hfill \\&\quad {\text{where}}\quad {\kern 1pt} a = 0.5,\quad b = 3,\quad k\hbox{max} = 20,\quad fbias_{20} = 90,\quad z = \left( {x - o} \right) \times M^{\prime } \hfill \\ \end{aligned}$$
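As a quick sanity check, a few of the benchmarks above (f1, f7, f9, f10) translate directly to code; all four attain their global optimum of 0 at the origin:

```python
import numpy as np

def sphere(x):        # f1: Sphere Function
    return np.sum(x**2)

def rastrigin(x):     # f7: Generalized Rastrigin's Function
    return np.sum(x**2 - 10 * np.cos(2 * np.pi * x) + 10)

def ackley(x):        # f9: Ackley's Function
    n = x.size
    return (-20 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / n))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / n) + 20 + np.e)

def griewank(x):      # f10: Generalized Griewank Function
    i = np.arange(1, x.size + 1)
    return np.sum(x**2) / 4000 - np.prod(np.cos(x / np.sqrt(i))) + 1
```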
Experimental analysis
Validity of the proposed strategy
To validate the proposed strategy, the PSO-API and LPSO-API algorithms were implemented in MATLAB 2011a and compared with the PSO and LPSO algorithms. All twenty benchmarks were tested in the experiments. Parameter settings of the four algorithms are as follows: the population size is 30; c _{1} and c _{2} are both equal to 2 in PSO and LPSO, and c is equal to 2 in PSO-API and LPSO-API; ω is equal to 0.7 in PSO and PSO-API, and follows the linearly decreasing scheme of section “Linearly decreasing inertia weight” in LPSO and LPSO-API (Shi and Eberhart 1999). Dimensions of 20, 30 and 50 are adopted, the number of generations is 5000, and 20 independent trials are run on each problem. Tables 7, 9 and 11 in “Appendix” show the comparisons of the 20-, 30- and 50-dimensional results in terms of average best fitness (AVE), rank of average best fitness (Rank), median best fitness (MED), standard deviation (SD), average rank (AR) and final rank (FR).
Wilcoxon’s rank sum test is commonly used to analyze whether two data sets are statistically different from each other; a \(p{\text{ value}}\) (p), \({\text{h-value}}\) (h) and \({\text{zval}}\) (z) are obtained from the test. A significance level must be set, and a significance level of 0.05 means that a difference is judged significant at the 95 % confidence level. The \({\text{h-value}}\) takes only three values, 1, 0 and −1, which indicate that the proposed algorithm has significantly better, statistically equivalent, or significantly worse performance than the compared algorithm, respectively (Beheshti et al. 2013). Tables 8, 10 and 12 in “Appendix” show the Wilcoxon rank sum test results for the 20-, 30- and 50-dimensional problems; the last three rows of these tables list the numbers of functions for which the \({\text{h-value}}\) equals 1, 0 or −1. Note that the best results for each benchmark function are marked in bold in Tables 7–12.
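For reference, the h-value convention above can be reproduced with SciPy's rank-sum test. This is a sketch, assuming minimization, so "better" means a lower median end-of-run fitness:

```python
import numpy as np
from scipy.stats import ranksums

def h_value(results_a, results_b, alpha=0.05):
    # Wilcoxon rank-sum test between two samples of end-of-run fitnesses:
    # h = 1 if A is significantly better (lower), h = -1 if significantly
    # worse, h = 0 if no significant difference at level alpha.
    z, p = ranksums(results_a, results_b)
    if p >= alpha:
        return 0
    return 1 if np.median(results_a) < np.median(results_b) else -1
```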
For all three dimensionalities, the final rank obtained by the LPSO-API algorithm takes first place and that of the PSO-API algorithm is second. The final rank reflects the comprehensive performance of an algorithm over a suite of benchmark problems, so it is clearly seen that the LPSO-API and PSO-API algorithms are superior to the LPSO and PSO algorithms in solution quality.
From the data in Tables 8, 10 and 12, the number of \({\text{h-value}} = 1\) results is 16, 17 and 17 for the PSO-API algorithm and 13, 14 and 17 for the LPSO-API algorithm on the 20-, 30- and 50-dimensional problems, respectively, with only a few \({\text{h-value}} = -1\) and \({\text{h-value}} = 0\) results. This means that the results of the LPSO-API and PSO-API algorithms statistically significantly outperform those of the PSO and LPSO algorithms. Moreover, comparing the counts across the 20-, 30- and 50-dimensional problems, the higher the dimension, the larger the number of \({\text{h-value}} = 1\) results for the LPSO-API and PSO-API algorithms, which illustrates that, to some degree, they perform better on high-dimensional problems than on low-dimensional ones. From the above analysis, the proposed strategy of using all personal-best information is valid and efficient for solving most optimization problems, especially in high dimensions.
Comparison experiments with other PSO variants
Parameters settings of PSO variants
PSO variant | Topology | Parameters settings |
---|---|---|
PSO-cf | Local ring | ω:0.9 − 0.4, c _{1} = c _{2} = 2.0 |
FIPS | Local ring | χ = 0.729, ∑ c _{ i } = 4.1 |
HPSO-TVAC | Global star | ω:0.9 − 0.4, c _{1}:2.5 − 0.5, c _{2}:0.5 − 2.5 |
DMS-PSO | Dynamic multi-swarm | ω:0.9 − 0.4, m = 3, R = 5 |
VPSO | Local von neumann | ω:0.9 − 0.4, c _{1} = c _{2} = 2.0 |
CLPSO | Comprehensive learning | ω:0.9 − 0.4, C = 1.49455, m = 7 |
APSO | Global star | \(\omega :0.9,c_{1} = c_{2} = 2.0,\delta :{\text{random in [0}} . 0 5 { 0} . 1 ] , { }\sigma : 1 { - 0} . 1\) |
Numerical results for the comparisons
Name | PSO-cf | FIPS | HPSO-TVAC | DMS-PSO | VPSO | CLPSO | APSO | PSO-API | |
---|---|---|---|---|---|---|---|---|---|
\(\, f_{1} (x)\) | Best | 4.77e−29 | 3.21e−30 | 3.38e−41 | 3.85e−54 | 5.11e−38 | 1.89e−19 | 1.45e−150 | 0.00 |
SD | 1.13e−28 | 1.91e−30 | 8.50e−41 | 1.75e−53 | 1.91e−37 | 1.49e−19 | 5.73e−150 | 0.00 | |
Rank | 7 | 6 | 4 | 3 | 5 | 8 | 2 | 1 | |
\(\, f_{2} (x)\) | Best | 2.03e−20 | 1.32e−17 | 6.9e−23 | 2.61e−29 | 6.29e−27 | 1.01e−13 | 5.15e−84 | 3.95e−323 |
SD | 2.89e−20 | 7.86e−18 | 6.89e−23 | 6.6e−29 | 8.68e−27 | 6.51e−14 | 1.44e−83 | 5.13e−322 | |
Rank | 6 | 7 | 5 | 3 | 4 | 8 | 2 | 1 | |
\(\, f_{3} (x)\) | Best | 18.60 | 0.77 | 2.89e−7 | 47.5 | 1.44 | 395 | 1.0e−10 | 0.00 |
SD | 30.71 | 0.86 | 2.97e−7 | 56.4 | 1.55 | 142 | 2.13e−10 | 0.00 | |
Rank | 6 | 4 | 3 | 7 | 5 | 8 | 2 | 1 | |
\(\, f_{5} (x)\) | Best | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
SD | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | |
Rank | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | |
\(\, f_{6} (x)\) | Best | 1.49e−2 | 2.55e−3 | 5.54e−2 | 1.1e−2 | 1.08e−2 | 3.92e−3 | 4.66e−3 | 5.88e−001 |
SD | 5.66e−3 | 6.25e−4 | 2.08e−2 | 3.94e−3 | 3.24e−3 | 1.14e−3 | 1.7e−3 | 2.73e−001 | |
Rank | 6 | 1 | 7 | 5 | 4 | 2 | 3 | 8 | |
\(\, f_{7} (x)\) | Best | 34.90 | 29.98 | 2.39 | 28.1 | 34.09 | 2.57e−11 | 5.8e−15 | 0.00 |
SD | 7.25 | 10.92 | 3.71 | 6.42 | 8.07 | 6.64e–11 | 1.01e−14 | 0.00 | |
Rank | 8 | 6 | 4 | 5 | 7 | 3 | 2 | 1 | |
\(\, f_{8} (x)\) | Best | 30.40 | 21.33 | 35.91 | 1.83 | 32.8 | 0.167 | 4.14e−16 | 0.00 |
SD | 9.23 | 9.46 | 9.49 | 2.65 | 6.49 | 0.397 | 1.45e−15 | 0.00 | |
Rank | 6 | 5 | 8 | 4 | 7 | 3 | 2 | 1 | |
\(\, f_{9} (x)\) | Best | 1.85e−14 | 7.69e−15 | 2.06e−10 | 8.52e−15 | 1.14e−14 | 2.01e−12 | 1.11e−14 | 3.55e−015 |
SD | 4.80e−15 | 9.33e−16 | 9.45e−10 | 1.79e−15 | 3.48e−15 | 9.22e−13 | 3.55e−15 | 0.00e+000 | |
Rank | 6 | 2 | 8 | 3 | 5 | 7 | 4 | 1 | |
\(\, f_{10} (x)\) | Best | 1.10e−2 | 9.04e−4 | 1.07e−2 | 1.31e−2 | 1.31e−2 | 6.45e−13 | 1.67e−2 | 0.00 |
SD | 1.60e−2 | 2.78e−3 | 1.14e−2 | 1.73e−2 | 1.35e−2 | 2.07e−12 | 2.41e−2 | 0.00 | |
Rank | 5 | 3 | 4 | 7 | 6 | 2 | 8 | 1 | |
\(\, f_{12} (x)\) | Best | 2.18e−30 | 1.22e−31 | 7.07e−30 | 2.05e−32 | 3.46e−3 | 1.59e−21 | 3.76e−31 | 9.72e−002 |
SD | 5.14e−30 | 4.85e−32 | 4.05e−30 | 8.12e−33 | 1.89e−2 | 1.93e−21 | 1.2e−30 | 1.97e−002 | |
Rank | 4 | 2 | 5 | 1 | 7 | 6 | 3 | 8 | |
AR | 5.5 | 3.7 | 4.9 | 3.9 | 5.1 | 4.8 | 2.9 | 2.4 | |
FR | 8 | 3 | 6 | 4 | 7 | 5 | 2 | 1 |
From Table 3, the Rank data show that the PSO-API algorithm obtains the best results on f _{1}(x), f _{2}(x), f _{3}(x), f _{5}(x), f _{7}(x), f _{8}(x), f _{9}(x) and f _{10}(x) and performs worst on f _{6}(x) and f _{12}(x). Table 3 also shows that the FR obtained by the PSO-API algorithm is better than that of the other seven PSO variants, so it can be concluded that the PSO-API algorithm has the best comprehensive performance among them. Consequently, the comparisons indicate that the PSO-API algorithm has the best overall performance of the compared PSO variants and is an effective method for solving a variety of optimization problems.
Computational time of six PSO algorithms
Function | PSO-cf | FIPS | DMS-PSO | CLPSO | LPSO | PSO-API | |
---|---|---|---|---|---|---|---|
\(\, f_{1} (x)\) | AV(CPU)/Rank | 6.09e−001/1 | 4.19e+000/6 | 4.04e+000/5 | 3.60e+000/4 | 7.59e−001/2 | 1.28e+000/3 |
\(\, f_{2} (x)\) | AV(CPU)/Rank | 1.76e+000/3 | 4.10e+000/5 | 4.85e+000/6 | 3.55e+000/4 | 9.99e−001/1 | 1.51e+000/2 |
\(\, f_{3} (x)\) | AV(CPU)/Rank | 1.01e+001/2 | 1.16e+001/3 | 1.28e+001/4 | 9.95e+000/1 | 1.44e+001/5 | 1.61e+001/6 |
\(\, f_{4} (x)\) | AV(CPU)/Rank | 3.06e+000/3 | 4.19e+000/5 | 5.50e+000/6 | 3.67e+000/4 | 1.13e+000/1 | 1.66e+000/2 |
\(\, f_{5} (x)\) | AV(CPU)/Rank | 3.31e+000/3 | 4.12e+000/6 | 4.11e+000/5 | 3.78e+000/4 | 8.22e−001/1 | 1.27e+000/2 |
\(\, f_{6} (x)\) | AV(CPU)/Rank | 5.78e+000/1 | 7.04e+000/4 | 8.51e+000/6 | 6.79e+000/3 | 6.51e+000/2 | 7.35e+000/5 |
\(\, f_{7} (x)\) | AV(CPU)/Rank | 3.17e+000/3 | 4.49e+000/5 | 4.56e+000/6 | 3.98e+000/4 | 1.03e+000/1 | 1.46e+000/2 |
\(\, f_{8} (x)\) | AV(CPU)/Rank | 5.05e+000/2 | 6.59e+000/5 | 6.96e+000/6 | 5.89e+000/4 | 4.72e+000/1 | 5.31e+000/3 |
\(\, f_{9} (x)\) | AV(CPU)/Rank | 4.83e+000/3 | 6.31e+000/5 | 7.23e+000/6 | 5.66e+000/4 | 3.18e+000/1 | 3.89e+000/2 |
\(\, f_{10} (x)\) | AV(CPU)/Rank | 4.21e+000/1 | 6.53e+000/5 | 6.83e+000/6 | 6.16e+000/4 | 4.74e+000/2 | 5.20e+000/3 |
\(\, f_{11} (x)\) | AV(CPU)/Rank | 5.09e+001/2 | 5.17e+001/3 | 6.52e+001/4 | 4.78e+001/1 | 9.52e+001/5 | 9.80e+001/6 |
\(\, f_{12} (x)\) | AV(CPU)/Rank | 6.19e+000/1 | 1.22e+001/3 | 1.25e+001/4 | 1.12e+001/2 | 1.80e+001/6 | 1.75e+001/5 |
\(\, f_{13} (x)\) | AV(CPU)/Rank | 4.52e−002/1 | 3.98e+000/6 | 3.91e+000/5 | 3.51e+000/4 | 1.07e+000/2 | 1.46e+000/3 |
\(\, f_{14} (x)\) | AV(CPU)/Rank | 3.66e+000/3 | 4.78e+000/5 | 4.97e+000/6 | 4.24e+000/4 | 2.51e+000/1 | 2.94e+000/2 |
\(\, f_{15} (x)\) | AV(CPU)/Rank | 3.81e+000/3 | 4.90e+000/5 | 5.00e+000/6 | 4.47e+000/4 | 2.72e+000/1 | 3.15e+000/2 |
\(\, f_{16} (x)\) | AV(CPU)/Rank | 4.49e+000/2 | 5.57e+000/5 | 6.62e+000/6 | 5.10e+000/4 | 4.14e+000/1 | 4.51e+000/3 |
\(\, f_{17} (x)\) | AV(CPU)/Rank | 4.80e+000/3 | 5.90e+000/5 | 1.02e+001/6 | 4.67e+000/1 | 4.79e+000/2 | 5.35e+000/4 |
\(\, f_{18} (x)\) | AV(CPU)/Rank | 3.12e−003/1 | 4.60e+000/5 | 5.70e+000/6 | 3.67e+000/4 | 2.19e+000/2 | 2.58e+000/3 |
\(\, f_{19} (x)\) | AV(CPU)/Rank | 1.56e−003/1 | 5.84e+000/5 | 6.25e+000/6 | 4.35e+000/3 | 4.13e+000/2 | 4.57e+000/4 |
\(\, f_{20} (x)\) | AV(CPU)/Rank | 2.65e+001/2 | 2.76e+001/3 | 3.25e+001/4 | 2.00e+001/1 | 4.99e+001/6 | 4.95e+001/5 |
AR | 2.05 | 4.7 | 5.45 | 3.2 | 2.25 | 3.35 | |
FR | 1 | 5 | 6 | 3 | 2 | 4 |
Comparing the Rank values of the LPSO and PSO-API algorithms, we can conclude that adding the proposed policy to the original PSO increases the computational time. In Table 4, AR reflects the comprehensive time-consumption order of each algorithm over the twenty benchmarks. The AR values of PSO-cf and LPSO are the smallest among the six algorithms, which illustrates that PSO-cf and LPSO have the best CPU times and are faster than the proposed algorithm. The AR values of the PSO-API algorithm and CLPSO are very close to each other, which demonstrates that they have similar overall time consumption, while the AR values of FIPS and DMS-PSO are 4.7 and 5.45, both worse than that of the PSO-API algorithm. Although the PSO-API algorithm only ranks fourth in FR, the extra time is worth spending to improve the accuracy of the PSO algorithm. It is clear from the above comparisons of accuracy and time consumption that the PSO-API algorithm strikes a good balance between performance and time complexity.
Comparisons experiments with similar PSO algorithms
Comparison results with FSS algorithm
Name | PSO-API | FSS | |
---|---|---|---|
\(\, f_{3} (x)\) | Avg. best fitness/SD | 3.883e−090/9.563e−090 | 8.080e−002/2.200e−002 |
\(\, f_{7} (x)\) | Avg. best fitness/SD | 0.000e+000/0.000e+000 | 1.338e+001/4.005e+000 |
\(\, f_{9} (x)\) | Avg. best fitness/SD | 3.789e−015/9.013e−016 | 4.000e−002/2.000e−002 |
\(\, f_{10} (x)\) | Avg. best fitness/SD | 0.000e+000/0.000e+000 | 2.700e−003/2.000e−003 |
\(\, f_{21} (x)\) | Avg. best fitness/SD | 2.635e+001/3.145e−001 | 1.611e+001/7.290e−001 |
Comparison results with CenterPSO algorithm
Name | Size | PSO-API | CenterPSO | |
---|---|---|---|---|
\(\, f_{7} (x)\) | 20 | Avg. best fitness/SD | 0.000e+000/0.000e+000 | 3.359e+001/9.562e+000 |
40 | Avg. best fitness/SD | 0.000e+000/0.000e+000 | 2.668e+001/7.764e+000 | |
80 | Avg. best fitness/SD | 0.000e+000/0.000e+000 | 2.276e+001/6.758e+000 | |
160 | Avg. best fitness/SD | 2.020e−010/2.020e−009 | 2.141e+001/5.949e+000 | |
\(\, f_{10} (x)\) | 20 | Avg. best fitness/SD | 2.311e−004/1.711e−003 | 1.200e−002/1.650e−002 |
40 | Avg. best fitness/SD | 7.841e−005/7.841e−004 | 8.800e−003/1.190e−002 | |
80 | Avg. best fitness/SD | 8.442e−006/8.442e−005 | 9.300e−003/1.200e−002 | |
160 | Avg. best fitness/SD | 7.308e−015/7.151e−014 | 1.200e−002/1.680e−002 | |
\(\, f_{21} (x)\) | 20 | Avg. best fitness/SD | 2.702e+001/3.831e−001 | 1.319e+002/1.358e+002 |
40 | Avg. best fitness/SD | 2.649e+001/3.276e−001 | 8.717e+001/6.365e+001 | |
80 | Avg. best fitness/SD | 2.626e+001/1.987e−001 | 6.234e+001/5.940e+001 | |
160 | Avg. best fitness/SD | 2.601e+001/2.379e−001 | 4.299e+001/4.499e+001 |
From the data in Table 5, it can be seen that PSO-API obtains a better average best fitness and standard deviation than the FSS algorithm on all five benchmarks except the Generalized Rosenbrock Function, and on that function PSO-API and FSS obtain results of the same order of magnitude. From Table 6, the results obtained by PSO-API with all population sizes are better than those obtained by the CenterPSO algorithm on all three benchmarks. Therefore, the statistical analysis indicates that the proposed algorithm has better performance than the FSS and CenterPSO algorithms. For most of the benchmarks, all of the above experiments indicate that PSO-API is a high-performance algorithm.
Conclusions
In this work, to make full use of the multi-information characteristics of all personal-best information, an improved PSO algorithm using three positions built from all personal-best information has been adopted to enhance performance. In the proposed algorithm, an improved cognition term using the personal-best position, the centroid position and the median position is introduced into the velocity update process of PSO. To validate this strategy, a set of benchmark functions, including unimodal, multimodal, rotated and shifted functions with 20, 30 and 50 dimensions, has been optimized. Experimental results show that using multi-information characteristics of all personal-best information is a valid strategy for improving PSO’s performance. Moreover, the PSO-API algorithm has also been compared with several PSO variants and some algorithms similar to the proposed one. Numerical results show that the PSO-API algorithm has higher precision and satisfactory performance. To sum up, the proposed strategy enhances the search ability of PSO, and the PSO-API algorithm is an efficient PSO variant that obtains promising solutions for most benchmark functions.
Declarations
Authors’ contributions
S.H. carried out the study, collected data, designed the experiments, implemented the simulation, analyzed data and wrote the main manuscript. N.T. provided some intellectual information and revised the manuscript. Y.W. gave technical support and helped to the design of the study. Z.J. made the general supervision of the research. All authors read and approved the final manuscript.
Competing interests
The authors declare that they have no competing interests in this study.
Funding
In this work, the design of the study and collection, analysis, and interpretation of data are funded by the National High-tech Research and Development Projects of China under Grant No: 2014AA041505 and the writing of the manuscript is funded by the National Natural Science Foundation of China under Grant No: 61572238 and by the Provincial Outstanding Youth Foundation of Jiangsu Province under Grant No: BK20160001.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
References
- Beheshti Z, Shamsuddin SMH, Hasan S (2013) MPSO: median-oriented particle swarm optimization. Appl Math Comput 219(11):5817–5836
- Beielstein T, Parsopoulos KE, Vrahatis MN (2002) Tuning PSO parameters through sensitivity analysis. Universität Dortmund
- Bonyadi MR, Li X, Michalewicz Z (2014) A hybrid particle swarm with a time-adaptive topology for constrained optimization. Swarm Evol Comput 18:22–37
- Carmelo Filho JA, De Lima Neto FB, Lins AJCC et al (2008) A novel search algorithm based on fish school behavior. In: Proceedings of the 2008 IEEE international conference on systems, man and cybernetics, pp 2646–2651
- Cheng R, Jin Y (2015) A social learning particle swarm optimization algorithm for scalable optimization. Inf Sci 291:43–60
- Deep K, Thakur M (2007) A new crossover operator for real coded genetic algorithms. Appl Math Comput 188(1):895–911
- Eberhart RC, Kennedy J (1995) A new optimizer using particle swarm theory. In: Proceedings of the sixth international symposium on micro machine and human science, vol 1, pp 39–43
- Haklı H, Uğuz H (2014) A novel particle swarm optimization algorithm with Levy flight. Appl Soft Comput 23:333–345
- Hu W, Yen GG (2015) Adaptive multiobjective particle swarm optimization based on parallel cell coordinate system. IEEE Trans Evol Comput 19(1):1–18
- Kennedy J, Eberhart R (1995) Particle swarm optimization. In: Proceedings of IEEE international conference on neural networks, Perth, Australia, vol 4, pp 1942–1948
- Kennedy J, Mendes R (2002) Population structure and particle swarm performance. In: Proceedings of the 2002 congress on evolutionary computation, vol 2, pp 1671–1676
- Kennedy J, Mendes R (2006) Neighborhood topologies in fully informed and best-of-neighborhood particle swarms. IEEE Trans Syst Man Cybern C Appl Rev 36(4):515–519
- Li Y, Zhan ZH, Lin S et al (2015) Competitive and cooperative particle swarm optimization with information sharing mechanism for global optimization problems. Inf Sci 293:370–382
- Liang JJ, Suganthan PN (2005) Dynamic multi-swarm particle swarm optimizer. In: Proceedings of the 2005 IEEE swarm intelligence symposium, pp 124–129
- Liang JJ, Qin AK, Suganthan PN et al (2006) Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Trans Evol Comput 10(3):281–295
- Lim WH, Isa NAM (2014a) An adaptive two-layer particle swarm optimization with elitist learning strategy. Inf Sci 273:49–72
- Lim WH, Isa NAM (2014b) Particle swarm optimization with adaptive time-varying topology connectivity. Appl Soft Comput 24:623–642
- Lim WH, Isa NAM (2014c) Particle swarm optimization with increasing topology connectivity. Eng Appl Artif Intell 27:80–102
- Lim WH, Isa NAM (2014d) Bidirectional teaching and peer-learning particle swarm optimization. Inf Sci 280:111–134
- Lim WH, Isa NAM (2014e) Teaching and peer-learning particle swarm optimization. Appl Soft Comput 18:39–58
- Liu Y, Qin Z, Shi Z et al (2007) Center particle swarm optimization. Neurocomputing 70(4):672–679
- Mendes R, Kennedy J, Neves J (2004) The fully informed particle swarm: simpler, maybe better. IEEE Trans Evol Comput 8(3):204–210
- Qin Q, Cheng S, Zhang Q et al (2015) Multiple strategies based orthogonal design particle swarm optimizer for numerical optimization. Comput Oper Res 60:91–110
- Rao RV, Patel V (2013) An improved teaching-learning-based optimization algorithm for solving unconstrained optimization problems. Scientia Iranica 20(3):710–720
- Ratnaweera A, Halgamuge SK, Watson HC (2004) Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients. IEEE Trans Evol Comput 8(3):240–255
- Shi Y, Eberhart RC (1998a) Parameter selection in particle swarm optimization. In: Evolutionary programming VII, vol 1447. Springer, Berlin, pp 591–600
- Shi Y, Eberhart R (1998b) A modified particle swarm optimizer. In: Proceedings of the 1998 IEEE international conference on evolutionary computation, vol 6, pp 69–73
- Shi Y, Eberhart R (1998c) A modified particle swarm optimizer. In: IEEE international conference on evolutionary computation, the 1998 IEEE international conference on computational intelligence, pp 69–73
- Shi Y, Eberhart RC (1999) Empirical study of particle swarm optimization. In: Proceedings of the 1999 IEEE congress on evolutionary computation, vol 3, pp 1945–1950
- Suganthan PN, Hansen N, Liang JJ et al (2005) Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization. In: Proceedings of IEEE congress on evolutionary computation, pp 1–50
- Sun S, Li J (2014) A two-swarm cooperative particle swarms optimization. Swarm Evol Comput 15:1–18
- Wang L, Yang B, Chen Y (2014) Improving particle swarm optimization using multi-layer searching strategy. Inf Sci 274:70–94
- Yadav A, Deep K (2014) An efficient co-swarm particle swarm optimization for non-linear constrained optimization. J Comput Sci 5(2):258–268
- Yao X, Liu Y, Lin G (1999) Evolutionary programming made faster. IEEE Trans Evol Comput 3(2):82–102
- Zhan ZH, Zhang J, Li Y et al (2009) Adaptive particle swarm optimization. IEEE Trans Syst Man Cybern B Cybern 39(6):1362–1381
- Zhang W, Ma D, Wei J et al (2014) A parameter selection strategy for particle swarm optimization based on particle positions. Expert Syst Appl 41(7):3576–3584