 Research
 Open Access
Particle swarm optimization using multi-information characteristics of all personal-best information
SpringerPlus volume 5, Article number: 1632 (2016)
Abstract
Convergence stagnation is the chief difficulty for most particle swarm optimization variants when solving hard optimization problems. To address this issue, a novel particle swarm optimization using multi-information characteristics of all personal-best information is developed in our research. In the modified algorithm, two positions are defined from the personal-best positions, and an improved cognition term built from three positions of all personal-best information is used in the velocity update equation to enhance the search capability. This strategy can make particles fly in a better direction by discovering useful information from all the personal-best positions. The validity of the proposed algorithm is assessed on twenty benchmark problems including unimodal, multimodal, rotated and shifted functions, and the results are compared with those obtained by some published variants of particle swarm optimization in the literature. Computational results demonstrate that the proposed algorithm finds several global optima and high-quality solutions in most cases with a fast convergence speed.
Background
Particle swarm optimization (PSO) is a bio-inspired optimization algorithm introduced by Eberhart and Kennedy (1995), enlightened by the interaction and communication of bird flocking or fish schooling. PSO has attracted a great deal of attention as a treatment for high-dimensional nonlinear optimization problems due to its good computational efficiency and simple implementation. With the development of intelligent manufacturing and complex systems, many engineering problems are becoming increasingly complex to optimize, and thus time-consuming computation and premature convergence often occur in complicated optimization processes. Therefore, many PSO variants with new techniques have been proposed to address the above problems.
Some researchers got insight into three control parameters, namely the acceleration coefficients and the inertia weight, to develop PSO variants (Beielstein et al. 2002; Zhang et al. 2014; Shi and Eberhart 1998a, b). In Shi and Eberhart (1998b), the linearly decreasing inertia weight particle swarm optimization (LPSO) was developed with a modified inertia weight, and the introduction of this dynamic inertia weight greatly strengthened the performance of the PSO algorithm. In recent research, multiple-swarm and multiple-layer strategies have been proved effective for improving the performance of PSO (Sun and Li 2014; Yadav and Deep 2014; Lim and Isa 2014a; Wang et al. 2014). Sun and Li presented a cooperative particle swarm optimization (TCPSO) with two swarms (a slave swarm and a master swarm) for optimization problems in large-scale search spaces (Sun and Li 2014), and two sub-swarms using shrinking hypersphere PSO (SHPSO) and DE were also used in a new co-swarm PSO for constrained optimization problems (Yadav and Deep 2014). Multiple-layer strategies, such as the adaptive two-layer particle swarm optimization algorithm with elitist learning strategy (ATLPSO-ELS) (Lim and Isa 2014a) and multi-layer particle swarm optimization (MLPSO) (Wang et al. 2014), were also used to solve complex problems. PSO with different topologies has different exploration/exploitation abilities and performance (Bonyadi et al. 2014; Lim and Isa 2014b, c). Many new topology strategies [time-adaptive topology (Bonyadi et al. 2014), adaptive time-varying topology connectivity (Lim and Isa 2014b), increasing topology connectivity (Lim and Isa 2014c)] were also applied to PSO. Compared with a fully-connected or regular topology, these topologies can lead to a different optimization process. In recent years, new techniques such as Levy flight (Haklı and Uğuz 2014), the parallel cell coordinate system (Hu and Yen 2015), competition and cooperation (Li et al. 2015) and orthogonal design (Qin et al. 2015) have also been adopted in PSO.
Many learning strategies have been introduced to PSO to enhance its adaptability for complex optimization problems, as learning behavior stemming from social animals plays a key role in animals' adaptation to a changing environment (Cheng and Jin 2015; Rao and Patel 2013; Lim and Isa 2014d, e; Shi and Eberhart 1999). Cheng and Jin presented a modified particle swarm optimization using a social learning mechanism (SL-PSO) (Cheng and Jin 2015), and the concepts of teachers, tutorial training and self-motivated learning were proposed in teaching–learning-based PSO by Rao and Patel for performance enhancement (Rao and Patel 2013). Using teaching and peer-learning behaviors, a bidirectional teaching and peer-learning PSO (BTPLPSO) (Lim and Isa 2014d) and a two-learning-phase PSO (TPLPSO) (Lim and Isa 2014e) were proposed by Lim and Isa.
Communication and learning behavior is a distinguishing feature among social animals and improves social efficiency, and an information-sharing mechanism plays a key role in this behavior. To share personal-best information fairly, a particle swarm optimizer using several multi-information characteristics of all personal-best information is developed in this paper. In the proposed PSO, two representative positions, which represent the features of all personal-best positions, are defined to acquire the information of all personal-best positions. Then the cognition term in the velocity update equation is formed from three positions. Owing to the effect of all personal-best fitnesses, each particle can update its velocity and position according to the distribution of personal-best fitnesses. This strategy can make full use of all personal-best information and correct some erroneously guided directions of the personal-best positions.
The rest of this paper is structured as follows. Section "Particle swarm optimizer" presents the theory and formulation of the PSO algorithm and the linearly decreasing inertia weight. In section "Particle swarm optimization using all personal-best information", the details of the two representative positions are described and the proposed PSO using several multi-information characteristics of all personal-best positions is provided. Numerical results and statistical analysis are shown in section "Experiments and results". Section "Conclusions" concludes this paper.
Particle swarm optimizer
Velocity and position formulation
The particle swarm optimizer is inspired by the foraging behaviors of fish and birds, which are simplified as a swarm of particles mimicking their key behaviors. As a swarm of n particles searches the feasible space, each particle's position represents a potential solution of the optimization problem, and the swarm can find high-quality solutions as the particles update their velocities and positions. Assuming the decision vector has m variables, the position and velocity of particle i are represented by the m-dimensional vectors \(\varvec{x}_{i} = (x_{i1} ,x_{i2} , \ldots ,x_{{i{\text{m}}}} )\) and \(\varvec{v}_{i} = (v_{i1} ,v_{i2} , \ldots ,v_{{i{\text{m}}}} )\). Two positions, named the personal-best position and the global-best position, are defined in PSO to update the velocities and guide the swarm. The personal-best position of particle i is denoted as \(\varvec{p}_{{{\text{best}},i}} = (p_{{{\text{best}},i1}} ,p_{{{\text{best}},i2}} , \ldots ,p_{{{\text{best}},i{\text{m}}}} )\) and the global-best position of the swarm is denoted as \(\varvec{g}_{\text{best}} = (g_{{{\text{best}},1}} ,g_{{{\text{best}},2}} , \ldots ,g_{{{\text{best}},{\text{m}}}} )\). To sum up, the velocity \(\varvec{v}_{i}^{t + 1}\) and position \(\varvec{x}_{i}^{t + 1}\) of particle i can be expressed by Eqs. (1) and (2).
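In their standard form, these update rules read:

$$\varvec{v}_{i}^{t + 1} = \omega \varvec{v}_{i}^{t} + c_{1} r_{1} \left( {\varvec{p}_{{{\text{best}},i}} - \varvec{x}_{i}^{t} } \right) + c_{2} r_{2} \left( {\varvec{g}_{\text{best}} - \varvec{x}_{i}^{t} } \right)$$

$$\varvec{x}_{i}^{t + 1} = \varvec{x}_{i}^{t} + \varvec{v}_{i}^{t + 1}$$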
where c _{1} and c _{2} are the cognitive factor and the social factor, ω is the inertia weight, r _{1} and r _{2} are two random real numbers uniformly distributed in (0, 1), and t is the current generation. According to the theory of PSO, the personal experience and the global experience pull the particle closer to them so that it can reach a new promising position.
Linearly decreasing inertia weight
Appropriate selection of the inertia weight can balance global exploration and local exploitation during the evolution process. A large ω benefits the global search while a small value contributes to local exploitation. The linearly decreasing inertia weight adopted in PSO (LPSO) significantly improves the performance of PSO on various optimization problems, and the inertia weight ω is given by Eq. (3):
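The linearly decreasing schedule of Eq. (3) takes the standard form

$$\omega = \omega_{\max } - \frac{{\left( {\omega_{\max } - \omega_{\min } } \right)t}}{T}$$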
where \({\text{T}}\) is the maximal generation, and \({\upomega}_{\rm max }\) and \({\upomega}_{\rm{min} }\) are the upper and lower limits. Numerical experiments illustrated the impact of ω, and 0.9 (upper value) and 0.4 (lower value) are suggested (Shi and Eberhart 1999).
From the above description, LPSO pseudocode is shown in Algorithm 1.
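As a concrete illustration of Algorithm 1, the following is a minimal Python sketch of LPSO on the Sphere function (the paper's experiments use MATLAB; the function and variable names below are our own, not the paper's):

```python
import random

def lpso(f, dim, bounds, n_particles=30, max_gen=500,
         c1=2.0, c2=2.0, w_max=0.9, w_min=0.4, seed=0):
    """Minimize f over [lo, hi]^dim with a linearly decreasing inertia weight."""
    rng = random.Random(seed)
    lo, hi = bounds
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]                 # personal-best positions
    pbest_f = [f(xi) for xi in x]               # personal-best fitnesses
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]    # global-best position/fitness
    for t in range(max_gen):
        w = w_max - (w_max - w_min) * t / max_gen   # Eq. (3)
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                # clamp the new position to the search range
                x[i][d] = min(hi, max(lo, x[i][d] + v[i][d]))
            fx = f(x[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = x[i][:], fx
                if fx < gbest_f:
                    gbest, gbest_f = x[i][:], fx
    return gbest, gbest_f

sphere = lambda x: sum(xi * xi for xi in x)     # f_1 in the benchmark suite
best, best_f = lpso(sphere, dim=10, bounds=(-100, 100))
```

With these settings the swarm converges to a near-zero Sphere value well within 500 generations.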
Particle swarm optimization using all personal-best information
Analysis of personal-best information
Learning is a special skill of social animals, which can share information with their group members. The cooperative behavior of a swarm is more efficient than an individual acting alone because of the fruitful information exchanged through communication. In PSO, each particle provides its personal-best position to guide its flying direction. The whole set of personal-best positions of the swarm implies the distribution of fruitful, good-fitness-related information. Taking full advantage of the multi-information characteristics of all personal-best information helps to ignore the erroneous information of the few particles trapped in local optima. In the basic theory of PSO, a personal-best position is used only by its own particle in the evolutionary process and does not reflect the influence of the fitness distribution over the landscape. Misguided information in personal-best positions, which has no opportunity to be corrected, will make PSO premature. Therefore, two positions, which incorporate the influence of the personal-best fitness distribution, are defined to strengthen a particle's ability to learn from other particles' experience. Then a cognition term with three defined personal-best positions in the velocity update equation is formed to reduce the chance of misguidance. The details of the improved cognition term and the proposed PSO algorithm are as follows.
Detail of improved PSO algorithm
Step 1 Calculate the fitnesses of all personal-best positions, and then find the minimal fitness and the maximal fitness among these personal-best fitnesses as follows:
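That is,

$$f_{\min } = \mathop {\min }\limits_{1 \le i \le n} f\left( {\varvec{p}_{{{\text{best}},i}} } \right),\quad f_{\max } = \mathop {\max }\limits_{1 \le i \le n} f\left( {\varvec{p}_{{{\text{best}},i}} } \right)$$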
where f _{min} and f _{max} stand for the minimal fitness and the maximal fitness of the personal-best positions, and f denotes the fitness function.
Step 2 Normalize the personal-best fitnesses.
As the fitness value varies over a wide range in different optimization problems, a robust way to suitably reflect the influence of fitness is to normalize the personal-best fitnesses. For a minimization problem, the smaller the fitness value, the stronger the influence of the personal-best position. According to this feature, the normalization method is given by Eq. (6).
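A normalization consistent with this requirement (smaller fitness, larger normalized value) is

$$r_{i} = \frac{{f_{\max } - f\left( {\varvec{p}_{{{\text{best}},i}} } \right)}}{{f_{\max } - f_{\min } }}$$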
where r _{ i } stands for the normalized value of the ith personal-best fitness.
Step 3 After normalizing the personal-best fitnesses, we also need the proportions of these fitnesses. The proportion of the ith personal-best fitness is denoted as θ _{ i }. Thus, from the normalized value of the ith personal-best fitness, θ _{ i } can be obtained as follows:
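$$\theta_{i} = \frac{{r_{i} }}{{\sum\nolimits_{j = 1}^{n} {r_{j} } }}$$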
Step 4 Calculate the centroid position \(\varvec{p}_{\text{centr}}\) of all personal-best positions:
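$$\varvec{p}_{\text{centr}} = \sum\limits_{i = 1}^{n} {\theta_{i} \varvec{p}_{{{\text{best}},i}} }$$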
The centroid position is defined as the weighted sum of \({\mathbf{p}}_{\text{best}}\) with \(\varvec{\theta}\) to reflect the influence of the personal-best fitnesses. By analogy with the relation between an object's density and its mass in physics, regarding the personal-best fitness as 'the density of an object' and the personal-best position as 'the location in the object', the position \(\varvec{p}_{\text{centr}}\) can be seen as 'the centroid of the object'. The centroid of an object is an important factor reflecting the distribution of mass, and thus \(\varvec{p}_{\text{centr}}\) reflects the distribution of high-quality fitness. The centroid position is always close to the area where most good fitnesses are located.
Step 5 Calculate the median position \(\varvec{p}_{\text{med}}\) of all personal-best positions.
\(\varvec{p}_{\text{med}}\) represents the position of the median personal-best fitness and reflects the distribution of high-quality fitness from another perspective. Because \(\varvec{p}_{\text{med}}\) is obtained without any weighted sum, it can avoid the influence of some bad personal-best positions. Algorithm 2 gives the pseudocode for finding the median fitness \(\theta_{\text{med}}\) and the median position \(\varvec{p}_{\text{med}}\).
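Steps 1–5 can be sketched compactly in Python (an illustrative sketch, not the paper's code; the tie-breaking for the median index is our own assumption):

```python
def representative_positions(pbest, pbest_f):
    """Centroid and median positions from all personal-best data (Steps 1-5).

    pbest  : list of personal-best positions (lists of floats)
    pbest_f: their fitness values (minimization)
    """
    f_min, f_max = min(pbest_f), max(pbest_f)                   # Step 1
    if f_max == f_min:
        r = [1.0] * len(pbest_f)                                # degenerate swarm: uniform weights
    else:
        r = [(f_max - fi) / (f_max - f_min) for fi in pbest_f]  # Step 2, normalization
    total = sum(r)
    theta = [ri / total for ri in r]                            # Step 3, proportions
    dim = len(pbest[0])
    p_centr = [sum(theta[i] * pbest[i][d] for i in range(len(pbest)))
               for d in range(dim)]                             # Step 4, weighted centroid
    # Step 5: position whose personal-best fitness is the median one
    order = sorted(range(len(pbest_f)), key=lambda i: pbest_f[i])
    p_med = pbest[order[len(order) // 2]][:]
    return p_centr, p_med
```

Note how the centroid is dragged toward the low-fitness (good) personal bests, while the median position ignores the weights entirely.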
Step 6 Cognitive guiding position \({\mathbf{p}}_{\text{best}}^{\prime }\).
In the proposed PSO, the cognitive guiding position using the above defined positions is calculated according to the following equations:
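The printed equation is not legible in this copy; a plausible reconstruction, consistent with the three positions and the experimental coefficient of 1/2 described next, is

$$\varvec{p}_{{{\text{best}},i}}^{\prime } = \varvec{p}_{{{\text{best}},i}} + \frac{1}{2}\left( {\varvec{p}_{\text{centr}} - \varvec{p}_{\text{med}} } \right)$$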
The cognitive guiding position combines three positions: the personal-best position \({\mathbf{p}}_{\text{best}}\), the centroid position \(\varvec{p}_{\text{centr}}\) and the median position \(\varvec{p}_{\text{med}}\). \({\mathbf{p}}_{\text{best}}\) and \(\varvec{p}_{\text{centr}} - \varvec{p}_{\text{med}}\) are used to 'pull' the particle to escape a local optimum, because erroneous information in \({\mathbf{p}}_{\text{best}}\) and \(\varvec{g}_{\text{best}}\) may accelerate premature convergence. \(\varvec{p}_{\text{centr}}\) and \(\varvec{p}_{\text{med}}\) carry all personal-best information and can guide particles in a better direction. The experimental coefficient of 1/2 makes the cognitive guiding position suitable for the improved cognition term.
Step 7 Improved cognition term \({\mathbf{a}}_{\text{cog}}\).
The improved cognition term \({\mathbf{a}}_{\text{cog}}\) makes full use of all personal-best fitnesses.
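With a single acceleration coefficient c (as used in the experiments below), the improved cognition term can be written as

$$\varvec{a}_{\text{cog}} = c\,r_{1} \left( {\varvec{p}_{{{\text{best}},i}}^{\prime } - \varvec{x}_{i}^{t} } \right)$$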
Step 8 Modified velocity update equation.
In this step, the original cognition term in the velocity update equation of the PSO and LPSO algorithms is replaced with the improved cognition term \({\mathbf{a}}_{\text{cog}}\). Therefore, the particle swarm optimizer using multi-information characteristics of all personal-best information (PSO-API) and the linearly decreasing inertia weight PSO-API (LPSO-API) can be obtained using this modified velocity update equation. Taking the LPSO-API algorithm as an example, each particle's velocity is updated as in Eq. (10).
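Substituting the improved cognition term into the velocity update gives a rule of the form

$$\varvec{v}_{i}^{t + 1} = \omega \varvec{v}_{i}^{t} + c\,r_{1} \left( {\varvec{p}_{{{\text{best}},i}}^{\prime } - \varvec{x}_{i}^{t} } \right) + c\,r_{2} \left( {\varvec{g}_{\text{best}} - \varvec{x}_{i}^{t} } \right)$$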
Ignoring the influence of the current velocity and all the coefficients, four positions (\({\mathbf{p}}_{\text{best}}\), \(\varvec{g}_{\text{best}}\), \(\varvec{p}_{\text{centr}}\) and \(\varvec{p}_{\text{med}}\)) influence the velocity update in Eq. (10). In PSO, \(\Delta \varvec{v}^{\prime } = \varvec{g}_{\text{best}} + \varvec{p}_{{{\text{best}},i}}\) is introduced to show the influence of \(\varvec{g}_{\text{best}}\) and \(\varvec{p}_{{{\text{best}},i}}\). As illustrated in Fig. 1b, if the current-iteration \(\varvec{g}_{\text{best}}\) is a local optimum, \(\Delta \varvec{v}^{\prime }\) will accelerate the particles' fall into the local-optimum region. Compared with PSO, \(\varvec{p}_{\text{centr}}\) and \(\varvec{p}_{\text{med}}\) are added to the velocity update equation in PSO-API. In Fig. 1a, the white circle points represent the personal-best positions with worse fitnesses and the grey circle points represent the personal-best positions with better fitnesses. Given the distribution of these points, the locations of \(\varvec{p}_{\text{centr}}\) and \(\varvec{p}_{\text{med}}\), which are calculated by Eq. (7) and Algorithm 2, are shown by the yellow circle points in Fig. 1. Defined by all personal-best positions and their fitnesses, \(\varvec{p}_{\text{centr}}\) is closer to the region where many personal-best positions with good fitnesses are located. Although the fitness around the real global-best position is worse than that of the local optimum \(\varvec{g}_{\text{best}}\), the personal-best positions are also prone to be distributed at positions with good fitnesses around the real global-best position, which is the black rhombic point in Fig. 1a. Regarding \(\varvec{p}_{\text{centr}}\) as a reference point, \(\Delta \varvec{v}^{\prime \prime } = \varvec{p}_{{{\text{best}},i}} - \varvec{p}_{\text{med}}\), which carries all personal-best information, represents the influence of the good-fitness distribution. As illustrated in Fig. 1c, α represents the direction adjustment caused by \(\Delta \varvec{v}^{\prime \prime }\), and \(\Delta \varvec{v}^{\prime \prime }\) makes particles adjust their directions toward the real global-best position. Constantly adjusted by α during the search process, the particles have a greater probability of flying to the real global-best position. Besides, \(\Delta \varvec{v}^{\prime \prime }\) takes a small value when a uniform fitness distribution occurs in the search process and then has little effect on the particles. In that case, only \({\mathbf{p}}_{\text{best}}\) and \(\varvec{g}_{\text{best}}\) influence the particles' trajectories, and PSO-API has the same performance as PSO. Therefore, three terms (\(\Delta \varvec{v}^{\prime \prime }\), \({\mathbf{p}}_{\text{best}}\) and \(\varvec{g}_{\text{best}}\)) contribute to adjusting the velocity, and the different 'pull' and 'push' influences give PSO-API a stable performance over a variety of problems. The flowchart of the LPSO-API algorithm is shown in Fig. 2.
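The modified velocity update can be sketched as a single Python function (an illustrative sketch of the reconstructed update under the assumptions above; `rand` is injectable so the update can be tested deterministically):

```python
import random

def psoapi_velocity(v, x, pbest_i, gbest, p_centr, p_med,
                    w=0.7, c=2.0, rand=random.random):
    """One velocity update for a single particle (reconstructed form of Eq. (10))."""
    # cognitive guiding position p'_best (Step 6, experimental coefficient 1/2)
    p_guide = [pb + 0.5 * (pc - pm)
               for pb, pc, pm in zip(pbest_i, p_centr, p_med)]
    new_v = []
    for d in range(len(v)):
        r1, r2 = rand(), rand()
        new_v.append(w * v[d]
                     + c * r1 * (p_guide[d] - x[d])   # improved cognition term a_cog
                     + c * r2 * (gbest[d] - x[d]))    # unchanged social term
    return new_v
```

When the swarm's fitness distribution is uniform, `p_centr` and `p_med` nearly coincide, `p_guide` collapses to the plain personal best, and the update reduces to standard PSO, matching the discussion above.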
Experiments and results
Test benchmark functions
In order to assess the performance of the proposed algorithm, twenty benchmark problems including unimodal, multimodal, rotated and shifted functions selected from the literature (Deep and Thakur 2007; Liang et al. 2006; Suganthan et al. 2005; Yao et al. 1999) are used. Note that all the problems are minimization problems and each has only one global optimum. The function names, dimensions, search ranges and global optimum values are listed in Table 1 and the formulations of these problems are as follows:

1.
Sphere Function (unimodal function)
$$f_{1} (x) = \sum\limits_{i = 1}^{n} {x_{i}^{2} }$$ 
2.
Schwefel's Problem 2.22 (unimodal function)
$$f_{2} (x) = \sum\limits_{i = 1}^{n} {\left| {x_{i} } \right|} + \mathop \prod \limits_{i = 1}^{n} \left| {x_{i} } \right|$$ 
3.
Schwefel's Problem 1.2 (unimodal function)
$$f_{3} (x) = \sum\limits_{i = 1}^{n} {\left( {\sum\limits_{j = 1}^{i} {x_{j} } } \right)^{2} }$$ 
4.
Schwefel's Problem 2.21 (unimodal function)
$$f_{4} (x) = \mathop {\hbox{max} }\limits_{i} \left\{ {\left| {x_{i} } \right|,1 \le i \le n} \right\}$$ 
5.
Step Function (unimodal function)
$$f_{5} (x) = \sum\limits_{i = 1}^{n} {\left( {\left\lfloor {x_{i} + 0.5} \right\rfloor } \right)}^{2}$$ 
6.
Quartic Function, i.e. Noise (unimodal function)
$$f_{6} (x) = \sum\limits_{i = 1}^{n} {ix_{i}^{4} } + random[0,1)$$ 
7.
Generalized Rastrigin’s Function (multimodal function)
$$f_{7} (x) = \sum\limits_{i = 1}^{n} {\left[ {x_{i}^{2} - 10\cos (2\pi x_{i} ) + 10} \right]}$$ 
8.
Noncontinuous Rastrigin’s Function (multimodal function)
$$\begin{aligned} f_{8} (x) = \sum\limits_{i = 1}^{n} {\left[ {y_{i}^{2} - 10\cos (2\pi y_{i} ) + 10} \right]} \hfill \\ {\text{where}}\quad y_{i} = \left\{ {\begin{array}{*{20}c} {x_{i} } \\ {\frac{{round(2x_{i} )}}{2}} \\ \end{array} } \right.\quad \begin{array}{*{20}c} {\left| {x_{i} } \right| < 0.5} \\ {\left| {x_{i} } \right| \ge 0.5} \\ \end{array} \hfill \\ \end{aligned}$$ 
9.
Ackley’s Function (multimodal function)
$$f_{9} (x) = - 20\exp \left( { - 0.2\sqrt {\frac{1}{n}\sum\limits_{i = 1}^{n} {x_{i}^{2} } } } \right) - \exp \left( {\frac{1}{n}\sum\limits_{i = 1}^{n} {\cos 2\pi x_{i} } } \right) + 20 + e$$ 
10.
Generalized Griewank Function (multimodal function)
$$f_{10} (x) = \frac{1}{4000}\sum\limits_{i = 1}^{n} {x_{i}^{2} } - \prod\limits_{i = 1}^{n} {\cos \left(\frac{{x_{i} }}{\sqrt i }\right)} + 1$$ 
11.
Weierstrass Function (multimodal function)
$$\begin{aligned} & f_{11} (x) = \sum\limits_{i = 1}^{n} {\left( {\sum\limits_{k = 0}^{k\hbox{max} } {\left[ {a^{k} \cos \left( {2\pi b^{k} \left( {x_{i} + 0.5} \right)} \right)} \right]} } \right) - n} \sum\limits_{k = 0}^{k\hbox{max} } {\left[ {a^{k} \cos \left( {2\pi b^{k} \times 0.5} \right)} \right]} \hfill \\&\quad {\text{where}}\quad a = 0.5,\quad b = 3,\quad k\hbox{max} = 20 \hfill \\ \end{aligned}$$ 
12.
Generalized Penalized Function (multimodal function)
$$\begin{aligned}& f_{12} (x) = \tfrac{\pi }{n}\left\{ {10\sin^{2} \left( {\pi y_{1} } \right) + \sum\limits_{i = 1}^{n - 1} {\left( {y_{i} - 1} \right)^{2} \left[ {1 + 10\sin^{2} \left( {\pi y_{i + 1} } \right)} \right] + \left( {y_{n} - 1} \right)^{2} } } \right\} + \sum\limits_{i = 1}^{n} u (x_{i} ,10,100,4) \hfill \\&\quad y_{i} = 1 + \frac{{x_{i} + 1}}{4},\quad u(x_{i} ,a,k,m) = \left\{ {\begin{array}{*{20}l} {k\left( {x_{i} - a} \right)^{m} } \\ 0 \\ {k\left( { - x_{i} - a} \right)^{m} } \\ \end{array} } \right.\begin{array}{*{20}c} {x_{i} > a} \\ { - a \le x_{i} \le a} \\ {x_{i} < - a} \\ \end{array} \hfill \\ \end{aligned}$$ 
13.
Cosine mixture Problem (multimodal function)
$$f_{13} (x) = \sum\limits_{i = 1}^{n} {x_{i}^{2} } - 0.1\sum\limits_{i = 1}^{n} {\cos \left( {5\pi x_{i} } \right)}$$ 
14.
Rotated Rastrigin Function (multimodal function)
$$f_{14} (x) = \sum\limits_{i = 1}^{n} {\left[ {y_{i}^{2} - 10\cos (2\pi y_{i} ) + 10} \right]}, \quad y = M \times x$$ 
15.
Rotated Salomon Function (multimodal function)
$$f_{15} (x) = 1 - \cos \left( {2\pi \sqrt {\sum\limits_{i = 1}^{n} {y_{i}^{2} } } } \right) + 0.1\sqrt {\sum\limits_{i = 1}^{n} {y_{i}^{2} } }, \quad y = M \times x$$ 
16.
Rotated Rosenbrock Function (multimodal function)
$$f_{16} (x) = \sum\limits_{i = 1}^{n - 1} {\left[ {100\left( {y_{i}^{2} - y_{i + 1} } \right)^{2} + \left( {y_{i} - 1} \right)^{2} } \right]}, \quad y = M \times x$$ 
17.
Rotated Elliptic Function (unimodal function)
$$f_{17} (x) = \sum\limits_{i = 1}^{n} {\left( {10^{6} } \right)^{{\left( {i - 1} \right)}/{\left( {n - 1} \right)}}} y_{i}^{2}, \quad y = M \times x$$ 
18.
Shifted Schwefel's Problem 2.21 (unimodal function)
$$\begin{aligned} &f_{18} (x) = \mathop {\hbox{max} }\limits_{i} \left\{ {\left| {y_{i} } \right|,1 \le i \le n} \right\} + fbias_{18}, \quad y = x - o \hfill \\&\quad {\text{where}}\quad fbias_{18} = - 450. \hfill \\ \end{aligned}$$ 
19.
Shifted Rotated Ackley’s Function (multimodal function)
$$\begin{aligned} & f_{19} (x) = - 20\exp \left( { - 0.2\sqrt {\frac{1}{n}\sum\limits_{i = 1}^{n} {z_{i}^{2} } } } \right) - \exp \left( {\frac{1}{n}\sum\limits_{i = 1}^{n} {\cos 2\pi z_{i} } } \right) + 20 + e + fbias_{19} \hfill \\& \quad{\text{where}}\quad fbias_{19} = - 140, \quad z = \left( {x - o} \right) \times M^{\prime } \hfill \\ \end{aligned}$$ 
20.
Shifted Rotated Weierstrass Function (multimodal function)
$$\begin{aligned} &f_{20} (x) = \sum\limits_{k = 0}^{{}} \end{aligned}$$
Experimental analysis
Validity of the proposed strategy
To validate the proposed strategy, the PSO-API and LPSO-API algorithms are implemented in MATLAB 2011a and compared with the PSO and LPSO algorithms. All twenty benchmarks are tested in the experiments. The parameter settings of the four algorithms are as follows: the population size is 30; c _{1} and c _{2} are both equal to 2 in the PSO and LPSO algorithms, and c is equal to 2 in the PSO-API and LPSO-API algorithms; ω is equal to 0.7 in the PSO and PSO-API algorithms and follows the linearly decreasing scheme of section "Linearly decreasing inertia weight" in the LPSO and LPSO-API algorithms (Shi and Eberhart 1999). Dimensions of 20, 30 and 50 are adopted in our experiments and the number of generations is 5000. Also, 20 independent trials are performed on each problem. Tables 7, 9 and 11 in "Appendix" show the comparisons of the 20-, 30- and 50-dimensional results in terms of the average best fitness (AVE), rank (Rank) of average best fitness, median best fitness (MED), standard deviation (SD), average rank (AR) and final rank (FR) of average best fitness.
Wilcoxon's rank sum test is commonly used to analyze whether two data sets are statistically different from each other, and the \(p{\text{ value}}\) (p), \(h{\text{ value}}\) (h) and \({\text{zval}}\) (z) are acquired in the test. A significance level needs to be set, and a significance level of 0.05 indicates that an observed difference is significant with 95 % confidence. In Wilcoxon's rank sum test, the \(h{\text{ value}}\) takes only three values, 1, 0 and −1, which indicate that the proposed algorithm has a significantly better, statistically equivalent and significantly worse performance than the compared algorithm, respectively (Beheshti et al. 2013). Tables 8, 10 and 12 in "Appendix" show the 20-, 30- and 50-dimensional results of Wilcoxon's rank sum test. In detail, the last three rows of Tables 8, 10 and 12 list the numbers of cases in which the \(h{\text{ value}}\) equals 1, 0 or −1. Note that the best results for each benchmark function are marked in bold in Tables 7–12.
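The zval reported by the test can be reproduced with the normal approximation to the rank-sum statistic (a pure-Python sketch without the continuity correction that statistical packages may apply; ties receive average ranks):

```python
from math import sqrt

def ranksum_z(a, b):
    """Normal-approximation z statistic of Wilcoxon's rank-sum test.

    For a minimization comparison, a strongly negative z favours sample `a`
    (its errors occupy the lower ranks of the pooled data).
    """
    data = sorted([(v, 'a') for v in a] + [(v, 'b') for v in b],
                  key=lambda t: t[0])
    ranks = [0.0] * len(data)
    i = 0
    while i < len(data):                      # assign average ranks to tie groups
        j = i
        while j + 1 < len(data) and data[j + 1][0] == data[i][0]:
            j += 1
        avg = (i + j) / 2 + 1                 # 1-based average rank of the group
        for k in range(i, j + 1):
            ranks[k] = avg
        i = j + 1
    W = sum(r for r, (_, lab) in zip(ranks, data) if lab == 'a')
    n1, n2 = len(a), len(b)
    mu = n1 * (n1 + n2 + 1) / 2               # mean of W under H0
    sigma = sqrt(n1 * n2 * (n1 + n2 + 1) / 12)  # std of W under H0
    return (W - mu) / sigma
```

At the 0.05 significance level, |z| > 1.96 corresponds to h = ±1 and |z| ≤ 1.96 to h = 0.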
From the 20-, 30- and 50-dimensional results in Tables 7, 9 and 11, it is clear that the LPSO-API algorithm obtains the minimum AVE on twelve, fourteen and fifteen of the twenty benchmark problems, respectively, and the PSO-API algorithm obtains the minimum AVE on ten, ten and eight of the twenty benchmark problems, respectively. Obviously, the LPSO-API and PSO-API algorithms obtain more minimum results than the LPSO and PSO algorithms in terms of AVE on the suite of benchmark problems. It is worth pointing out that several global optima are also obtained by the LPSO-API and PSO-API algorithms. The numbers of best AVE obtained by the four algorithms are described in Fig. 3.
For the three different dimensions, the final rank obtained by the LPSO-API algorithm always takes first place and that obtained by the PSO-API algorithm is always second. The final rank reflects the comprehensive performance of an algorithm on a suite of benchmark problems. From the ranks, it is clearly seen that the LPSO-API and PSO-API algorithms are superior to the LPSO and PSO algorithms in terms of high-quality solutions.
From the data in Tables 8, 10 and 12, the number of cases with \(h = 1\) is 16, 17 and 17 for the PSO-API algorithm and 13, 14 and 17 for the LPSO-API algorithm on the 20-, 30- and 50-dimensional problems, while only a few cases with \(h = - 1\) or \(h = 0\) exist. This means that the results of the LPSO-API and PSO-API algorithms statistically significantly outperform those of the PSO and LPSO algorithms. Also, by comparing the number of cases with \(h = 1\) across the 20-, 30- and 50-dimensional problems, we can see that the higher the dimension, the larger this number is for the LPSO-API and PSO-API algorithms. This illustrates that, to some degree, the LPSO-API and PSO-API algorithms perform better on high-dimensional problems than on low-dimensional ones. From the above analysis, the proposed strategy of using all personal-best information is valid and efficient for solving most optimization problems, especially in high dimensions.
Six representative benchmark problems, two unimodal problems \(f_{1} (x)\) and f _{5}(x), two multimodal problems f _{7}(x) and \(f_{11} (x)\), a rotated problem \(f_{14} (x)\) and a shifted problem \(f_{18} (x)\), are chosen to describe the process of fitness evolution. The evolution of the average fitness on these six problems is shown in Figs. 4a–f, 5a–f and 6a–f, respectively. Note that the vertical axis shows the logarithm of the average fitness. It is clearly seen from these figures that the PSO-API and LPSO-API algorithms obtain better solutions with a fast convergence speed.
Comparison experiments with other PSO variants
In the recent literature, various PSO algorithms have been developed that perform well in numerical experiments. For comparison, eight PSO variants [PSO-cf (Kennedy and Mendes 2002), FIPS (Mendes et al. 2004), HPSO-TVAC (Ratnaweera et al. 2004), VPSO (Kennedy and Mendes 2006), DMS-PSO (Liang and Suganthan 2005), CLPSO (Liang et al. 2006) and APSO (Zhan et al. 2009)] are introduced to optimize ten benchmark functions, namely f _{1}(x), f _{2}(x), f _{3}(x), f _{5}(x), f _{6}(x), f _{7}(x), f _{8}(x), f _{9}(x), f _{10}(x) and f _{12}(x) in section "Test benchmark functions". Table 2 shows their parameter settings, and their results are taken from the corresponding paper (Zhan et al. 2009). The number of generations is 2 × 10^{5}, the dimension is 30 and the population size is 20. All the problems are optimized 30 times. The parameter settings of PSO-API and all other settings are identical to those in the last section. The comparisons of these PSO algorithms are shown in Table 3 in terms of average best fitness (Best), standard deviation (SD), rank (Rank), average rank (AR) and final rank (FR) of average best fitness. Note that the best results for each benchmark function are marked in bold in Table 3.
From Table 3, the Rank data demonstrate that the PSO-API algorithm obtains the best results on f _{1}(x), f _{2}(x), f _{3}(x), f _{5}(x), f _{7}(x), f _{8}(x), f _{9}(x) and f _{10}(x) and performs worst on f _{6}(x) and f _{12}(x). Table 3 also shows that the FR obtained by the PSO-API algorithm is better than those obtained by the other eight PSO variants. It can be concluded that the PSO-API algorithm has the highest comprehensive performance among them. Consequently, the comparisons indicate that the PSO-API algorithm has the best overall performance over several existing PSO variants and is an effective method for solving a variety of optimization problems.
The time complexity of an algorithm should also be considered, so a computational experiment with six PSO variants [PSO-cf (Kennedy and Mendes 2002), FIPS (Mendes et al. 2004), DMS-PSO (Liang and Suganthan 2005), CLPSO (Liang et al. 2006), LPSO (Shi and Eberhart 1998c) and PSO-API] is performed over 20 independent runs and the execution times of these algorithms are compared. In the experiment, the parameter settings of these algorithms are the same as in Table 2. The population size, dimension and number of generations are 20, 30 and 3000, respectively. Table 4 lists the CPU times (in seconds) of the six PSO algorithms. In Table 4, 'AV(CPU)' and 'Rank' stand for the average CPU time over 20 runs and the ascending order of each 'AV(CPU)', respectively; 'AR' and 'FR' stand for the average rank of Rank and the ascending order of AR, respectively.
From the Ranks of the LPSO and PSO-API algorithms, we can conclude that adding the proposed strategy to the original PSO increases the computational time. In Table 4, AR reflects the comprehensive time-consumption order of each algorithm over the twenty benchmarks. From Table 4, the values of AR for PSO-cf and LPSO are the smallest among all six algorithms, which illustrates that PSO-cf and LPSO, which are faster than our proposed algorithm, have the best CPU times. The values of 'AR' for the PSO-API algorithm and CLPSO are very close to each other, which demonstrates that they have similar overall time consumption, while two of the compared algorithms have 'AR' values of 4.7 and 5.45, both worse than that of the PSO-API algorithm. From the value of 'FR', although the PSO-API algorithm only ranks fourth, the extra time spent to improve the accuracy of the PSO algorithm is worthwhile. It is clear from the above comparisons of accuracy and time consumption that the PSO-API algorithm strikes a good overall balance between performance and time complexity.
Comparison experiments with similar PSO algorithms
In order to compare with FSS (Carmelo Filho et al. 2008) and CenterPSO (Liu et al. 2007), several experiments are carried out in this section. For the comparison with the FSS algorithm, the experimental settings are as follows: five benchmarks with 30 dimensions are used to assess the algorithms. In detail, the Generalized Rosenbrock Function and f _{3}(x), f _{7}(x), f _{9}(x), f _{10}(x) from section "Test benchmark functions" are used. The Generalized Rosenbrock Function is denoted as f _{21}(x) and its expression is as follows.
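In its standard form,

$$f_{21} (x) = \sum\limits_{i = 1}^{n - 1} {\left[ {100\left( {x_{i + 1} - x_{i}^{2} } \right)^{2} + \left( {x_{i} - 1} \right)^{2} } \right]}$$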
The population size of PSOAPI is set to 30; 30 runs are conducted for each problem, and each run performs 1 × 10^{4} generations. For the comparison with the CenterPSO algorithm, the experimental settings are as follows: three benchmarks f _{7}(x), f _{10}(x) and f _{21}(x) with 30 dimensions are used, the number of generations is 2000, and four population sizes of 20, 40, 80 and 160 are tested; each experiment performs 100 runs. The average best fitness (Avg. best fitness) and standard deviation (SD) of PSOAPI, FSS and CenterPSO are presented in Tables 5 and 6, where the better results are marked in bold.
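The reported statistics are computed per benchmark from the best fitness of each independent run; a minimal sketch (the function name is ours, not from the paper):

```python
import statistics

def summarize_runs(best_fitnesses):
    """Avg. best fitness and sample standard deviation (SD) over independent runs."""
    return statistics.mean(best_fitnesses), statistics.stdev(best_fitnesses)
```

For instance, 30 (or 100) per-run best fitness values for one benchmark yield one (Avg. best fitness, SD) pair in Tables 5 and 6.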
From the data in Table 5, it can be seen that PSOAPI obtains a better average best fitness and standard deviation than the FSS algorithm on all five benchmarks except the Generalized Rosenbrock Function, for which the two algorithms obtain results of the same order of magnitude. From Table 6, the results obtained by PSOAPI with all population sizes are better than those obtained by the CenterPSO algorithm on all three benchmarks. The statistical analysis therefore indicates that the proposed algorithm outperforms both the FSS and CenterPSO algorithms. Taken together, the above experiments indicate that PSOAPI is a high-performance algorithm on most of the benchmarks.
Conclusions
In this work, to make full use of the multi-information characteristics of all personal-best information, an improved PSO algorithm using three positions derived from all personal-best information has been developed to enhance performance. In the proposed algorithm, an improved cognition term using the personal-best position, the centroid position and the median position is introduced into the velocity update of PSO. To validate this strategy, a set of benchmark functions including unimodal, multimodal, rotated and shifted functions with 20, 30 and 50 dimensions has been optimized. Experimental results show that exploiting the multi-information characteristics of all personal-best information is a valid strategy for improving PSO’s performance. Moreover, the PSOAPI algorithm has also been compared with several PSO variants and with algorithms similar to the proposed one. Numerical results show that PSOAPI has higher precision and satisfactory performance. In summary, the proposed strategy enhances the search ability of PSO, and PSOAPI is an efficient PSO variant that obtains promising solutions for most benchmark functions.
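The velocity update summarized above can be sketched in Python. The equal weighting of the three positions and the coefficient values (w, c1, c2) are our assumptions for illustration, not the paper’s exact formulation:

```python
import numpy as np

def velocity_update(v, x, pbest_i, all_pbests, gbest,
                    w=0.729, c1=1.494, c2=1.494, rng=np.random):
    """One velocity update with an improved cognition term built from three
    positions derived from all personal-best information. Averaging the three
    positions equally is an assumption for this sketch."""
    centroid = np.mean(all_pbests, axis=0)        # centroid of all pbest positions
    median = np.median(all_pbests, axis=0)        # coordinate-wise median of pbests
    cognition_target = (pbest_i + centroid + median) / 3.0
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    return (w * v
            + c1 * r1 * (cognition_target - x)    # improved cognition term
            + c2 * r2 * (gbest - x))              # social term
```

The centroid and median aggregate information from every particle’s personal best, so the cognition term is pulled toward regions supported by the whole swarm’s history rather than by one particle alone.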
References
Beheshti Z, Shamsuddin SMH, Hasan S (2013) MPSO: median-oriented particle swarm optimization. Appl Math Comput 219(11):5817–5836
Beielstein T, Parsopoulos KE, Vrahatis MN (2002) Tuning PSO parameters through sensitivity analysis. Universität Dortmund
Bonyadi MR, Li X, Michalewicz Z (2014) A hybrid particle swarm with a time-adaptive topology for constrained optimization. Swarm Evol Comput 18:22–37
Carmelo Filho JA, De Lima Neto FB, Lins AJCC et al (2008) A novel search algorithm based on fish school behavior. In: Proceedings of the 2008 IEEE international conference on systems, man and cybernetics, pp 2646–2651
Cheng R, Jin Y (2015) A social learning particle swarm optimization algorithm for scalable optimization. Inf Sci 291:43–60
Deep K, Thakur M (2007) A new crossover operator for real coded genetic algorithms. Appl Math Comput 188(1):895–911
Eberhart RC, Kennedy J (1995) A new optimizer using particle swarm theory. In: Proceedings of the sixth international symposium on micro machine and human science, vol 1, pp 39–43
Haklı H, Uğuz H (2014) A novel particle swarm optimization algorithm with Levy flight. Appl Soft Comput 23:333–345
Hu W, Yen GG (2015) Adaptive multiobjective particle swarm optimization based on parallel cell coordinate system. IEEE Trans Evol Comput 19(1):1–18
Kennedy J, Eberhart R (1995) Particle swarm optimization. In: Proceedings of IEEE international conference on neural networks, Perth, Australia, vol 4, pp 1942–1948
Kennedy J, Mendes R (2002) Population structure and particle swarm performance. In: Proceedings of the 2002 congress on evolutionary computation, vol 2, pp 1671–1676
Kennedy J, Mendes R (2006) Neighborhood topologies in fully informed and best-of-neighborhood particle swarms. IEEE Trans Syst Man Cybern C Appl Rev 36(4):515–519
Li Y, Zhan ZH, Lin S et al (2015) Competitive and cooperative particle swarm optimization with information sharing mechanism for global optimization problems. Inf Sci 293:370–382
Liang JJ, Suganthan PN (2005) Dynamic multi-swarm particle swarm optimizer. In: Proceedings of the 2005 congress on swarm intelligence symposium, pp 124–129
Liang JJ, Qin AK, Suganthan PN et al (2006) Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Trans Evol Comput 10(3):281–295
Lim WH, Isa NAM (2014a) An adaptive two-layer particle swarm optimization with elitist learning strategy. Inf Sci 273:49–72
Lim WH, Isa NAM (2014b) Particle swarm optimization with adaptive time-varying topology connectivity. Appl Soft Comput 24:623–642
Lim WH, Isa NAM (2014c) Particle swarm optimization with increasing topology connectivity. Eng Appl Artif Intell 27:80–102
Lim WH, Isa NAM (2014d) Bidirectional teaching and peer-learning particle swarm optimization. Inf Sci 280:111–134
Lim WH, Isa NAM (2014e) Teaching and peer-learning particle swarm optimization. Appl Soft Comput 18:39–58
Liu Y, Qin Z, Shi Z et al (2007) Center particle swarm optimization. Neurocomputing 70(4):672–679
Mendes R, Kennedy J, Neves J (2004) The fully informed particle swarm: simpler, maybe better. IEEE Trans Evol Comput 8(3):204–210
Qin Q, Cheng S, Zhang Q et al (2015) Multiple strategies based orthogonal design particle swarm optimizer for numerical optimization. Comput Oper Res 60:91–110
Rao RV, Patel V (2013) An improved teaching-learning-based optimization algorithm for solving unconstrained optimization problems. Scientia Iranica 20(3):710–720
Ratnaweera A, Halgamuge SK, Watson HC (2004) Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients. IEEE Trans Evol Comput 8(3):240–255
Shi Y, Eberhart RC (1998a) Parameter selection in particle swarm optimization. Evolutionary programming VII, vol 1447. Springer, Berlin, pp 591–600
Shi Y, Eberhart R (1998b) A modified particle swarm optimizer. In: Proceedings of the 1998 IEEE international conference on evolutionary computation, vol 6, pp 69–73
Shi Y, Eberhart R (1998c) A modified particle swarm optimizer. In: IEEE international conference on evolutionary computation, the 1998 IEEE international conference on computational intelligence, pp 69–73
Shi Y, Eberhart RC (1999) Empirical study of particle swarm optimization. Proc IEEE Congr Evol Comput 3:1945–1950
Suganthan PN, Hansen N, Liang JJ et al (2005) Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization. In: Proceedings of IEEE congress on evolutionary computation, pp 1–50
Sun S, Li J (2014) A two-swarm cooperative particle swarms optimization. Swarm Evol Comput 15:1–18
Wang L, Yang B, Chen Y (2014) Improving particle swarm optimization using multi-layer searching strategy. Inf Sci 274:70–94
Yadav A, Deep K (2014) An efficient co-swarm particle swarm optimization for nonlinear constrained optimization. J Comput Sci 5(2):258–268
Yao X, Liu Y, Lin G (1999) Evolutionary programming made faster. IEEE Trans Evol Comput 3(2):82–102
Zhan ZH, Zhang J, Li Y et al (2009) Adaptive particle swarm optimization. IEEE Trans Syst Man Cybern B Cybern 39(6):1362–1381
Zhang W, Ma D, Wei J et al (2014) A parameter selection strategy for particle swarm optimization based on particle positions. Expert Syst Appl 41(7):3576–3584
Authors’ contributions
S.H. carried out the study, collected the data, designed the experiments, implemented the simulation, analyzed the data and wrote the main manuscript. N.T. provided intellectual input and revised the manuscript. Y.W. gave technical support and helped with the design of the study. Z.J. provided general supervision of the research. All authors read and approved the final manuscript.
Competing interests
The authors declare that they have no competing interests in this study.
Funding
In this work, the design of the study and the collection, analysis and interpretation of data were funded by the National High-tech Research and Development Projects of China under Grant No. 2014AA041505, and the writing of the manuscript was funded by the National Natural Science Foundation of China under Grant No. 61572238 and by the Provincial Outstanding Youth Foundation of Jiangsu Province under Grant No. BK20160001.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Keywords
 Premature convergence
 Intelligence algorithm
 Particle swarm optimization
 Personal-best position