
Particle swarm optimization using multi-information characteristics of all personal-best information

Abstract

Convergence stagnation is the chief difficulty that most particle swarm optimization variants face when solving hard optimization problems. To address this issue, a novel particle swarm optimization using multi-information characteristics of all personal-best information is developed in our research. In the modified algorithm, two positions are defined from the personal-best positions, and an improved cognition term built from three positions carrying all personal-best information is used in the velocity update equation to enhance the search capability. This strategy makes particles fly in a better direction by extracting useful information from all personal-best positions. The validity of the proposed algorithm is assessed on twenty benchmark problems including unimodal, multimodal, rotated and shifted functions, and the results are compared with those obtained by published variants of particle swarm optimization from the literature. Computational results demonstrate that the proposed algorithm finds the global optimum on several problems and high-quality solutions in most cases, with a fast convergence speed.

Background

Particle swarm optimization (PSO) is a bio-inspired optimization algorithm introduced by Eberhart and Kennedy (1995), inspired by the interaction and communication within bird flocks and fish schools. PSO has attracted a great deal of attention for high-dimensional nonlinear optimization problems due to its computational efficiency and simple implementation. With the development of intelligent manufacturing and complex systems, many engineering problems are becoming increasingly difficult to optimize, and thus time-consuming computation and premature convergence often occur during the optimization process. Therefore, many PSO variants with new techniques have been proposed to address these problems.

Some researchers have focused on the three control parameters, namely the two acceleration coefficients and the inertia weight, to develop PSO variants (Beielstein et al. 2002; Zhang et al. 2014; Shi and Eberhart 1998a, b). In Shi and Eberhart (1998b), a linearly decreasing inertia weight particle swarm optimization (LPSO) was developed, and the introduction of this dynamic inertia weight substantially strengthened the performance of the PSO algorithm. In more recent research, multiple-swarm and multiple-layer strategies have proved effective in improving the performance of PSO (Sun and Li 2014; Yadav and Deep 2014; Lim and Isa 2014a; Wang et al. 2014). Sun and Li presented a two-swarm cooperative particle swarm optimization (TCPSO) with a slave swarm and a master swarm for optimization problems in large-scale search spaces (Sun and Li 2014), and two subswarms based on shrinking hypersphere PSO (SHPSO) and DE were combined in a co-swarm PSO for constrained optimization problems (Yadav and Deep 2014). Multiple-layer strategies, such as the adaptive two-layer particle swarm optimization algorithm with elitist learning strategy (ATLPSO-ELS) (Lim and Isa 2014a) and multi-layer particle swarm optimization (MLPSO) (Wang et al. 2014), have also been used to solve complex problems. PSO with different topologies has different exploration/exploitation abilities and performance (Bonyadi et al. 2014; Lim and Isa 2014b, c), and many new topology strategies [time-adaptive topology (Bonyadi et al. 2014), adaptive time-varying topology connectivity (Lim and Isa 2014b), increasing topology connectivity (Lim and Isa 2014c)] have been applied to PSO. Compared with a fully connected or regular topology, these topologies lead to a different optimization process. In recent years, new techniques such as Levy flight (Haklı and Uğuz 2014), the parallel cell coordinate system (Hu and Yen 2015), competitive and cooperative mechanisms (Li et al. 2015) and orthogonal design (Qin et al. 2015) have also been adopted in PSO.

Many learning strategies have been introduced into PSO to enhance its adaptability to complex optimization problems, since the learning behavior of social animals plays a key role in their adaptation to changing environments (Cheng and Jin 2015; Rao and Patel 2013; Lim and Isa 2014d, e; Shi and Eberhart 1999). Cheng and Jin presented a modified particle swarm optimization using a social learning mechanism (SL-PSO) (Cheng and Jin 2015), and the concepts of teachers, tutorial training and self-motivated learning were introduced by Rao and Patel in a teaching-learning-based algorithm to enhance performance (Rao and Patel 2013). Using teaching and peer-learning behaviors, a bidirectional teaching and peer-learning PSO (BTPLPSO) (Lim and Isa 2014d) and a two-learning-phase PSO (TPLPSO) (Lim and Isa 2014e) were both proposed by Lim and Isa.

Communication and learning are distinguishing features of social animals and improve the efficiency of the group, and information-sharing mechanisms play a key role in this behavior. To share personal-best information fairly, a particle swarm optimizer using multi-information characteristics of all personal-best information is developed in this paper. In the proposed PSO, two representative positions, which capture the features of all personal-best positions, are defined to acquire the information of all personal-best positions. The cognition term in the velocity update equation is then formed from three positions. Because all personal-best fitnesses are taken into account, each particle updates its velocity and position according to the distribution of personal-best fitnesses. This strategy makes full use of all personal-best information and can correct erroneously guided directions of the personal-best positions.

The rest of the paper is structured as follows. Section “Particle swarm optimizer” presents the theory and formulation of the PSO algorithm and the linearly decreasing inertia weight. Section “Particle swarm optimization using all personal-best information” describes the two representative positions in detail and presents the proposed PSO using multi-information characteristics of all personal-best positions. Numerical results and statistical analysis are given in section “Experiments and results”, and section “Conclusions” concludes the paper.

Particle swarm optimizer

Velocity and position formulation

The particle swarm optimizer is inspired by the foraging behavior of fish and birds, which is simplified into a swarm of particles that mimics their key behaviors. As a swarm of n particles searches the feasible space, each particle’s position represents a potential solution of the optimization problem, and the swarm finds high-quality solutions as the particles update their velocities and positions. Assuming the decision vector has m variables, the position and velocity of particle i are represented by the m-dimensional vectors \(\varvec{x}_{i} = (x_{i1}, x_{i2}, \ldots, x_{im})\) and \(\varvec{v}_{i} = (v_{i1}, v_{i2}, \ldots, v_{im})\). Two positions, named the personal-best position and the global-best position, are defined in PSO to update the velocities and guide the swarm. The personal-best position of particle i is denoted as \(\varvec{p}_{{\text{best}},i} = (p_{{\text{best}},i1}, p_{{\text{best}},i2}, \ldots, p_{{\text{best}},im})\) and the global-best position of the swarm is denoted as \(\varvec{g}_{\text{best}} = (g_{{\text{best}},1}, g_{{\text{best}},2}, \ldots, g_{{\text{best}},m})\). The velocity \(\varvec{v}_{i}^{t + 1}\) and the position \(\varvec{x}_{i}^{t + 1}\) of particle i are then given by Eqs. (1) and (2).

$$\varvec{v}_{i}^{t + 1} = \omega \varvec{v}_{i}^{t} + c_{1} r_{1} \left( {\varvec{p}_{{{\text{best}},i}}^{t} - \varvec{x}_{i}^{t} } \right) + c_{2} r_{2} \left( {\varvec{g}_{\text{best}}^{t} - \varvec{x}_{i}^{t} } \right)$$
(1)
$$\varvec{x}_{i}^{t + 1} = \varvec{x}_{i}^{t} + \varvec{v}_{i}^{t + 1}$$
(2)

where \(c_{1}\) and \(c_{2}\) are the cognitive and social factors, ω is the inertia weight, \(r_{1}\) and \(r_{2}\) are two random real numbers uniformly distributed in (0, 1), and t is the current generation. According to the theory of PSO, the personal experience and the global experience pull each particle toward them so that it moves to a new, promising position.
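To make the update concrete, the following Python/NumPy sketch (our illustration; the paper reports a MATLAB implementation) applies Eqs. (1) and (2) to a whole swarm at once. The array shapes and the default parameter values are assumptions chosen only for the example.

```python
import numpy as np

def pso_step(x, v, p_best, g_best, omega=0.7, c1=2.0, c2=2.0, rng=None):
    """One PSO iteration for a swarm stored as (n, m) arrays: n particles, m dimensions."""
    rng = rng or np.random.default_rng()
    r1 = rng.random(x.shape)                # r1, r2 ~ U(0, 1), drawn element-wise
    r2 = rng.random(x.shape)
    v_new = omega * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)   # Eq. (1)
    x_new = x + v_new                                                     # Eq. (2)
    return x_new, v_new
```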

Linearly decreasing inertia weight

Appropriate selection of the inertia weight balances global exploration and local exploitation during the evolution process: a large ω benefits global search, while a small value contributes to local exploitation. The linearly decreasing inertia weight adopted in PSO (LPSO) significantly improves the performance of PSO on various optimization problems, and the inertia weight ω is given by Eq. (3):

$$\omega = \omega_{\max} - (\omega_{\max} - \omega_{\min})\frac{t}{T}$$
(3)

where \(T\) is the maximal generation, and \(\omega_{\min}\) and \(\omega_{\max}\) are the lower and upper limits of the inertia weight. Numerical experiments have illustrated the impact of ω, and values of 0.9 (upper limit) and 0.4 (lower limit) are suggested (Shi and Eberhart 1999).

Based on the above description, the pseudo-code of LPSO is shown in Algorithm 1.

(Algorithm 1: pseudo-code of the LPSO algorithm)
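Since Algorithm 1 is only available as an image in the source, the following is a hedged Python reconstruction of the loop it describes, combining Eqs. (1)–(3) with personal-best and global-best bookkeeping. The objective function `f`, the bounds `lb`/`ub` and the clipping of positions to the search range are assumptions of this sketch, not details prescribed by the paper.

```python
import numpy as np

def lpso(f, lb, ub, n=30, T=5000, c1=2.0, c2=2.0, w_max=0.9, w_min=0.4, seed=0):
    """Linearly-decreasing-inertia-weight PSO; f maps an m-vector to a scalar fitness."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    m = lb.size
    x = rng.uniform(lb, ub, (n, m))                  # random initial positions
    v = np.zeros((n, m))                             # zero initial velocities
    p_best, p_fit = x.copy(), np.apply_along_axis(f, 1, x)
    g_best = p_best[p_fit.argmin()].copy()
    for t in range(T):
        w = w_max - (w_max - w_min) * t / T          # Eq. (3)
        r1, r2 = rng.random((n, m)), rng.random((n, m))
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)   # Eq. (1)
        x = np.clip(x + v, lb, ub)                   # Eq. (2), kept inside the search range
        fit = np.apply_along_axis(f, 1, x)
        better = fit < p_fit                         # update personal bests ...
        p_best[better], p_fit[better] = x[better], fit[better]
        g_best = p_best[p_fit.argmin()].copy()       # ... and the global best
    return g_best, p_fit.min()
```

For instance, `lpso(lambda x: np.sum(x**2), lb=-100*np.ones(20), ub=100*np.ones(20))` would minimize the Sphere function of the benchmark suite; the bounds here are illustrative.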

Particle swarm optimization using all personal-best information

Analysis of personal-best information

Learning is a special skill of social animals, which can share information with other members of their group. The cooperative behavior of a swarm is more efficient than an individual acting alone because of the rich information exchanged through communication. In PSO, each particle provides its personal-best position to guide its flying direction. Taken together, the personal-best positions of the swarm describe the distribution of good-fitness information. Taking full advantage of the multi-information characteristics of all personal-best information helps the swarm ignore the erroneous information of particles trapped in local optima. In standard PSO, a personal-best position is used only by its own particle during the evolutionary process, so it does not reflect the influence of the fitness distribution over the landscape. Misguided information in personal-best positions, which has no opportunity to be corrected, makes PSO converge prematurely. Therefore, two positions that incorporate the distribution of personal-best fitnesses are defined to strengthen a particle’s ability to learn from other particles’ experience. A cognition term built from three personal-best-related positions is then used in the velocity update equation to reduce the chance of misguidance. The details of the improved cognition term and the proposed PSO algorithm are as follows.

Detail of improved PSO algorithm

Step 1 Calculate the fitnesses of all personal-best positions, and then find the minimal fitness and the maximal fitness among them:

$$f_{\rm min } = \hbox{min} \{ f(\varvec{p}_{{{\text{best}},i}} )|i = 1,2, \ldots ,{\text{n}}\}$$
(4)
$$f_{\rm max } = \hbox{max} \{ f(\varvec{p}_{{{\text{best}},i}} ) |i = 1,2, \ldots ,{\text{n}}\}$$
(5)

where \(f_{\min}\) and \(f_{\max}\) stand for the minimal and maximal fitness of the personal-best positions, and f denotes the fitness function.

Step 2 Normalization method of personal-best fitness.

As fitness values vary over a wide range across optimization problems, a robust way to reflect the influence of fitness is to normalize the personal-best fitnesses. For a minimization problem, the smaller the fitness value, the stronger the influence of the corresponding personal-best position. According to this feature, the normalization is given by Eq. (6).

$$r_{i} = \frac{{f_{\rm max } - f(\varvec{p}_{{{\text{best}},i}} )}}{{f_{\rm max } - f_{\rm min } }}$$
(6)

where \(r_{i}\) stands for the normalized value of the ith personal-best fitness.

Step 3 After normalizing the personal-best fitnesses, their proportions are also required. The proportion of the ith personal-best fitness is denoted as \(\theta_{i}\) and is obtained from the normalized values as follows:

$$\theta_{i} = \begin{cases} r_{i} \big/ \sum\nolimits_{j = 1}^{n} r_{j} & \quad \text{if } f_{\max} \ne f_{\min} \\ 1/n & \quad \text{otherwise} \end{cases}$$
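A small Python sketch of Steps 1–3 (an illustration, not the authors’ code): the personal-best fitnesses are reduced to the weights θ via Eqs. (4)–(6), with the uniform fallback used when all fitnesses are equal; minimization is assumed throughout.

```python
import numpy as np

def pbest_weights(p_fit):
    """p_fit: length-n array of personal-best fitnesses; returns theta, which sums to 1."""
    f_min, f_max = p_fit.min(), p_fit.max()        # Eqs. (4)-(5)
    if f_max == f_min:                             # degenerate swarm: uniform proportions
        return np.full(p_fit.size, 1.0 / p_fit.size)
    r = (f_max - p_fit) / (f_max - f_min)          # Eq. (6): smaller fitness -> larger r_i
    return r / r.sum()                             # proportions theta_i
```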

Step 4 Calculate centroid position \(\varvec{p}_{\text{centr}}\) of all personal-best positions:

$$p_{{{\text{centr}},j}} = \sum\limits_{i = 1}^{n} {p_{{{\text{best}},ij}} } \theta_{i}$$
(7)

The centroid position is defined as the θ-weighted sum of the personal-best positions to reflect the influence of the personal-best fitnesses. Similar to the relation between density and mass in physics, if the personal-best fitness is regarded as ‘the density of an object’ and the personal-best position as ‘the location within the object’, then \(\varvec{p}_{\text{centr}}\) can be seen as ‘the centroid of the object’. The centroid of an object is an important indicator of the distribution of its mass, and thus \(\varvec{p}_{\text{centr}}\) reflects the distribution of high-quality fitnesses: it is always close to the area where most good fitnesses are located.
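In code, Eq. (7) is a single weighted sum; the sketch below (continuing the illustration above) assumes `p_best` is an (n, m) array of personal-best positions and `theta` the weight vector just computed.

```python
import numpy as np

def centroid_position(p_best, theta):
    """Eq. (7): p_centr[j] = sum_i theta[i] * p_best[i, j]."""
    return theta @ p_best
```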

Step 5 Calculate median position \(\varvec{p}_{\text{med}}\) of all personal-best positions.

\(\varvec{p}_{\text{med}}\) is the position associated with the median personal-best fitness and reflects the distribution of high-quality fitnesses from another perspective. Since \(\varvec{p}_{\text{med}}\) is obtained without a weighted sum, it avoids the influence of badly placed personal-best positions. Algorithm 2 gives the pseudo-code for finding the median fitness \(\theta_{\text{med}}\) and the median position \(\varvec{p}_{\text{med}}\).

(Algorithm 2: pseudo-code for finding the median fitness \(\theta_{\text{med}}\) and the median position \(\varvec{p}_{\text{med}}\))
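Algorithm 2 is likewise only available as an image, so the sketch below is a guess at its intent based on the surrounding text: it returns the personal-best position whose weight \(\theta_{i}\) is the median of the sorted weights (which, by Eq. (6), corresponds to the median personal-best fitness).

```python
import numpy as np

def median_position(p_best, theta):
    """Return p_med and theta_med, the personal best holding the median weight."""
    order = np.argsort(theta)            # sort particles by their weight theta_i
    mid = order[theta.size // 2]         # index of the (upper) median weight
    return p_best[mid], theta[mid]
```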

Step 6 Cognitive guiding position \({\mathbf{p}}_{\text{best}}^{\prime }\).

In the proposed PSO, the cognitive guiding position is calculated from the positions defined above according to Eq. (8):

$$\varvec{p}_{{{\text{best}},i}}^{\prime } = \frac{{\varvec{p}_{{{\text{best}},i}} + \varvec{p}_{\text{centr}} - \varvec{p}_{\text{med}} }}{2}$$
(8)

The cognitive guiding position combines three positions: the personal-best position \({\mathbf{p}}_{\text{best}}\), the centroid position \(\varvec{p}_{\text{centr}}\) and the median position \(\varvec{p}_{\text{med}}\). \({\mathbf{p}}_{\text{best}}\) and the difference \(\varvec{p}_{\text{centr}} - \varvec{p}_{\text{med}}\) are used to ‘pull’ a particle out of a local optimum, because erroneous information in \({\mathbf{p}}_{\text{best}}\) and \(\varvec{g}_{\text{best}}\) may accelerate premature convergence. \(\varvec{p}_{\text{centr}}\) and \(\varvec{p}_{\text{med}}\) carry information from all personal bests and can guide particles in a better direction. The empirical coefficient of 1/2 makes the cognitive guiding position suitable for the improved cognition term.

Step 7 Improved cognition term \({\mathbf{a}}_{\text{cog}}\).

$$\varvec{a}_{{\text{cog}},i} = \sum\limits_{j = 1}^{n} {\varvec{p}_{{\text{best}},j}^{\prime } \theta_{j} } - \varvec{x}_{i}$$
(9)

The improved cognition term \({\mathbf{a}}_{\text{cog}}\) will make full use of all personal-best fitnesses.
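Steps 6 and 7 translate directly into two array operations; the sketch below reuses the helpers above and is, again, only an illustration of Eqs. (8) and (9).

```python
import numpy as np

def cognition_term(x, p_best, theta, p_centr, p_med):
    """a_cog[i] = sum_j theta[j] * p'_best[j] - x[i], with p'_best from Eq. (8)."""
    p_guide = (p_best + p_centr - p_med) / 2.0     # Eq. (8), applied row-wise
    return theta @ p_guide - x                     # Eq. (9)
```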

Step 8 Modified velocity update equation.

In this step, the original cognition term in the velocity update equation of the PSO and LPSO algorithms is replaced by the improved cognition term \({\mathbf{a}}_{\text{cog}}\). The particle swarm optimizer using multi-information characteristics of all personal-best information (PSO-API) and its linearly-decreasing-inertia-weight version (LPSO-API) are obtained with this modified velocity update equation. Taking LPSO-API as an example, each particle’s velocity is updated as in Eq. (10).

$$\varvec{v}_{i}^{t + 1} = \omega \varvec{v}_{i}^{t} + r_{1} \cdot \varvec{a}_{{{\text{cog}},i}}^{t} + c \cdot r_{2} \cdot \left( {\varvec{g}_{\text{best}}^{t} - \varvec{x}_{i}^{t} } \right)$$
(10)

Leaving aside the influence of the current velocity and the coefficients, four positions (\({\mathbf{p}}_{\text{best}}\), \(\varvec{g}_{\text{best}}\), \(\varvec{p}_{\text{centr}}\) and \(\varvec{p}_{\text{med}}\)) influence the velocity update in Eq. (10). In PSO, \(\Delta \varvec{v}^{\prime } = \varvec{g}_{\text{best}} + \varvec{p}_{{\text{best}},i}\) is introduced to show the combined influence of \(\varvec{g}_{\text{best}}\) and \(\varvec{p}_{{\text{best}},i}\). As illustrated in Fig. 1b, if the current \(\varvec{g}_{\text{best}}\) is a local optimum, \(\Delta \varvec{v}^{\prime }\) accelerates the particles’ fall into the local-optimum region. Compared with PSO, \(\varvec{p}_{\text{centr}}\) and \(\varvec{p}_{\text{med}}\) are added to the velocity update equation in PSO-API. In Fig. 1a, the white circles represent personal-best positions with worse fitnesses and the grey circles represent personal-best positions with better fitnesses. Given this distribution of points, the locations of \(\varvec{p}_{\text{centr}}\) and \(\varvec{p}_{\text{med}}\), calculated by Eq. (7) and Algorithm 2, are shown as the yellow circles in Fig. 1. Being defined by all personal-best positions and their fitnesses, \(\varvec{p}_{\text{centr}}\) lies closer to the region containing many personal-best positions with good fitnesses. Although the fitness around the real global-best position is worse than that of the local optimum \(\varvec{g}_{\text{best}}\), the personal-best positions are also prone to gather at the good-fitness positions around the real global-best position, marked by the black rhombic point in Fig. 1a. Regarding \(\varvec{p}_{\text{centr}}\) as a reference point, \(\Delta \varvec{v}^{\prime \prime } = \varvec{p}_{{\text{best}},i} - \varvec{p}_{\text{med}}\), which carries all personal-best information, represents the influence of the good-fitness distribution. As illustrated in Fig. 1c, α denotes the direction adjustment produced by \(\Delta \varvec{v}^{\prime \prime }\), which makes particles turn toward the real global-best position. Being constantly adjusted by α during the search, the particles have a greater probability of flying to the real global-best position. Moreover, \(|\Delta \varvec{v}^{\prime \prime } |\) becomes small when the fitness distribution is nearly uniform, in which case it has little effect on the particles; only \({\mathbf{p}}_{\text{best}}\) and \(\varvec{g}_{\text{best}}\) then influence the particles’ trajectories, and PSO-API behaves the same as PSO. Therefore, three terms (\(\Delta \varvec{v}^{\prime \prime }\), \({\mathbf{p}}_{\text{best}}\) and \(\varvec{g}_{\text{best}}\)) contribute to adjusting the velocity, and their different ‘pull’ and ‘push’ influences give PSO-API a stable performance over a variety of problems. The flowchart of the LPSO-API algorithm is shown in Fig. 2.
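Putting the pieces together, one LPSO-API velocity-and-position update could look like the following sketch, assuming the helper functions pbest_weights, centroid_position, median_position and cognition_term from the earlier sketches are in scope; the parameter c = 2 and the linearly decreasing ω mirror the settings reported in the experiments, while everything else is an assumption of this illustration.

```python
import numpy as np

def lpso_api_step(x, v, p_best, p_fit, g_best, omega, c=2.0, rng=None):
    """One LPSO-API iteration: Eq. (10) followed by the position update of Eq. (2)."""
    rng = rng or np.random.default_rng()
    theta = pbest_weights(p_fit)                               # Steps 1-3
    p_centr = centroid_position(p_best, theta)                 # Step 4
    p_med, _ = median_position(p_best, theta)                  # Step 5
    a_cog = cognition_term(x, p_best, theta, p_centr, p_med)   # Steps 6-7
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v_new = omega * v + r1 * a_cog + c * r2 * (g_best - x)     # Eq. (10)
    return x + v_new, v_new
```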

Fig. 1

Influence of \(\varvec{p}_{\text{centr}}\) and \(\varvec{p}_{\text{med}}\) in the search process. a Distribution of \(\varvec{p}_{\text{centr}}\) and \(\varvec{p}_{\text{med}}\), b influence of all defined best positions, c adjustment of each velocity influenced by all best positions

Fig. 2

Flowchart of LPSO-API algorithm

Experiments and results

Test benchmark functions

In order to assess the performance of the proposed algorithm, twenty benchmark problems including unimodal, multimodal, rotated and shifted functions, selected from the literature (Deep and Thakur 2007; Liang et al. 2006; Suganthan et al. 2005; Yao et al. 1999), are used. Note that all problems are minimization problems with a single global optimum. The function names, dimensions, search ranges and global optimum values are listed in Table 1, and the formulations of the problems are listed below; a brief code sketch of two of them follows the list:

Table 1 Twenty benchmark problems
1. Sphere Function (unimodal function)

    $$f_{1} (x) = \sum\limits_{i = 1}^{n} {x_{i}^{2} }$$
2. Schwefel’s Problem 2.22 (unimodal function)

    $$f_{2} (x) = \sum\limits_{i = 1}^{n} {\left| {x_{i} } \right| + \mathop \prod \limits_{i = 1}^{n} \left| {x_{i} } \right|}$$
3. Schwefel’s Problem 1.2 (unimodal function)

    $$f_{3} (x) = \sum\limits_{i = 1}^{n} {\left( {\sum\limits_{j = 1}^{i} {x_{j} } } \right)^{2} }$$
4. Schwefel’s Problem 2.21 (unimodal function)

    $$f_{4} (x) = \mathop {\hbox{max} }\limits_{i} \{ \left. {\left| {x_{i} } \right|,1 \le i \le n} \right\}$$
5. Step Function (unimodal function)

    $$f_{5} (x) = \sum\limits_{i = 1}^{n} {\left( {\left\lfloor {x_{i} + 0.5} \right\rfloor } \right)}^{2}$$
6. Quartic Function, i.e. Noise (unimodal function)

    $$f_{6} (x) = \sum\limits_{i = 1}^{n} {ix_{i}^{4} } + random[0,1)$$
7. Generalized Rastrigin’s Function (multimodal function)

    $$f_{7} (x) = \sum\limits_{i = 1}^{n} {\left[ {x_{i}^{2} - 10\cos (2\pi x_{i} ) + 10} \right]}$$
8. Non-continuous Rastrigin’s Function (multimodal function)

    $$\begin{aligned} f_{8} (x) = \sum\limits_{i = 1}^{n} {\left[ {y_{i}^{2} - 10\cos (2\pi y_{i} ) + 10} \right]} \hfill \\ {\text{where}}\quad {\kern 1pt} y_{i} = \left\{ {\begin{array}{*{20}c} {x_{i} } \\ {\frac{{round(2x_{i} )}}{2}} \\ \end{array} } \right.\quad {\kern 1pt} \begin{array}{*{20}c} {\left| {x_{i} } \right| \le 0.5} \\ {\left| {x_{i} } \right| \ge 0.5} \\ \end{array} \hfill \\ \end{aligned}$$
9. Ackley’s Function (multimodal function)

    $$f_{9} (x) = - 20\exp \left( { - 0.2\sqrt {\frac{1}{n}\sum\limits_{i = 1}^{n} {x_{i}^{2} } } } \right) - \exp \left( {\frac{1}{n}\sum\limits_{i = 1}^{n} {\cos 2\pi x_{i} } } \right) + 20 + e$$
10. Generalized Griewank Function (multimodal function)

    $$f_{10} (x) = \frac{1}{4000}\sum\limits_{i = 1}^{n} {x_{i}^{2} } - \prod\limits_{i = 1}^{n} {\cos \left(\frac{{x_{i} }}{\sqrt i }\right)} + 1$$
11. Weierstrass Function (multimodal function)

    $$\begin{aligned} & f_{11} (x) = \sum\limits_{i = 1}^{n} {\left( {\sum\limits_{K = 0}^{k\rm{max} } {\left[ {a^{k} \cos \left( {2\pi b^{k} \left( {x_{i} + 0.5} \right)} \right)} \right]} } \right) - n} \sum\limits_{K = 0}^{k\rm{max} } {\left[ {a^{k} \cos \left( {2\pi b^{k} \times 0.5} \right)} \right]} \hfill \\&\quad {\text{where}}\quad {\kern 1pt} a = 0.5,\quad b = 3,\quad k\hbox{max} = 20 \hfill \\ \end{aligned}$$
12. Generalized Penalized Function (multimodal function)

    $$\begin{aligned}& f_{12} (x) = \tfrac{\pi }{n}\left\{ {10\sin \left( {\pi y_{1} } \right) + \sum\limits_{i = 1}^{n - 1} {\left( {y_{i} - 1} \right)^{2} \left[ {1 + 10\sin^{2} \left( {\pi y_{i + 1} } \right)} \right] + \left( {y_{n} - 1} \right)^{2} } } \right\} + \sum\limits_{i = 1}^{n} u (x_{i} ,10,100,4) \hfill \\&\quad y_{i} = 1 + \frac{{x_{i} + 1}}{4},u(x_{i} ,a,k,m) = \left\{ {\begin{array}{*{20}l} {k\left( {x_{i} - a} \right)^{m} } \\ 0 \\ {k\left( { - x_{i} - a} \right)^{m} } \\ \end{array} } \right.\begin{array}{*{20}c} {x_{i} > a} \\ { - a \le x_{i} \le a} \\ {x_{i} < a} \\ \end{array} \hfill \\ \end{aligned}$$
13. Cosine Mixture Problem (multimodal function)

    $$f_{13} (x) = \sum\limits_{i = 1}^{n} {x_{i}^{2} } - 0.1\sum\limits_{i = 1}^{n} {\cos \left( {5\pi x_{i} } \right)}$$
14. Rotated Rastrigin Function (multimodal function)

    $$f_{14} (x) = \sum\limits_{i = 1}^{n} {\left[ {y_{i}^{2} - 10\cos (2\pi y_{i} ) + 10} \right], \quad y = M \times x}$$
15. Rotated Salomon Function (multimodal function)

    $$f_{15} (x) = 1 - \cos \left( {2\pi \sqrt {\sum\limits_{i = 1}^{n} {y_{i}^{2} } } } \right) + 0.1\sqrt {\sum\limits_{i = 1}^{n} {y_{i}^{2} } }, \quad y = M \times x$$
16. Rotated Rosenbrock Function (multimodal function)

    $$f_{16} (x) = \sum\limits_{i = 1}^{n - 1} {\left[ {100\left( {y_{i}^{2} - y_{i + 1} } \right)^{2} + \left( {y_{i} - 1} \right)^{2} } \right], \quad y = M \times x}$$
17. Rotated Elliptic Function (unimodal function)

    $$f_{17} (x) = \sum\limits_{i = 1}^{n} {\left( {10^{6} } \right)^{{{{\left( {i - 1} \right)} \mathord{\left/ {\vphantom {{\left( {i - 1} \right)} {\left( {n - 1} \right)}}} \right. \kern-0pt} {\left( {n - 1} \right)}}}} y_{i}^{2}, \quad y = M \times x}$$
18. Shifted Schwefel’s Problem 2.21 (unimodal function)

    $$\begin{aligned} &f_{18} (x) = \mathop {\hbox{max} }\limits_{i} \left\{ {\left| {y_{i} } \right|,1 \le i \le n} \right\} + fbias_{18}, \quad y = x - o \hfill \\&\quad {\text{where}}\quad {\kern 1pt} fbias_{18} = - 450. \hfill \\ \end{aligned}$$
19. Shifted Rotated Ackley’s Function (multimodal function)

    $$\begin{aligned} & f_{19} (x) = - 20\exp \left( { - 0.2\sqrt {\frac{1}{n}\sum\limits_{i = 1}^{n} {z_{i}^{2} } } } \right) - \exp \left( {\frac{1}{n}\sum\limits_{i = 1}^{n} {\cos 2\pi z_{i} } } \right) + 20 + e + fbias_{19} \hfill \\& \quad{\text{where}}\quad {\kern 1pt} fbias_{19} = - 140, \quad z = \left( {x - o} \right) \times M^{\prime } \hfill \\ \end{aligned}$$
20. Shifted Rotated Weierstrass Function (multimodal function)

    $$\begin{aligned} &f_{20} (x) = \sum\limits_{i = 1}^{n} {\left( {\sum\limits_{K = 0}^{k\hbox{max} } {\left[ {a^{k} \cos \left( {2\pi b^{k} \left( {z_{i} + 0.5} \right)} \right)} \right]} } \right) - n} \sum\limits_{K = 0}^{k\hbox{max} } {\left[ {a^{k} \cos \left( {2\pi b^{k} \times 0.5} \right)} \right]} + fbias_{20} \hfill \\&\quad {\text{where}}\quad {\kern 1pt} a = 0.5,\quad b = 3,\quad k\hbox{max} = 20,\quad fbias_{20} = 90,\quad z = \left( {x - o} \right) \times M^{\prime } \hfill \\ \end{aligned}$$
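For reference, two of the benchmarks above are easy to sketch in Python: the Sphere function \(f_{1}(x)\) and the Generalized Rastrigin function \(f_{7}(x)\) are shown below (both are minimized, with optimum 0 at the origin). The search ranges noted in the comment are the commonly used ones; the exact ranges adopted in the paper are those of Table 1.

```python
import numpy as np

def sphere(x):                    # f1: unimodal
    return np.sum(x ** 2)

def rastrigin(x):                 # f7: highly multimodal
    return np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

# Typical usage with the LPSO sketch given earlier, using the commonly quoted
# ranges [-100, 100]^n and [-5.12, 5.12]^n:
# lpso(sphere, lb=-100 * np.ones(30), ub=100 * np.ones(30))
# lpso(rastrigin, lb=-5.12 * np.ones(30), ub=5.12 * np.ones(30))
```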

Experimental analysis

Validity of the proposed strategy

To validate the proposed strategy, the PSO-API and LPSO-API algorithms are implemented in MATLAB 2011a and compared with the PSO and LPSO algorithms on all twenty benchmarks. The parameter settings of the four algorithms are as follows: the population size is 30; \(c_{1}\) and \(c_{2}\) are both set to 2 in PSO and LPSO, and c is set to 2 in PSO-API and LPSO-API; ω is 0.7 in PSO and PSO-API, and follows the suggested linearly decreasing schedule of section “Linearly decreasing inertia weight” in LPSO and LPSO-API (Shi and Eberhart 1999). Problems with 20, 30 and 50 dimensions are adopted, the number of generations is 5000, and 20 independent trials are run on each problem. Tables 7, 9 and 11 in “Appendix” compare the results for 20, 30 and 50 dimensions in terms of average best fitness (AVE), rank of average best fitness (Rank), median best fitness (MED), standard deviation (SD), average rank (AR) and final rank (FR) of average best fitness.

Wilcoxon’s rank sum test is commonly used to determine whether two data sets are statistically different from each other; the test yields a p value (p), an h-value (h) and a z statistic (z). A significance level must be set, and a level of 0.05 means that an observed difference would arise by chance with probability below 5 %. The h-value takes only three values, 1, 0 and −1, indicating that the proposed algorithm performs significantly better than, the same as, or significantly worse than the compared algorithm, respectively (Beheshti et al. 2013). Tables 8, 10 and 12 in “Appendix” show the Wilcoxon rank sum test results for 20, 30 and 50 dimensions; in particular, their last three rows list how many times the h-value equals 1, 0 or −1. Note that the best results for each benchmark function are marked in bold in Tables 7–12.
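As an illustration of how such a table entry can be produced, the following sketch uses SciPy’s rank-sum test; the mapping from the test outcome to the 1/0/−1 h-value convention is our reconstruction from the description above, with the sign decided by which algorithm achieved the smaller (better) fitnesses.

```python
import numpy as np
from scipy.stats import ranksums

def h_value(fit_a, fit_b, alpha=0.05):
    """fit_a, fit_b: best fitnesses of algorithms A and B over repeated runs."""
    z, p = ranksums(fit_a, fit_b)          # Wilcoxon rank-sum statistic and p value
    if p >= alpha:
        return 0                           # no statistically significant difference
    return 1 if np.median(fit_a) < np.median(fit_b) else -1   # A better / worse than B
```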

From the results for 20, 30 and 50 dimensions in Tables 7, 9 and 11, it is clear that the LPSO-API algorithm obtains the minimum AVE on twelve, fourteen and fifteen of the twenty benchmark problems, respectively, and the PSO-API algorithm obtains the minimum AVE on ten, ten and eight of the twenty problems, respectively. Clearly, LPSO-API and PSO-API obtain more minimum AVE results than LPSO and PSO on the benchmark suite. It is worth pointing out that several global optima are also reached by LPSO-API and PSO-API. The numbers of best AVE values obtained by the four algorithms are shown in Fig. 3.

Fig. 3

Number of best AVE obtained by four algorithms

For all three dimensionalities, the final rank of the LPSO-API algorithm takes first place and that of the PSO-API algorithm is second. The final rank reflects the comprehensive performance of an algorithm over the suite of benchmark problems. From these ranks, it is clear that LPSO-API and PSO-API are superior to LPSO and PSO in finding high-quality solutions.

From the data in Tables 8, 10 and 12, the number of cases with h-value = 1 is 16, 17 and 17 for the PSO-API algorithm and 13, 14 and 17 for the LPSO-API algorithm on the 20-, 30- and 50-dimensional problems, while only a few cases have h-value = −1 or h-value = 0. This means that the results of LPSO-API and PSO-API statistically significantly outperform those of PSO and LPSO. Moreover, comparing the numbers of h-value = 1 across the 20-, 30- and 50-dimensional problems shows that the higher the dimension, the larger this number for LPSO-API and PSO-API, which indicates that, to some degree, the two algorithms perform better on high-dimensional problems than on low-dimensional ones. From the above analysis, the proposed strategy of using all personal-best information is valid and efficient for most optimization problems, especially in high dimensions.

Six representative benchmark problems, two unimodal problems \(f_{1}(x)\) and \(f_{5}(x)\), two multimodal problems \(f_{7}(x)\) and \(f_{11}(x)\), a rotated problem \(f_{14}(x)\) and a shifted problem \(f_{18}(x)\), are chosen to describe the fitness evolution. The evolution of the average fitness on these six problems is shown in Figs. 4a–f, 5a–f and 6a–f, respectively; note that the vertical axis shows the logarithm of the average fitness. It is clear from these figures that the PSO-API and LPSO-API algorithms obtain better solutions with a faster convergence speed.

Fig. 4

Evolution curves (20 dimensions). a \(f_{1}(x)\), b \(f_{5}(x)\), c \(f_{7}(x)\), d \(f_{11}(x)\), e \(f_{14}(x)\), f \(f_{18}(x)\)

Fig. 5

Evolution curves (30 dimensions). a \(f_{1}(x)\), b \(f_{5}(x)\), c \(f_{7}(x)\), d \(f_{11}(x)\), e \(f_{14}(x)\), f \(f_{18}(x)\)

Fig. 6

Evolution curves (50 dimensions). a \(f_{1}(x)\), b \(f_{5}(x)\), c \(f_{7}(x)\), d \(f_{11}(x)\), e \(f_{14}(x)\), f \(f_{18}(x)\)

Comparison experiments with other PSO variants

In the recent literature, various PSO algorithms have been developed and perform well in numerical experiments. To compare with these algorithms, eight PSO variants, including PSO-cf (Kennedy and Mendes 2002), FIPS (Mendes et al. 2004), HPSO-TVAC (Ratnaweera et al. 2004), VPSO (Kennedy and Mendes 2006), DMS-PSO (Liang and Suganthan 2005), CLPSO (Liang et al. 2006) and APSO (Zhan et al. 2009), are used to optimize ten benchmark functions, namely \(f_{1}(x)\), \(f_{2}(x)\), \(f_{3}(x)\), \(f_{5}(x)\), \(f_{6}(x)\), \(f_{7}(x)\), \(f_{8}(x)\), \(f_{9}(x)\), \(f_{10}(x)\) and \(f_{12}(x)\) of section “Test benchmark functions”. Table 2 shows their parameter settings, and their results are taken from the corresponding paper (Zhan et al. 2009). The number of generations is \(2 \times 10^{5}\), the dimension is 30, and the population size is 20. Each problem is optimized 30 times. The parameter settings of PSO-API and all other settings are identical to those in the last section. The comparisons of these PSO algorithms are shown in Table 3 in terms of average best fitness (Best) and standard deviation (SD), rank (Rank), average rank (AR) and final rank (FR) of average best fitness. Note that the best results for each benchmark function are marked in bold in Table 3.

Table 2 Parameters settings of PSO variants
Table 3 Numerical results for the comparisons

From the Rank data in Table 3, the PSO-API algorithm obtains the best results on \(f_{1}(x)\), \(f_{2}(x)\), \(f_{3}(x)\), \(f_{5}(x)\), \(f_{7}(x)\), \(f_{8}(x)\), \(f_{9}(x)\) and \(f_{10}(x)\), and performs worst on \(f_{6}(x)\) and \(f_{12}(x)\). Table 3 also shows that the FR obtained by PSO-API is better than those obtained by the other eight PSO variants, so PSO-API has the best comprehensive performance among them. Consequently, the comparisons indicate that the PSO-API algorithm has the best overall performance over several existing PSO variants and is an effective method for a variety of optimization problems.

The time complexity of the algorithms should also be considered, so a computational experiment with six PSO variants [PSO-cf (Kennedy and Mendes 2002), FIPS (Mendes et al. 2004), DMS-PSO (Liang and Suganthan 2005), CLPSO (Liang et al. 2006), LPSO (Shi and Eberhart 1998c) and PSO-API] is performed over 20 independent runs and their execution times are compared. The parameter settings of these algorithms are the same as in Table 2; the population size, dimension and number of generations are 20, 30 and 3000, respectively. Table 4 lists the CPU times (in seconds) of the six PSO algorithms, where ‘AV(CPU)’ and ‘Rank’ stand for the average CPU time over 20 runs and the ascending order of each ‘AV(CPU)’, respectively, and ‘AR’ and ‘FR’ stand for the average of the ranks and the ascending order of AR, respectively.

Table 4 Computational time of six PSO algorithms

From the Rank values of LPSO and PSO-API, we can conclude that adding the proposed strategy to the original PSO increases the computational time. In Table 4, AR reflects the overall time-consumption order of each algorithm across the twenty benchmarks. The AR values of PSO-cf and LPSO are the smallest among the six algorithms, which shows that PSO-cf and LPSO have better CPU times than the proposed algorithm. The AR values of PSO-API and CLPSO are very close to each other, which demonstrates that they have similar overall time consumption. The AR values of PSO-cf and LPSO are 4.7 and 5.45, both worse than that of PSO-API. From the FR values, although the PSO-API algorithm ranks only fourth, the extra time is worth spending to improve the accuracy of PSO. From the above comparisons of accuracy and time consumption, it is clear that the PSO-API algorithm strikes a good balance between performance and time complexity.

Comparisons experiments with similar PSO algorithms

In order to compare with FSS (Carmelo Filho et al. 2008) and CenterPSO (Liu et al. 2007), several experiments are carried out in this section. For the comparison with the FSS algorithm, the experimental settings are as follows: five benchmarks with 30 dimensions are used to assess the algorithms, namely the Generalized Rosenbrock Function and \(f_{3}(x)\), \(f_{7}(x)\), \(f_{9}(x)\) and \(f_{10}(x)\) of section “Test benchmark functions”. The Generalized Rosenbrock Function is denoted as \(f_{21}(x)\) and defined as follows.

$$f_{21} (x) = \sum\limits_{i = 1}^{n - 1} {\left( {100\left( {x_{i + 1} - x_{i}^{2} } \right)^{2} + \left( {x_{i} - 1} \right)^{2} } \right)} \quad ( - 100 \le x_{i} \le 100)$$

The population size of PSO-API is set to 30; 30 runs are conducted for each problem, and each run performs \(1 \times 10^{4}\) generations. For the comparison with the CenterPSO algorithm, the experimental settings are as follows: three benchmarks, \(f_{7}(x)\), \(f_{10}(x)\) and \(f_{21}(x)\), with 30 dimensions are used; the number of generations is 2000; four population sizes of 20, 40, 80 and 160 are tested; and each experiment is run 100 times. The average best fitness (Avg. best fitness) and standard deviation (SD) of PSO-API, FSS and CenterPSO are presented in Tables 5 and 6, where the better results are marked in bold.

Table 5 Comparison results with FSS algorithm
Table 6 Comparison results with CenterPSO algorithm

From the data in Table 5, PSO-API obtains a better average best fitness and standard deviation than the FSS algorithm on all five benchmarks except the Generalized Rosenbrock Function, on which PSO-API and FSS obtain results of the same order of magnitude. From Table 6, the results obtained by PSO-API with all population sizes are better than those obtained by the CenterPSO algorithm on all three benchmarks. Therefore, the statistical analysis indicates that the proposed algorithm performs better than the FSS and CenterPSO algorithms, and the above experiments indicate that PSO-API is a high-performance algorithm on most of the benchmarks.

Conclusions

In this work, to make full use of the multi-information characteristics of all personal-best information, an improved PSO algorithm using three positions built from all personal-best information has been developed to enhance performance. In the proposed algorithm, an improved cognition term using the personal-best position, the centroid position and the median position is introduced into the velocity update of PSO. To validate this strategy, a set of unimodal, multimodal, rotated and shifted benchmark functions with 20, 30 and 50 dimensions has been optimized. Experimental results show that the strategy using multi-information characteristics of all personal-best information is valid for improving PSO’s performance. Moreover, the PSO-API algorithm has also been compared with several PSO variants and with algorithms similar to the proposed one. Numerical results show that the PSO-API algorithm has higher precision and satisfactory performance. In summary, the proposed strategy enhances the search ability of PSO, and PSO-API is an efficient PSO variant for obtaining promising solutions on most benchmark functions.

References

  • Beheshti Z, Shamsuddin SMH, Hasan S (2013) MPSO: median-oriented particle swarm optimization. Appl Math Comput 219(11):5817–5836


  • Beielstein T, Parsopoulos KE, Vrahatis MN (2002) Tuning PSO parameters through sensitivity analysis. Universität Dortmund

  • Bonyadi MR, Li X, Michalewicz Z (2014) A hybrid particle swarm with a time-adaptive topology for constrained optimization. Swarm Evol Comput 18:22–37


  • Carmelo Filho JA, De Lima Neto FB, Lins AJCC et al (2008) A novel search algorithm based on fish school behavior. In: Proceedings of the 2008 IEEE international conference on systems, man and cybernetics, pp 2646–2651

  • Cheng R, Jin Y (2015) A social learning particle swarm optimization algorithm for scalable optimization. Inf Sci 291:43–60


  • Deep K, Thakur M (2007) A new crossover operator for real coded genetic algorithms. Appl Math Comput 188(1):895–911


  • Eberhart RC, Kennedy J (1995) A new optimizer using particle swarm theory. In: Proceedings of the sixth international symposium on micro machine and human science, vol 1, pp 39–43

  • Haklı H, Uğuz H (2014) A novel particle swarm optimization algorithm with Levy flight. Appl Soft Comput 23:333–345


  • Hu W, Yen GG (2015) Adaptive multiobjective particle swarm optimization based on parallel cell coordinate system. IEEE Trans Evol Comput 19(1):1–18


  • Kennedy J, Eberhart R (1995) Particle swarm optimization. In: Proceedings of IEEE international conference on neural networks, Perth, Australia, Piscataway, vol 4, pp 1942–1948

  • Kennedy J, Mendes R (2002) Population structure and particle swarm performance. In: Proceedings of the 2002 congress on evolutionary computation, vol 2, pp 1671–1676

  • Kennedy J, Mendes R (2006) Neighborhood topologies in fully informed and best-of-neighborhood particle swarms. IEEE Trans Syst Man Cybern C Appl Rev 36(4):515–519


  • Li Y, Zhan ZH, Lin S et al (2015) Competitive and cooperative particle swarm optimization with information sharing mechanism for global optimization problems. Inf Sci 293:370–382


  • Liang JJ, Suganthan PN (2005) Dynamic multi-swarm particle swarm optimizer. In: Proceedings of the 2005 congress on swarm intelligence symposium, vol 8237, pp 124–129

  • Liang JJ, Qin AK, Suganthan PN et al (2006) Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Trans Evol Comput 10(3):281–295


  • Lim WH, Isa NAM (2014a) An adaptive two-layer particle swarm optimization with elitist learning strategy. Inf Sci 273:49–72


  • Lim WH, Isa NAM (2014b) Particle swarm optimization with adaptive time-varying topology connectivity. Appl Soft Comput 24:623–642


  • Lim WH, Isa NAM (2014c) Particle swarm optimization with increasing topology connectivity. Eng Appl Artif Intell 27:80–102


  • Lim WH, Isa NAM (2014d) Bidirectional teaching and peer-learning particle swarm optimization. Inf Sci 280:111–134


  • Lim WH, Isa NAM (2014e) Teaching and peer-learning particle swarm optimization. Appl Soft Comput 18:39–58


  • Liu Y, Qin Z, Shi Z et al (2007) Center particle swarm optimization. Neurocomputing 70(4):672–679


  • Mendes R, Kennedy J, Neves J (2004) The fully informed particle swarm: simpler, maybe better. IEEE Trans Evol Comput 8(3):204–210


  • Qin Q, Cheng S, Zhang Q et al (2015) Multiple strategies based orthogonal design particle swarm optimizer for numerical optimization. Comput Oper Res 60:91–110


  • Rao RV, Patel V (2013) An improved teaching-learning-based optimization algorithm for solving unconstrained optimization problems. Scientia Iranica 20(3):710–720


  • Ratnaweera A, Halgamuge SK, Watson HC (2004) Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients. IEEE Trans Evol Comput 8(3):240–255


  • Shi Y, Eberhart RC (1998a) Parameter selection in particle swarm optimization. Evolutionary programming VII, vol 1447. Springer, Berlin, pp 591–600

  • Shi Y, Eberhart R (1998b) A modified particle swarm optimizer. In: Proceedings of the 1998 IEEE international conference on evolutionary computation, vol 6, pp 69–73

  • Shi Y, Eberhart R (1998c) A modified particle swarm optimizer. In: IEEE international conference on evolutionary computation, the 1998 IEEE international conference on computational intelligence, pp 69–73

  • Shi Y, Eberhart RC (1999) Empirical study of particle swarm optimization. Proc IEEE Congr Evol Comput 3:1945–1950


  • Suganthan PN, Hansen N, Liang JJ et al (2005) Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization. In: Proceedings of IEEE congress on evolutionary computation, pp 1–50

  • Sun S, Li J (2014) A two-swarm cooperative particle swarms optimization. Swarm Evol Comput 15:1–18


  • Wang L, Yang B, Chen Y (2014) Improving particle swarm optimization using multi-layer searching strategy. Inf Sci 274:70–94


  • Yadav A, Deep K (2014) An efficient co-swarm particle swarm optimization for non-linear constrained optimization. J Comput Sci 5(2):258–268


  • Yao X, Liu Y, Lin G (1999) Evolutionary programming made faster. IEEE Trans Evol Comput 3(2):82–102


  • Zhan ZH, Zhang J, Li Y et al (2009) Adaptive particle swarm optimization. IEEE Trans Syst Man Cybern B Cybern 39(6):1362–1381


  • Zhang W, Ma D, Wei J et al (2014) A parameter selection strategy for particle swarm optimization based on particle positions. Exp Syst Appl 41(7):3576–3584



Authors’ contributions

S.H. carried out the study, collected data, designed the experiments, implemented the simulation, analyzed data and wrote the main manuscript. N.T. provided some intellectual input and revised the manuscript. Y.W. gave technical support and helped with the design of the study. Z.J. provided general supervision of the research. All authors read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests in this study.

Funding

In this work, the design of the study and collection, analysis, and interpretation of data are funded by the National High-tech Research and Development Projects of China under Grant No: 2014AA041505 and the writing of the manuscript is funded by the National Natural Science Foundation of China under Grant No: 61572238 and by the Provincial Outstanding Youth Foundation of Jiangsu Province under Grant No: BK20160001.

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Yan Wang.

Appendix

Appendix

See Tables 7, 8, 9, 10, 11, 12.

Table 7 Results of twenty benchmark problems (generations = 5000 and dimensions = 20)
Table 8 Wilcoxon’s rank sum test results (generations = 5000 and dimensions = 20)
Table 9 Results of twenty benchmark problems (generations = 5000 and dimensions = 30)
Table 10 Wilcoxon’s rank sum test results (generations = 5000 and dimensions = 30)
Table 11 Results of twenty benchmark problems (generations = 5000 and dimensions = 50)
Table 12 Wilcoxon’s rank sum test results (generations = 5000 and dimensions = 50)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Huang, S., Tian, N., Wang, Y. et al. Particle swarm optimization using multi-information characteristics of all personal-best information. SpringerPlus 5, 1632 (2016). https://doi.org/10.1186/s40064-016-3244-8

