Open Access

Particle swarm optimization using multi-information characteristics of all personal-best information

SpringerPlus 2016, 5:1632

https://doi.org/10.1186/s40064-016-3244-8

Received: 9 May 2016

Accepted: 6 September 2016

Published: 21 September 2016

Abstract

Convergence stagnation is the chief difficulty most particle swarm optimization variants face on hard optimization problems. To address this issue, a novel particle swarm optimization using multi-information characteristics of all personal-best information is developed in this research. In the modified algorithm, two positions are defined from the personal-best positions, and an improved cognition term built from three positions of all personal-best information is used in the velocity update equation to enhance the search capability. This strategy lets particles fly in a better direction by discovering useful information in all the personal-best positions. The validity of the proposed algorithm is assessed on twenty benchmark problems including unimodal, multimodal, rotated and shifted functions, and the results are compared with those obtained by several published variants of particle swarm optimization. Computational results demonstrate that the proposed algorithm finds the global optimum on several problems and high-quality solutions in most cases with a fast convergence speed.

Keywords

Premature convergence · Intelligence algorithm · Particle swarm optimization · Personal-best position

Background

Particle swarm optimization (PSO) is a bio-inspired optimization algorithm introduced by Eberhart and Kennedy (1995), enlightened by the interaction and communication within bird flocks or fish schools. PSO has attracted a great deal of attention as a treatment for high-dimensional nonlinear optimization problems due to its computational efficiency and simple implementation. With the development of intelligent manufacturing and complex systems, many engineering problems are becoming increasingly hard to optimize, and thus time-consuming computation and premature convergence often occur in the optimization process. Therefore, many PSO variants with new techniques have been proposed to address these problems.

Some researchers gained insight into the three control parameters, namely the two acceleration coefficients and the inertia weight, to develop PSO variants (Beielstein et al. 2002; Zhang et al. 2014; Shi and Eberhart 1998a, b). In Shi and Eberhart (1998b), linearly decreasing inertia weight particle swarm optimization (LPSO) was developed by modifying the inertia weight, and the introduction of this dynamic inertia weight greatly strengthened the performance of the PSO algorithm. In recent research, multiple-swarm and multiple-layer strategies have proved effective in improving the performance of PSO (Sun and Li 2014; Yadav and Deep 2014; Lim and Isa 2014a; Wang et al. 2014). Sun and Li presented a cooperative particle swarm optimization (TCPSO) with two swarms (a slave swarm and a master swarm) for optimization problems in large-scale search spaces (Sun and Li 2014), and two subswarms using shrinking-hypersphere PSO (SHPSO) and DE were also used in a new co-swarm PSO for constrained optimization problems (Yadav and Deep 2014). Multiple-layer strategies, such as the adaptive two-layer particle swarm optimization algorithm with elitist learning strategy (ATLPSO-ELS) (Lim and Isa 2014a) and multi-layer particle swarm optimization (MLPSO) (Wang et al. 2014), were also used to solve complex problems. PSO with different topologies has different exploration/exploitation abilities and performance (Bonyadi et al. 2014; Lim and Isa 2014b, c). Many new topology strategies [time-adaptive topology (Bonyadi et al. 2014), adaptive time-varying topology connectivity (Lim and Isa 2014b), increasing topology connectivity (Lim and Isa 2014c)] were also applied to PSO. Compared with a fully-connected or regular topology, these topologies lead to a different optimization process. In recent years, new techniques such as Lévy flight (Haklı and Uğuz 2014), the parallel cell coordinate system (Hu and Yen 2015), competitive and cooperative mechanisms (Li et al. 2015) and orthogonal design (Qin et al. 2015) have also been adopted in PSO.

Many learning strategies have been introduced into PSO to enhance its adaptability to complex optimization problems, since learning behavior in social animals plays a key role in their adaptation to a changing environment (Cheng and Jin 2015; Rao and Patel 2013; Lim and Isa 2014d, e; Shi and Eberhart 1999). Cheng and Jin presented a modified particle swarm optimization using a social learning mechanism (SL-PSO) (Cheng and Jin 2015), and the concepts of teachers, tutorial training and self-motivated learning were introduced into teaching–learning-based PSO by Rao and Patel for performance enhancement (Rao and Patel 2013). Using teaching and peer-learning behaviors, a bidirectional teaching and peer-learning PSO (BTPLPSO) (Lim and Isa 2014d) and a two-learning-phases PSO (TPLPSO) (Lim and Isa 2014e) were both proposed by Lim and Isa.

Communication and learning behavior is a distinguishing feature of social animals and improves social efficiency, and an information-sharing mechanism plays a key role in this behavior. To share personal-best information fairly, a particle swarm optimizer using several multi-information characteristics of all personal-best information is developed in this paper. In the proposed PSO, two representative positions, which capture the features of all personal-best positions, are defined to acquire the information of all personal-best positions. The cognition term in the velocity update equation is then formed from three positions. Owing to the effect of all personal-best fitnesses, each particle can update its velocity and position according to the distribution of personal-best fitnesses. This strategy makes full use of all personal-best information and can correct some misguided directions of personal-best positions.

The rest of the paper is organized as follows. Section “Particle swarm optimizer” presents the theory and formulation of the PSO algorithm and the linearly decreasing inertia weight. In section “Particle swarm optimization using all personal-best information”, the two representative positions are described in detail and the proposed PSO using several multi-information characteristics of all personal-best positions is provided. Numerical results and statistical analysis are shown in section “Experiments and results”. Section “Conclusions” concludes this paper.

Particle swarm optimizer

Velocity and position formulation

The particle swarm optimizer is inspired by the foraging behaviors of fish and birds, which are simplified as a swarm of particles mimicking their key behaviors. As a swarm of n particles searches the feasible space, each particle’s position represents a potential solution of the optimization problem, and the swarm finds high-quality solutions through the particles updating their velocities and positions. Assuming the problem has m decision variables, the position and velocity of particle i are represented by the m-dimensional vectors \(\varvec{x}_{i} = (x_{i1} ,x_{i2} , \ldots ,x_{{i{\text{m}}}} )\) and \(\varvec{v}_{i} = (v_{i1} ,v_{i2} , \ldots ,v_{{i{\text{m}}}} )\). Two positions, named the personal-best position and the global-best position, are defined in PSO to update the velocities and guide the swarm. The personal-best position of particle i is denoted as \(\varvec{p}_{{{\text{best}},i}} = (p_{{{\text{best}},i1}} ,p_{{{\text{best}},i2}} , \ldots ,p_{{{\text{best}},i{\text{m}}}} )\) and the global-best position of the swarm is denoted as \(\varvec{g}_{\text{best}} = (g_{{{\text{best}},1}} ,g_{{{\text{best}},2}} , \ldots ,g_{{{\text{best}},{\text{m}}}} )\). The velocity \(\varvec{v}_{i}^{t + 1}\) and the position \(\varvec{x}_{i}^{t + 1}\) of particle i are given by Eqs. (1) and (2).
$$\varvec{v}_{i}^{t + 1} = \omega \varvec{v}_{i}^{t} + c_{1} r_{1} \left( {\varvec{p}_{{{\text{best}},i}}^{t} - \varvec{x}_{i}^{t} } \right) + c_{2} r_{2} \left( {\varvec{g}_{\text{best}}^{t} - \varvec{x}_{i}^{t} } \right)$$
(1)
$$\varvec{x}_{i}^{t + 1} = \varvec{x}_{i}^{t} + \varvec{v}_{i}^{t + 1}$$
(2)
where c 1 and c 2 are the cognitive factor and the social factor, ω is the inertia weight, r 1 and r 2 are two random real numbers in (0, 1), and t is the current generation. According to the theory of PSO, the personal experience and the global experience draw the particle closer to them so that it reaches a new promising position.
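
As an illustration, Eqs. (1) and (2) can be applied to a whole swarm at once with NumPy; the array shapes, seed and attractor values below are illustrative assumptions, not part of the original formulation:

```python
import numpy as np

def pso_step(x, v, p_best, g_best, w=0.7, c1=2.0, c2=2.0, rng=None):
    """One application of Eqs. (1)-(2) for an (n, m) swarm."""
    rng = rng or np.random.default_rng(0)
    r1 = rng.random(x.shape)  # random numbers in (0, 1), drawn per particle and dimension
    r2 = rng.random(x.shape)
    v_new = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)  # Eq. (1)
    return x + v_new, v_new                                          # Eq. (2)

# tiny demo: 3 particles in 2 dimensions, all attracted toward (1, 1)
x = np.zeros((3, 2))
v = np.zeros((3, 2))
p_best = np.ones((3, 2))
g_best = np.ones(2)
x_new, v_new = pso_step(x, v, p_best, g_best)
```

Since both attractors lie at (1, 1) and the swarm starts at rest at the origin, every velocity component is non-negative and each particle moves toward the attractors.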

Linearly decreasing inertia weight

Appropriate selection of the inertia weight balances global exploration and local exploitation during the evolution process: a large ω benefits the global search while a small value contributes to local exploitation. The linearly decreasing inertia weight adopted in PSO (LPSO) significantly improves the performance of PSO on various optimization problems, and the inertia weight ω is given by Eq. (3):
$$\omega ={\upomega}_{\rm max } - ({\upomega}_{\rm max } -{\upomega}_{\rm min } )\frac{t}{T}$$
(3)
where \({\text{T}}\) is the maximal generation, and \({\upomega}_{\rm max }\) and \({\upomega}_{\rm{min} }\) are the upper and lower limits. Numerical experiments illustrated the impact of ω, and 0.9 (upper value) and 0.4 (lower value) are suggested (Shi and Eberhart 1999).
From the above description, LPSO pseudo-code is shown in Algorithm 1.
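
For reference, the schedule of Eq. (3) is a one-line function; this is a minimal sketch with the suggested 0.9/0.4 limits as defaults:

```python
def linear_inertia(t, T, w_max=0.9, w_min=0.4):
    """Eq. (3): inertia weight decreasing linearly from w_max at t = 0 to w_min at t = T."""
    return w_max - (w_max - w_min) * t / T

w = linear_inertia(2500, 5000)  # halfway through a 5000-generation run
```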

Particle swarm optimization using all personal-best information

Analysis of personal-best information

Learning is a special skill of social animals, which can share information with the members of their group. The cooperative behavior of a swarm is more efficient than an individual acting alone because of the fruitful information exchanged. In PSO, each particle can provide its personal-best position to guide its flying direction, and taken together the personal-best positions of the whole swarm imply the distribution of good-fitness information. Taking full advantage of the multi-information characteristics of all personal-best information helps the swarm ignore the erroneous information of particles trapped in local optima. In standard PSO, a personal-best position is used only by its own particle in the evolutionary process and does not reflect the fitness distribution in the landscape; misguided information in personal-best positions, which has no opportunity to be corrected, can make PSO premature. Therefore, two positions, which incorporate the influence of the personal-best fitness distribution, are defined to strengthen each particle’s ability to learn from the experience of other particles. A cognition term built from three defined personal-best positions in the velocity update equation is then formed to reduce the chance of misguidance. The details of the improved cognition term and the proposed PSO algorithm are as follows.

Detail of improved PSO algorithm

Step 1 Calculate all personal-best positions’ fitnesses, and then figure out the minimal fitness and the maximal fitness among these personal-best fitnesses. The way to find the minimal fitness and the maximal fitness is as follows:
$$f_{\rm min } = \hbox{min} \{ f(\varvec{p}_{{{\text{best}},i}} )|i = 1,2, \ldots ,{\text{n}}\}$$
(4)
$$f_{\rm max } = \hbox{max} \{ f(\varvec{p}_{{{\text{best}},i}} ) |i = 1,2, \ldots ,{\text{n}}\}$$
(5)
where f min and f max stand for the minimal fitness and the maximal fitness of personal-best positions. f denotes the fitness function.

Step 2 Normalization method of personal-best fitness.

As the fitness value varies over a wide range across optimization problems, a robust way to suitably reflect the influence of fitness is to normalize the personal-best fitnesses. For a minimization problem, the smaller the fitness value, the stronger the influence of the personal-best position. According to this feature, the normalization method is given by Eq. (6).
$$r_{i} = \frac{{f_{\rm max } - f(\varvec{p}_{{{\text{best}},i}} )}}{{f_{\rm max } - f_{\rm min } }}$$
(6)
where r i stands for the normalized value of the ith personal-best fitness.
Step 3 After normalizing the personal-best fitnesses, we also need the proportions of these fitnesses. The proportion of the ith personal-best fitness is denoted as θ i . Thus, from the normalized value of the ith personal-best fitness, θ i is obtained as follows:
$$\theta_{i} = \left\{ {\begin{array}{*{20}l} {{{r_{i} } \mathord{\left/ {\vphantom {{r_{i} } {\sum\nolimits_{i = 1}^{n} {r_{i} } }}} \right. \kern-0pt} {\sum\nolimits_{i = 1}^{n} {r_{i} } }}} &\quad {{\text{if }}f_{\rm max } \ne f_{\rm min } } \\ {{1 \mathord{\left/ {\vphantom {1 n}} \right. \kern-0pt} n}} &\quad {\text{otherwise}} \\ \end{array} } \right.$$
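
Steps 1–3 can be sketched together in NumPy as below; the function name and the example fitness values are assumptions for illustration:

```python
import numpy as np

def pbest_proportions(f_pbest):
    """Steps 1-3: Eqs. (4)-(6) plus the proportion rule for theta_i."""
    f_pbest = np.asarray(f_pbest, dtype=float)
    f_min, f_max = f_pbest.min(), f_pbest.max()   # Eqs. (4)-(5)
    if f_max == f_min:                            # all personal-bests equal: uniform weights
        return np.full(f_pbest.size, 1.0 / f_pbest.size)
    r = (f_max - f_pbest) / (f_max - f_min)       # Eq. (6): best fitness maps to r = 1
    return r / r.sum()                            # proportions theta_i, summing to 1

theta = pbest_proportions([1.0, 2.0, 3.0])  # best (smallest) fitness gets the largest share
```

Note that the worst personal-best receives proportion zero, so it contributes nothing to the weighted positions defined in the next steps.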
Step 4 Calculate centroid position \(\varvec{p}_{\text{centr}}\) of all personal-best positions:
$$p_{{{\text{centr}},j}} = \sum\limits_{i = 1}^{n} {p_{{{\text{best}},ij}} } \theta_{i}$$
(7)

The centroid position is defined as the \(\varvec{\theta}\)-weighted sum of \({\mathbf{p}}_{\text{best}}\) to reflect the influence of the personal-best fitnesses. By analogy with the relation of density and mass in physics, regarding the personal-best fitness as ‘the density of an object’ and the personal-best position as ‘a location in the object’, the position \(\varvec{p}_{\text{centr}}\) can be seen as ‘the centroid of the object’. Just as the centroid of an object reflects the distribution of its mass, \(\varvec{p}_{\text{centr}}\) reflects the distribution of high-quality fitness: the centroid position is always close to the area where most good-fitness positions are located.
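
A minimal sketch of Eq. (7), assuming the proportions θ have already been computed as in Step 3 (the example weights and positions are illustrative):

```python
import numpy as np

def centroid_position(p_best, theta):
    """Eq. (7): theta-weighted sum over the rows (particles) of p_best."""
    return np.asarray(theta) @ np.asarray(p_best)

p_best = np.array([[0.0, 0.0], [1.0, 1.0], [4.0, 4.0]])
theta = np.array([2 / 3, 1 / 3, 0.0])        # worst particle carries zero weight
p_centr = centroid_position(p_best, theta)   # lands among the well-weighted positions
```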

Step 5 Calculate median position \(\varvec{p}_{\text{med}}\) of all personal-best positions.

\(\varvec{p}_{\text{med}}\) is the position holding the median personal-best fitness and reflects the distribution of high-quality fitness from another perspective. \(\varvec{p}_{\text{med}}\) is obtained without a weighted sum and can avoid the influence of some bad personal-best positions. Algorithm 2 gives the pseudo-code to find the median fitness \(\theta_{\text{med}}\) and the median position \(\varvec{p}_{\text{med}}\).
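
Since Algorithm 2 is not reproduced here, the following sketch implements one straightforward reading of it: return the personal-best position holding the median personal-best fitness. The tie-breaking rule for even swarm sizes (taking the lower-middle fitness) is an assumption:

```python
import numpy as np

def median_position(p_best, f_pbest):
    """Return the personal-best position whose fitness is the median personal-best fitness."""
    order = np.argsort(f_pbest)          # particle indices sorted by fitness
    mid = order[(len(order) - 1) // 2]   # lower-middle element for even n (assumption)
    return np.asarray(p_best)[mid]

p_best = np.array([[0.0, 0.0], [1.0, 1.0], [4.0, 4.0]])
p_med = median_position(p_best, [3.0, 1.0, 2.0])  # median fitness is 2.0 -> third row
```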

Step 6 Cognitive guiding position \({\mathbf{p}}_{\text{best}}^{\prime }\).

In the proposed PSO, cognitive guiding position using the above defined positions is calculated according to the following equations:
$$\varvec{p}_{{{\text{best}},i}}^{\prime } = \frac{{\varvec{p}_{{{\text{best}},i}} + \varvec{p}_{\text{centr}} - \varvec{p}_{\text{med}} }}{2}$$
(8)

The cognitive guiding position combines three positions: the personal-best position \({\mathbf{p}}_{\text{best}}\), the centroid position \(\varvec{p}_{\text{centr}}\) and the median position \(\varvec{p}_{\text{med}}\). \({\mathbf{p}}_{\text{best}}\) and \(\varvec{p}_{\text{centr}} - \varvec{p}_{\text{med}}\) are used to ‘pull’ the particle out of a local optimum, because erroneous information in \({\mathbf{p}}_{\text{best}}\) and \(\varvec{g}_{\text{best}}\) may accelerate premature convergence. \(\varvec{p}_{\text{centr}}\) and \(\varvec{p}_{\text{med}}\) carry all personal-best information and can guide particles in a better direction. The empirical coefficient 1/2 makes the cognitive guiding position suitable for the improved cognition term.

Step 7 Improved cognition term \({\mathbf{a}}_{\text{cog}}\).
$$\varvec{a}_{{{\text{cog}},i}} = \sum\limits_{i = 1}^{n} {\varvec{p}_{{{\text{best}},i}}^{\prime } \theta_{i} } - \varvec{x}_{i}$$
(9)

The improved cognition term \({\mathbf{a}}_{\text{cog}}\) will make full use of all personal-best fitnesses.

Step 8 Modified velocity update equation.

In this step, the original cognition term in the velocity update equation of the PSO and LPSO algorithms is replaced with the improved cognition term \({\mathbf{a}}_{\text{cog}}\). The particle swarm optimizer using multi-information characteristics of all personal-best information (PSO-API) and the linearly decreasing inertia weight PSO-API (LPSO-API) are obtained with this modified velocity update equation. Taking the LPSO-API algorithm as an example, each particle’s velocity is updated as in Eq. (10).
$$\varvec{v}_{i}^{t + 1} = \omega \varvec{v}_{i}^{t} + r_{1} \cdot \varvec{a}_{{{\text{cog}},i}}^{t} + c \cdot r_{2} \cdot \left( {\varvec{g}_{\text{best}}^{t} - \varvec{x}_{i}^{t} } \right)$$
(10)
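
Putting Steps 1–8 together, one application of Eq. (10) can be sketched as follows; the function name, default coefficients and the median tie rule are illustrative assumptions, while the formulas follow Eqs. (6)–(9):

```python
import numpy as np

def lpso_api_velocity(x, v, p_best, g_best, f_pbest, w=0.7, c=2.0, rng=None):
    """One application of Eq. (10), building the improved cognition term from Eqs. (6)-(9)."""
    rng = rng or np.random.default_rng(1)
    f_pbest = np.asarray(f_pbest, dtype=float)
    f_min, f_max = f_pbest.min(), f_pbest.max()
    if f_max == f_min:
        theta = np.full(f_pbest.size, 1.0 / f_pbest.size)
    else:
        r = (f_max - f_pbest) / (f_max - f_min)                  # Eq. (6)
        theta = r / r.sum()
    p_centr = theta @ p_best                                      # Eq. (7)
    p_med = p_best[np.argsort(f_pbest)[(f_pbest.size - 1) // 2]]  # median pbest (assumed tie rule)
    p_guid = (p_best + p_centr - p_med) / 2.0                     # Eq. (8), broadcast over particles
    a_cog = theta @ p_guid - x                                    # Eq. (9)
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    return w * v + r1 * a_cog + c * r2 * (g_best - x)             # Eq. (10)

x = np.zeros((3, 2))
v = np.zeros((3, 2))
p_best = np.array([[0.0, 0.0], [1.0, 1.0], [4.0, 4.0]])
v_new = lpso_api_velocity(x, v, p_best, np.ones(2), [3.0, 1.0, 2.0])
```

In Eq. (9) the weighted sum of the cognitive guiding positions is the same vector for every particle, so the cognition term differs between particles only through their current positions x.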
Ignoring the influence of the current velocity and all the coefficients, four positions (\({\mathbf{p}}_{\text{best}}\), \(\varvec{g}_{\text{best}}\), \(\varvec{p}_{\text{centr}}\) and \(\varvec{p}_{\text{med}}\)) influence the velocity update in Eq. (10). In PSO, \(\Delta \varvec{v}^{\prime } = \varvec{g}_{\text{best}} + \varvec{p}_{{{\text{best}},i}}\) is introduced to show the influence of \(\varvec{g}_{\text{best}}\) and \(\varvec{p}_{{{\text{best}},i}}\). As illustrated in Fig. 1b, if the current \(\varvec{g}_{\text{best}}\) is a local optimum, \(\Delta \varvec{v}^{\prime }\) accelerates the particles’ fall into the local-optimum region. Compared with PSO, \(\varvec{p}_{\text{centr}}\) and \(\varvec{p}_{\text{med}}\) are added to the velocity update equation in PSO-API. In Fig. 1a, the white circle points represent personal-best positions with worse fitnesses and the grey circle points represent personal-best positions with better fitnesses. Given the distribution of these points, the locations of \(\varvec{p}_{\text{centr}}\) and \(\varvec{p}_{\text{med}}\), calculated by Eq. (7) and Algorithm 2, are shown as the yellow circle points in Fig. 1. Defined by all personal-best positions and their fitnesses, \(\varvec{p}_{\text{centr}}\) lies closer to the region containing many personal-best positions with good fitnesses. Although the fitness around the real global-best position is worse than that of the local optimum \(\varvec{g}_{\text{best}}\), the personal-best positions are also prone to gather at good-fitness positions around the real global-best position, shown as the black rhombic point in Fig. 1a. Regarding \(\varvec{p}_{\text{centr}}\) as a reference point, \(\Delta \varvec{v}^{\prime \prime } = \varvec{p}_{{{\text{best}},i}} - \varvec{p}_{\text{med}}\), which carries all personal-best information, represents the influence of the good-fitness distribution. As illustrated in Fig. 1c, α represents the direction adjustment produced by \(\Delta \varvec{v}^{\prime \prime }\), which makes particles turn toward the real global-best position. Constantly adjusted by α during the search, the particles have a greater probability of flying to the real global-best position. Moreover, \(|\Delta \varvec{v}^{\prime \prime } |\) is small when a uniform fitness distribution occurs during the search, so \(\Delta \varvec{v}^{\prime \prime }\) has little effect on the particles; in that case only \({\mathbf{p}}_{\text{best}}\) and \(\varvec{g}_{\text{best}}\) influence the particles’ trajectories and PSO-API behaves the same as PSO. Therefore, three terms (\(\Delta \varvec{v}^{\prime \prime }\), \({\mathbf{p}}_{\text{best}}\) and \(\varvec{g}_{\text{best}}\)) contribute to adjusting the velocity, and their different ‘pull’ and ‘push’ influences give PSO-API a stable performance over a variety of problems. The flowchart of the LPSO-API algorithm is shown in Fig. 2.
Fig. 1

Influence of \(\varvec{p}_{\text{centr}}\) and \(\varvec{p}_{\text{med}}\) in search process. a Distribution of p centr and p med, b Influence of all defined best positions, c Adjustment of each velocity influenced by all best positions

Fig. 2

Flowchart of LPSO-API algorithm

Experiments and results

Test benchmark functions

To assess the performance of the proposed algorithm, twenty benchmark problems including unimodal, multimodal, rotated and shifted functions selected from the literature (Deep and Thakur 2007; Liang et al. 2006; Suganthan et al. 2005; Yao et al. 1999) are used. Note that all the problems are minimization problems and each has only one global optimum. The function names, dimensions, search ranges and global optimum values are listed in Table 1, and the formulations of the problems are as follows:
Table 1

Twenty benchmark problems

| Function name | Dimensions | Search range | Global optimum |
| --- | --- | --- | --- |
| f 1(x) | 20/30/50 | [−100, 100]^D | 0 |
| f 2(x) | 20/30/50 | [−10, 10]^D | 0 |
| f 3(x) | 20/30/50 | [−100, 100]^D | 0 |
| f 4(x) | 20/30/50 | [−100, 100]^D | 0 |
| f 5(x) | 20/30/50 | [−100, 100]^D | 0 |
| f 6(x) | 20/30/50 | [−1.28, 1.28]^D | 0 |
| f 7(x) | 20/30/50 | [−5.12, 5.12]^D | 0 |
| f 8(x) | 20/30/50 | [−5.12, 5.12]^D | 0 |
| f 9(x) | 20/30/50 | [−32, 32]^D | 0 |
| f 10(x) | 20/30/50 | [−600, 600]^D | 0 |
| f 11(x) | 20/30/50 | [−0.5, 0.5]^D | 0 |
| f 12(x) | 20/30/50 | [−50, 50]^D | 0 |
| f 13(x) | 20/30/50 | [−1, 1]^D | 0 |
| f 14(x) | 20/30/50 | [−5.12, 5.12]^D | 0 |
| f 15(x) | 20/30/50 | [−100, 100]^D | 0 |
| f 16(x) | 20/30/50 | [−100, 100]^D | 0 |
| f 17(x) | 20/30/50 | [−1.28, 1.28]^D | 0 |
| f 18(x) | 20/30/50 | [−100, 100]^D | −450 |
| f 19(x) | 20/30/50 | [−32, 32]^D | −140 |
| f 20(x) | 20/30/50 | [−0.5, 0.5]^D | 90 |

  1.

    Sphere Function (unimodal function)

    $$f_{1} (x) = \sum\limits_{i = 1}^{n} {x_{i}^{2} }$$
     
  2.

    Schwefel’s Problem 2.22 (unimodal function)

    $$f_{2} (x) = \sum\limits_{i = 1}^{n} {\left| {x_{i} } \right| + \mathop \prod \limits_{i = 1}^{n} \left| {x_{i} } \right|}$$
     
  3.

    Schwefel’s Problem 1.2 (unimodal function)

    $$f_{3} (x) = \sum\limits_{i = 1}^{n} {\left( {\sum\limits_{j = 1}^{i} {x_{j} } } \right)^{2} }$$
     
  4.

    Schwefel’s Problem 2.21 (unimodal function)

    $$f_{4} (x) = \mathop {\hbox{max} }\limits_{i} \{ \left. {\left| {x_{i} } \right|,1 \le i \le n} \right\}$$
     
  5.

    Step Function (unimodal function)

    $$f_{5} (x) = \sum\limits_{i = 1}^{n} {\left( {\left| {x_{i} + 0.5} \right|} \right)}^{2}$$
     
  6.

    Quartic Function, i.e. Noise (unimodal function)

    $$f_{6} (x) = \sum\limits_{i = 1}^{n} {ix_{i}^{4} } + random[0,1)$$
     
  7.

    Generalized Rastrigin’s Function (multimodal function)

    $$f_{7} (x) = \sum\limits_{i = 1}^{n} {\left[ {x_{i}^{2} - 10\cos (2\pi x_{i} ) + 10} \right]}$$
     
  8.

    Non-continuous Rastrigin’s Function (multimodal function)

    $$\begin{aligned} f_{8} (x) = \sum\limits_{i = 1}^{n} {\left[ {y_{i}^{2} - 10\cos (2\pi y_{i} ) + 10} \right]} \hfill \\ {\text{where}}\quad {\kern 1pt} y_{i} = \left\{ {\begin{array}{*{20}c} {x_{i} } \\ {\frac{{round(2x_{i} )}}{2}} \\ \end{array} } \right.\quad {\kern 1pt} \begin{array}{*{20}c} {\left| {x_{i} } \right| \le 0.5} \\ {\left| {x_{i} } \right| \ge 0.5} \\ \end{array} \hfill \\ \end{aligned}$$
     
  9.

    Ackley’s Function (multimodal function)

    $$f_{9} (x) = - 20\exp \left( { - 0.2\sqrt {\frac{1}{n}\sum\limits_{i = 1}^{n} {x_{i}^{2} } } } \right) - \exp \left( {\frac{1}{n}\sum\limits_{i = 1}^{n} {\cos 2\pi x_{i} } } \right) + 20 + e$$
     
  10.

    Generalized Griewank Function (multimodal function)

    $$f_{10} (x) = \frac{1}{4000}\sum\limits_{i = 1}^{n} {x_{i}^{2} } - \prod\limits_{i = 1}^{n} {\cos \left(\frac{{x_{i} }}{\sqrt i }\right)} + 1$$
     
  11.

    Weierstrass Function (multimodal function)

    $$\begin{aligned} & f_{11} (x) = \sum\limits_{i = 1}^{n} \left( \sum\limits_{k = 0}^{k_{\max } } \left[ a^{k} \cos \left( 2\pi b^{k} \left( x_{i} + 0.5 \right) \right) \right] \right) - n\sum\limits_{k = 0}^{k_{\max } } \left[ a^{k} \cos \left( 2\pi b^{k} \times 0.5 \right) \right] \hfill \\ &\quad {\text{where}}\quad a = 0.5,\quad b = 3,\quad k_{\max } = 20 \hfill \\ \end{aligned}$$
     
  12.

    Generalized Penalized Function (multimodal function)

    $$\begin{aligned}& f_{12} (x) = \tfrac{\pi }{n}\left\{ {10\sin \left( {\pi y_{1} } \right) + \sum\limits_{i = 1}^{n - 1} {\left( {y_{i} - 1} \right)^{2} \left[ {1 + 10\sin^{2} \left( {\pi y_{i + 1} } \right)} \right] + \left( {y_{n} - 1} \right)^{2} } } \right\} + \sum\limits_{i = 1}^{n} u (x_{i} ,10,100,4) \hfill \\&\quad y_{i} = 1 + \frac{{x_{i} + 1}}{4},u(x_{i} ,a,k,m) = \left\{ {\begin{array}{*{20}l} {k\left( {x_{i} - a} \right)^{m} } \\ 0 \\ {k\left( { - x_{i} - a} \right)^{m} } \\ \end{array} } \right.\begin{array}{*{20}c} {x_{i} > a} \\ { - a \le x_{i} \le a} \\ {x_{i} < a} \\ \end{array} \hfill \\ \end{aligned}$$
     
  13.

    Cosine mixture Problem (multimodal function)

    $$f_{13} (x) = \sum\limits_{i = 1}^{n} {x_{i}^{2} } - 0.1\sum\limits_{i = 1}^{n} {\cos \left( {5\pi x_{i} } \right)}$$
     
  14.

    Rotated Rastrigin’s Function (multimodal function)

    $$f_{14} (x) = \sum\limits_{i = 1}^{n} {\left[ {y_{i}^{2} - 10\cos (2\pi y_{i} ) + 10} \right], \quad y = M \times x}$$
     
  15.

    Rotated Salomon Function (multimodal function)

    $$f_{15} (x) = 1 - \cos \left( {2\pi \sqrt {\sum\limits_{i = 1}^{n} {y_{i}^{2} } } } \right) + 0.1\sqrt {\sum\limits_{i = 1}^{n} {y_{i}^{2} } }, \quad y = M \times x$$
     
  16.

    Rotated Rosenbrock Function (multimodal function)

    $$f_{16} (x) = \sum\limits_{i = 1}^{n - 1} {\left[ {100\left( {y_{i}^{2} - y_{i + 1} } \right)^{2} + \left( {y_{i} - 1} \right)^{2} } \right], \quad y = M \times x}$$
     
  17.

    Rotated Elliptic Function (unimodal function)

    $$f_{17} (x) = \sum\limits_{i = 1}^{n} {\left( {10^{6} } \right)^{{{{\left( {i - 1} \right)} \mathord{\left/ {\vphantom {{\left( {i - 1} \right)} {\left( {n - 1} \right)}}} \right. \kern-0pt} {\left( {n - 1} \right)}}}} y_{i}^{2}, \quad y = M \times x}$$
     
  18.

    Shifted Schwefel’s Problem 2.21 (unimodal function)

    $$\begin{aligned} &f_{18} (x) = \mathop {\hbox{max} }\limits_{i} \left\{ {\left| {y_{i} } \right|,1 \le i \le n} \right\} + fbias_{18}, \quad y = x - o \hfill \\&\quad {\text{where}}\quad {\kern 1pt} fbias_{18} = - 450. \hfill \\ \end{aligned}$$
     
  19.

    Shifted Rotated Ackley’s Function (multimodal function)

    $$\begin{aligned} & f_{19} (x) = - 20\exp \left( { - 0.2\sqrt {\frac{1}{n}\sum\limits_{i = 1}^{n} {z_{i}^{2} } } } \right) - \exp \left( {\frac{1}{n}\sum\limits_{i = 1}^{n} {\cos 2\pi z_{i} } } \right) + 20 + e + fbias_{19} \hfill \\& \quad{\text{where}}\quad {\kern 1pt} fbias_{19} = - 140, \quad z = \left( {x - o} \right) \times M^{\prime } \hfill \\ \end{aligned}$$
     
  20.

    Shifted Rotated Weierstrass Function (multimodal function)

    $$\begin{aligned} & f_{20} (x) = \sum\limits_{i = 1}^{n} \left( \sum\limits_{k = 0}^{k_{\max } } \left[ a^{k} \cos \left( 2\pi b^{k} \left( z_{i} + 0.5 \right) \right) \right] \right) - n\sum\limits_{k = 0}^{k_{\max } } \left[ a^{k} \cos \left( 2\pi b^{k} \times 0.5 \right) \right] + fbias_{20} \hfill \\ &\quad {\text{where}}\quad a = 0.5,\quad b = 3,\quad k_{\max } = 20,\quad fbias_{20} = 90,\quad z = \left( x - o \right) \times M^{\prime } \hfill \\ \end{aligned}$$
     

Experimental analysis

Validity of the proposed strategy

To validate the proposed strategy, the PSO-API and LPSO-API algorithms are implemented in MATLAB 2011a and compared with the PSO and LPSO algorithms. All twenty benchmarks are tested in the experiments. Parameter settings of the four algorithms are as follows: the population size is 30; c 1 and c 2 are both equal to 2 in the PSO and LPSO algorithms, and c is equal to 2 in the PSO-API and LPSO-API algorithms; ω is equal to 0.7 in the PSO and PSO-API algorithms and follows the suggested linearly decreasing version of section “Linearly decreasing inertia weight” in the LPSO and LPSO-API algorithms (Shi and Eberhart 1999). Dimensions of 20, 30 and 50 are adopted, the number of generations is 5000, and 20 independent trials are run on each problem. Tables 7, 9 and 11 in “Appendix” compare the 20-, 30- and 50-dimensional results in terms of average best fitness (AVE), rank (Rank) of average best fitness, median best fitness (MED), standard deviation (SD), average rank (AR) and final rank (FR) of average best fitness.

Wilcoxon’s rank sum test is commonly used to analyze whether two data sets are statistically different from each other; the \(p{\text{ value}}\) (p), \({\text{h-value}}\) (h) and \({\text{zval}}\) (z) are acquired from the test. A significance level must be set, and a level of 0.05 means a difference is declared significant with 95 % confidence. The \({\text{h-value}}\) takes only three values, 1, 0 and −1, indicating that the proposed algorithm performs significantly better than, the same as, or significantly worse than the compared algorithm, respectively (Beheshti et al. 2013). Tables 8, 10 and 12 in “Appendix” compare the 20-, 30- and 50-dimensional results of Wilcoxon’s rank sum test; the last three rows of these tables list how many times the \({\text{h-value}}\) equals 1, 0 or −1. Note that the best results for each benchmark function are marked in bold in Tables 7–12.
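
As an aside, the h-value decision described above can be reproduced with SciPy's rank-sum test (assuming SciPy is available; the function name and sample data below are illustrative):

```python
from scipy.stats import ranksums

def h_value(proposed, compared, alpha=0.05):
    """Return 1 / 0 / -1: proposed is significantly better than / indistinguishable from /
    significantly worse than compared (minimization, so smaller fitness is better)."""
    z, p = ranksums(proposed, compared)
    if p >= alpha:
        return 0
    return 1 if z < 0 else -1  # z < 0 means the proposed results hold the lower ranks

# clearly separated fitness samples from two hypothetical algorithms
h = h_value([0.10, 0.12, 0.15, 0.18, 0.20], [0.90, 1.00, 1.05, 1.10, 1.20])
```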

From the 20-, 30- and 50-dimensional results in Tables 7, 9 and 11, it is clear that the LPSO-API algorithm obtains the minimum AVE on twelve, fourteen and fifteen of the twenty benchmark problems, respectively, while the PSO-API algorithm obtains the minimum AVE on ten, ten and eight problems, respectively. The LPSO-API and PSO-API algorithms thus obtain more minimum AVE results than the LPSO and PSO algorithms over the benchmark suite. It is worth pointing out that the LPSO-API and PSO-API algorithms also reach the global optimum on several problems. The numbers of best AVE values obtained by the four algorithms are shown in Fig. 3.
Fig. 3

Number of best AVE obtained by four algorithms

For all three dimensionalities, the LPSO-API algorithm takes first place in the final rank and the PSO-API algorithm takes second. The final rank reflects the comprehensive performance of an algorithm over a suite of benchmark problems, and from it the LPSO-API and PSO-API algorithms are clearly superior to the LPSO and PSO algorithms in solution quality.

From the data in Tables 8, 10 and 12, the number of cases with \({\text{h-value = 1}}\) is 16, 17 and 17 for the PSO-API algorithm and 13, 14 and 17 for the LPSO-API algorithm on the 20-, 30- and 50-dimensional problems, and only a few cases with \({\text{h-value = }} - 1\) or \({\text{h-value = }}0\) exist. This means that the results of the LPSO-API and PSO-API algorithms statistically significantly outperform those of the PSO and LPSO algorithms. Moreover, comparing the number of \({\text{h-value = 1}}\) cases across the 20-, 30- and 50-dimensional problems, we can see that the higher the dimension, the larger this number becomes for the LPSO-API and PSO-API algorithms, which indicates that they perform relatively better on high-dimensional problems. From the above analysis, the proposed strategy of using all personal-best information is valid and efficient for solving most optimization problems, especially in high dimensions.

Six representative benchmark problems, two unimodal problems \(\, f_{1} (x)\) and f 5(x), two multimodal problems f 7(x) and \(f_{ 1 1} (x)\), a rotated problem \(f_{ 1 4} (x)\) and a shifted problem \(f_{ 1 8} (x)\), are chosen to describe the fitness evolution process. The evolution of the average fitness on these six problems is shown in Figs. 4a–f, 5a–f and 6a–f, respectively; note that the vertical axis shows the logarithm of the average fitness. These figures clearly show that the PSO-API and LPSO-API algorithms obtain better solutions with a fast convergence speed.
Fig. 4 Evolution curves (20 dimensions). a \(f_{1} (x)\), b \(f_{5} (x)\), c \(f_{7} (x)\), d \(f_{11} (x)\), e \(f_{14} (x)\), f \(f_{18} (x)\)

Fig. 5 Evolution curves (30 dimensions). a \(f_{1} (x)\), b \(f_{5} (x)\), c \(f_{7} (x)\), d \(f_{11} (x)\), e \(f_{14} (x)\), f \(f_{18} (x)\)

Fig. 6 Evolution curves (50 dimensions). a \(f_{1} (x)\), b \(f_{5} (x)\), c \(f_{7} (x)\), d \(f_{11} (x)\), e \(f_{14} (x)\), f \(f_{18} (x)\)

Comparison experiments with other PSO variants

In the recent literature, various other PSO algorithms have been developed that also perform well in numerical experiments. To compare with these algorithms, seven PSO variants (PSO-cf (Kennedy and Mendes 2002), FIPS (Mendes et al. 2004), HPSO-TVAC (Ratnaweera et al. 2004), VPSO (Kennedy and Mendes 2006), DMS-PSO (Liang and Suganthan 2005), CLPSO (Liang et al. 2006) and APSO (Zhan et al. 2009)) are introduced to optimize ten benchmark functions, namely f 1(x), f 2(x), f 3(x), f 5(x), f 6(x), f 7(x), f 8(x), f 9(x), f 10(x) and f 12(x) from section “Test benchmark functions”. Table 2 shows their parameter settings; their results are taken from the corresponding paper (Zhan et al. 2009). The number of generations is 2 × 105, the dimension is 30 and the population size is 20. Each problem is optimized over 30 independent runs. The parameter settings of PSO-API and all other settings are identical to those in the last section. The comparisons of these PSO algorithms are shown in Table 3 in terms of average best fitness (Best), standard deviation (SD), rank (Rank), average rank (AR) and final rank (FR) of average best fitness. Note that the best results for each benchmark function are marked in bold in Table 3.
Table 2

Parameters settings of PSO variants

| PSO variant | Topology | Parameters settings |
|---|---|---|
| PSO-cf | Local ring | ω: 0.9–0.4, c1 = c2 = 2.0 |
| FIPS | Local ring | χ = 0.729, Σci = 4.1 |
| HPSO-TVAC | Global star | ω: 0.9–0.4, c1: 2.5–0.5, c2: 0.5–2.5 |
| DMS-PSO | Dynamic multi-swarm | ω: 0.9–0.4, m = 3, R = 5 |
| VPSO | Local Von Neumann | ω: 0.9–0.4, c1 = c2 = 2.0 |
| CLPSO | Comprehensive learning | ω: 0.9–0.4, C = 1.49455, m = 7 |
| APSO | Global star | ω: 0.9, c1 = c2 = 2.0, δ: random in [0.05, 0.1], σ: 1–0.1 |

Table 3

Numerical results for the comparisons (best results in bold)

| Name | Metric | PSO-cf | FIPS | HPSO-TVAC | DMS-PSO | VPSO | CLPSO | APSO | PSO-API |
|---|---|---|---|---|---|---|---|---|---|
| \(f_{1} (x)\) | Best | 4.77e−29 | 3.21e−30 | 3.38e−41 | 3.85e−54 | 5.11e−38 | 1.89e−19 | 1.45e−150 | **0.00** |
| | SD | 1.13e−28 | 1.91e−30 | 8.50e−41 | 1.75e−53 | 1.91e−37 | 1.49e−19 | 5.73e−150 | 0.00 |
| | Rank | 7 | 6 | 4 | 3 | 5 | 8 | 2 | 1 |
| \(f_{2} (x)\) | Best | 2.03e−20 | 1.32e−17 | 6.9e−23 | 2.61e−29 | 6.29e−27 | 1.01e−13 | 5.15e−84 | **3.95e−323** |
| | SD | 2.89e−20 | 7.86e−18 | 6.89e−23 | 6.6e−29 | 8.68e−27 | 6.51e−14 | 1.44e−83 | 5.13e−322 |
| | Rank | 6 | 7 | 5 | 3 | 4 | 8 | 2 | 1 |
| \(f_{3} (x)\) | Best | 18.60 | 0.77 | 2.89e−7 | 47.5 | 1.44 | 395 | 1.0e−10 | **0.00** |
| | SD | 30.71 | 0.86 | 2.97e−7 | 56.4 | 1.55 | 142 | 2.13e−10 | 0.00 |
| | Rank | 6 | 4 | 3 | 7 | 5 | 8 | 2 | 1 |
| \(f_{5} (x)\) | Best | **0.00** | **0.00** | **0.00** | **0.00** | **0.00** | **0.00** | **0.00** | **0.00** |
| | SD | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| | Rank | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| \(f_{6} (x)\) | Best | 1.49e−2 | **2.55e−3** | 5.54e−2 | 1.1e−2 | 1.08e−2 | 3.92e−3 | 4.66e−3 | 5.88e−1 |
| | SD | 5.66e−3 | 6.25e−4 | 2.08e−2 | 3.94e−3 | 3.24e−3 | 1.14e−3 | 1.7e−3 | 2.73e−1 |
| | Rank | 6 | 1 | 7 | 5 | 4 | 2 | 3 | 8 |
| \(f_{7} (x)\) | Best | 34.90 | 29.98 | 2.39 | 28.1 | 34.09 | 2.57e−11 | 5.8e−15 | **0.00** |
| | SD | 7.25 | 10.92 | 3.71 | 6.42 | 8.07 | 6.64e−11 | 1.01e−14 | 0.00 |
| | Rank | 8 | 6 | 4 | 5 | 7 | 3 | 2 | 1 |
| \(f_{8} (x)\) | Best | 30.40 | 21.33 | 35.91 | 1.83 | 32.8 | 0.167 | 4.14e−16 | **0.00** |
| | SD | 9.23 | 9.46 | 9.49 | 2.65 | 6.49 | 0.397 | 1.45e−15 | 0.00 |
| | Rank | 6 | 5 | 8 | 4 | 7 | 3 | 2 | 1 |
| \(f_{9} (x)\) | Best | 1.85e−14 | 7.69e−15 | 2.06e−10 | 8.52e−15 | 1.14e−14 | 2.01e−12 | 1.11e−14 | **3.55e−15** |
| | SD | 4.80e−15 | 9.33e−16 | 9.45e−10 | 1.79e−15 | 3.48e−15 | 9.22e−13 | 3.55e−15 | 0.00 |
| | Rank | 6 | 2 | 8 | 3 | 5 | 7 | 4 | 1 |
| \(f_{10} (x)\) | Best | 1.10e−2 | 9.04e−4 | 1.07e−2 | 1.31e−2 | 1.31e−2 | 6.45e−13 | 1.67e−2 | **0.00** |
| | SD | 1.60e−2 | 2.78e−3 | 1.14e−2 | 1.73e−2 | 1.35e−2 | 2.07e−12 | 2.41e−2 | 0.00 |
| | Rank | 5 | 3 | 4 | 7 | 6 | 2 | 8 | 1 |
| \(f_{12} (x)\) | Best | 2.18e−30 | 1.22e−31 | 7.07e−30 | **2.05e−32** | 3.46e−3 | 1.59e−21 | 3.76e−31 | 9.72e−2 |
| | SD | 5.14e−30 | 4.85e−32 | 4.05e−30 | 8.12e−33 | 1.89e−2 | 1.93e−21 | 1.2e−30 | 1.97e−2 |
| | Rank | 4 | 2 | 5 | 1 | 7 | 6 | 3 | 8 |
| | AR | 5.5 | 3.7 | 4.9 | 3.9 | 5.1 | 4.8 | 2.9 | 2.4 |
| | FR | 8 | 3 | 6 | 4 | 7 | 5 | 2 | 1 |

From Table 3, the Rank data demonstrate that the PSO-API algorithm obtains the best results on f 1(x), f 2(x), f 3(x), f 5(x), f 7(x), f 8(x), f 9(x) and f 10(x) and performs worst on f 6(x) and f 12(x). Table 3 also shows that the FR obtained by the PSO-API algorithm is better than those obtained by the other seven PSO variants, so it can be concluded that the PSO-API algorithm has the highest comprehensive performance among them. Consequently, the comparisons indicate that the PSO-API algorithm has the best overall performance among several existing PSO variants and is an effective method for solving a variety of optimization problems.
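The AR and FR aggregates used throughout these tables are simple bookkeeping over the per-function ranks; a small sketch with illustrative rank values (not the paper's full Table 3 data):

```python
# Per-function ranks (1 = best) for three algorithms over five benchmarks;
# values are illustrative only, not the paper's actual results.
ranks = {
    "PSO-API": [1, 1, 1, 1, 8],
    "APSO":    [2, 2, 2, 4, 5],
    "CLPSO":   [8, 8, 8, 1, 2],
}

# AR: average of an algorithm's ranks over all benchmarks.
ar = {name: sum(r) / len(r) for name, r in ranks.items()}

# FR: ascending order of AR (1 = best comprehensive performance).
fr = {name: i + 1 for i, name in enumerate(sorted(ar, key=ar.get))}

print(ar)  # {'PSO-API': 2.4, 'APSO': 3.0, 'CLPSO': 5.4}
print(fr)  # {'PSO-API': 1, 'APSO': 2, 'CLPSO': 3}
```

This is why an algorithm can lose on individual functions (a rank of 8 on one benchmark) yet still take FR = 1 overall.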

The time complexity of the algorithm should also be considered, so a computational experiment with six PSO variants [PSO-cf (Kennedy and Mendes 2002), FIPS (Mendes et al. 2004), DMS-PSO (Liang and Suganthan 2005), CLPSO (Liang et al. 2006), LPSO (Shi and Eberhart 1998c) and PSO-API] is performed over 20 independent runs and the execution times of these algorithms are compared. In this experiment, the parameter settings of these algorithms are the same as in Table 2. The population size, dimension and number of generations are 20, 30 and 3000, respectively. Table 4 lists the CPU times (in seconds) of the six PSO algorithms. In Table 4, ‘AV(CPU)’ and ‘Rank’ stand for the average CPU time over 20 runs and the ascending order of each ‘AV(CPU)’, respectively. ‘AR’ and ‘FR’ stand for the average rank of Rank and the ascending order of AR, respectively.
Table 4

Computational time of six PSO algorithms (each entry: AV(CPU)/Rank)

| Function | PSO-cf | FIPS | DMS-PSO | CLPSO | LPSO | PSO-API |
|---|---|---|---|---|---|---|
| \(f_{1} (x)\) | 6.09e−001/1 | 4.19e+000/6 | 4.04e+000/5 | 3.60e+000/4 | 7.59e−001/2 | 1.28e+000/3 |
| \(f_{2} (x)\) | 1.76e+000/3 | 4.10e+000/5 | 4.85e+000/6 | 3.55e+000/4 | 9.99e−001/1 | 1.51e+000/2 |
| \(f_{3} (x)\) | 1.01e+001/2 | 1.16e+001/3 | 1.28e+001/4 | 9.95e+000/1 | 1.44e+001/5 | 1.61e+001/6 |
| \(f_{4} (x)\) | 3.06e+000/3 | 4.19e+000/5 | 5.50e+000/6 | 3.67e+000/4 | 1.13e+000/1 | 1.66e+000/2 |
| \(f_{5} (x)\) | 3.31e+000/3 | 4.12e+000/6 | 4.11e+000/5 | 3.78e+000/4 | 8.22e−001/1 | 1.27e+000/2 |
| \(f_{6} (x)\) | 5.78e+000/1 | 7.04e+000/4 | 8.51e+000/6 | 6.79e+000/3 | 6.51e+000/2 | 7.35e+000/5 |
| \(f_{7} (x)\) | 3.17e+000/3 | 4.49e+000/5 | 4.56e+000/6 | 3.98e+000/4 | 1.03e+000/1 | 1.46e+000/2 |
| \(f_{8} (x)\) | 5.05e+000/2 | 6.59e+000/5 | 6.96e+000/6 | 5.89e+000/4 | 4.72e+000/1 | 5.31e+000/3 |
| \(f_{9} (x)\) | 4.83e+000/3 | 6.31e+000/5 | 7.23e+000/6 | 5.66e+000/4 | 3.18e+000/1 | 3.89e+000/2 |
| \(f_{10} (x)\) | 4.21e+000/1 | 6.53e+000/5 | 6.83e+000/6 | 6.16e+000/4 | 4.74e+000/2 | 5.20e+000/3 |
| \(f_{11} (x)\) | 5.09e+001/2 | 5.17e+001/3 | 6.52e+001/4 | 4.78e+001/1 | 9.52e+001/5 | 9.80e+001/6 |
| \(f_{12} (x)\) | 6.19e+000/1 | 1.22e+001/3 | 1.25e+001/4 | 1.12e+001/2 | 1.80e+001/6 | 1.75e+001/5 |
| \(f_{13} (x)\) | 4.52e−002/1 | 3.98e+000/6 | 3.91e+000/5 | 3.51e+000/4 | 1.07e+000/2 | 1.46e+000/3 |
| \(f_{14} (x)\) | 3.66e+000/3 | 4.78e+000/5 | 4.97e+000/6 | 4.24e+000/4 | 2.51e+000/1 | 2.94e+000/2 |
| \(f_{15} (x)\) | 3.81e+000/3 | 4.90e+000/5 | 5.00e+000/6 | 4.47e+000/4 | 2.72e+000/1 | 3.15e+000/2 |
| \(f_{16} (x)\) | 4.49e+000/2 | 5.57e+000/5 | 6.62e+000/6 | 5.10e+000/4 | 4.14e+000/1 | 4.51e+000/3 |
| \(f_{17} (x)\) | 4.80e+000/3 | 5.90e+000/5 | 1.02e+001/6 | 4.67e+000/1 | 4.79e+000/2 | 5.35e+000/4 |
| \(f_{18} (x)\) | 3.12e−003/1 | 4.60e+000/5 | 5.70e+000/6 | 3.67e+000/4 | 2.19e+000/2 | 2.58e+000/3 |
| \(f_{19} (x)\) | 1.56e−003/1 | 5.84e+000/5 | 6.25e+000/6 | 4.35e+000/3 | 4.13e+000/2 | 4.57e+000/4 |
| \(f_{20} (x)\) | 2.65e+001/2 | 2.76e+001/3 | 3.25e+001/4 | 2.00e+001/1 | 4.99e+001/6 | 4.95e+001/5 |
| AR | 2.05 | 4.7 | 5.45 | 3.2 | 2.25 | 3.35 |
| FR | 1 | 5 | 6 | 3 | 2 | 4 |

Comparing the Rank values of the LPSO and PSO-API algorithms, we can conclude that adding the proposed policy to the original PSO increases the computational time. In Table 4, AR reflects the comprehensive time consumption of each algorithm over the twenty benchmarks. The AR values of PSO-cf and LPSO are the smallest among the six algorithms, which illustrates that PSO-cf and LPSO have the best CPU times. The AR values of the PSO-API algorithm and CLPSO are very close to each other, demonstrating that they have a similar overall time consumption, while the AR values of FIPS and DMS-PSO are 4.7 and 5.45, both worse than that of the PSO-API algorithm. Although the PSO-API algorithm only ranks fourth in FR, the extra time is worth spending to improve the accuracy of the PSO algorithm. It is clear from the above comparisons of accuracy and time consumption that the PSO-API algorithm strikes a good balance between performance and time complexity.
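The AV(CPU) figures above are averages of wall-clock execution time over repeated independent runs. A minimal sketch of that measurement loop (the `dummy_run` workload below is a stand-in for one full optimization run, not the paper's implementation):

```python
import time

def average_cpu_time(optimizer, runs=20):
    """Average wall-clock time of `optimizer()` over independent runs."""
    total = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        optimizer()  # one complete optimization run
        total += time.perf_counter() - start
    return total / runs

# Stand-in workload playing the role of one full optimization run.
def dummy_run():
    s = 0.0
    for i in range(10_000):
        s += i * i

avg = average_cpu_time(dummy_run, runs=5)
print(f"AV(CPU) = {avg:.3e} s")
```

Ranking the resulting averages per benchmark, then averaging those ranks, yields the AR and FR columns of Table 4.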

Comparison experiments with similar PSO algorithms

In order to compare with FSS (Carmelo Filho et al. 2008) and CenterPSO (Liu et al. 2007), several experiments are carried out in this section. To compare with the FSS algorithm, the experiment settings are as follows: five benchmarks with 30 dimensions are used to assess the algorithms, namely the Generalized Rosenbrock function and f 3(x), f 7(x), f 9(x), f 10(x) from section “Test benchmark functions”. The Generalized Rosenbrock function is denoted f 21(x), and its expression is as follows.
$$f_{21} (x) = \sum\limits_{i = 1}^{n - 1} {\left( {100\left( {x_{i + 1} - x_{i}^{2} } \right)^{2} + \left( {x_{i} - 1} \right)^{2} } \right)} \quad ( - 100 \le x_{i} \le 100)$$
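In code, f 21(x) follows directly from the summation above; a straightforward sketch:

```python
def rosenbrock(x):
    """Generalized Rosenbrock function f21; global minimum 0 at x = (1, ..., 1)."""
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1.0) ** 2
               for i in range(len(x) - 1))

print(rosenbrock([1.0] * 30))  # 0.0 at the global optimum
print(rosenbrock([0.0] * 30))  # 29.0: each of the 29 terms contributes 1
```

Its narrow curved valley is what makes f 21(x) hard: algorithms easily reach the valley floor but converge slowly along it, which is consistent with the moderate values reported for it in Tables 5 and 6.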
The population size of PSO-API is set to 30, 30 runs are conducted for each problem and each run performs 1 × 104 generations. To compare with the CenterPSO algorithm, the experiment settings are as follows: three benchmarks f 7(x), f 10(x) and f 21(x) with 30 dimensions are used; the number of generations is 2000; four population sizes of 20, 40, 80 and 160 are tested; and each experiment performs 100 runs. The average best fitness (Avg. best fitness) and standard deviation (SD) of PSO-API, FSS and CenterPSO are presented in Tables 5 and 6. It is worth noting that the better results are marked in bold in Tables 5 and 6.
Table 5

Comparison results with FSS algorithm (each entry: Avg. best fitness/SD; better results in bold)

| Name | PSO-API | FSS |
|---|---|---|
| \(f_{3} (x)\) | **3.883e−090/9.563e−090** | 8.080e−002/2.200e−002 |
| \(f_{7} (x)\) | **0.000e+000/0.000e+000** | 1.338e+001/4.005e+000 |
| \(f_{9} (x)\) | **3.789e−015/9.013e−016** | 4.000e−002/2.000e−002 |
| \(f_{10} (x)\) | **0.000e+000/0.000e+000** | 2.700e−003/2.000e−003 |
| \(f_{21} (x)\) | 2.635e+001/3.145e−001 | **1.611e+001/7.290e−001** |

Table 6

Comparison results with CenterPSO algorithm (each entry: Avg. best fitness/SD; better results in bold)

| Name | Size | PSO-API | CenterPSO |
|---|---|---|---|
| \(f_{7} (x)\) | 20 | **0.000e+000/0.000e+000** | 3.359e+001/9.562e+000 |
| | 40 | **0.000e+000/0.000e+000** | 2.668e+001/7.764e+000 |
| | 80 | **0.000e+000/0.000e+000** | 2.276e+001/6.758e+000 |
| | 160 | **2.020e−010/2.020e−009** | 2.141e+001/5.949e+000 |
| \(f_{10} (x)\) | 20 | **2.311e−004/1.711e−003** | 1.200e−002/1.650e−002 |
| | 40 | **7.841e−005/7.841e−004** | 8.800e−003/1.190e−002 |
| | 80 | **8.442e−006/8.442e−005** | 9.300e−003/1.200e−002 |
| | 160 | **7.308e−015/7.151e−014** | 1.200e−002/1.680e−002 |
| \(f_{21} (x)\) | 20 | **2.702e+001/3.831e−001** | 1.319e+002/1.358e+002 |
| | 40 | **2.649e+001/3.276e−001** | 8.717e+001/6.365e+001 |
| | 80 | **2.626e+001/1.987e−001** | 6.234e+001/5.940e+001 |
| | 160 | **2.601e+001/2.379e−001** | 4.299e+001/4.499e+001 |

From the data in Table 5, it can be seen that PSO-API obtains a better average best fitness and standard deviation than the FSS algorithm on all five benchmarks except the Generalized Rosenbrock function, for which PSO-API and FSS obtain results of the same order of magnitude. From Table 6, the results obtained by PSO-API with all population sizes are better than those obtained by the CenterPSO algorithm on all three benchmarks. Therefore, the statistical analysis indicates that the proposed algorithm performs better than the FSS and CenterPSO algorithms. Taken together, the above experiments indicate that PSO-API is a high-performance algorithm on most of the benchmarks.

Conclusions

In this work, to make full use of the multi-information characteristics of all personal-best information, an improved PSO algorithm using three positions derived from all personal-best information has been developed to enhance performance. In the proposed algorithm, an improved cognition term using the personal-best position, the centroid position and the median position is introduced into the velocity update process of PSO. To validate this strategy, a set of benchmark functions including unimodal, multimodal, rotated and shifted functions with 20, 30 and 50 dimensions has been optimized. Experimental results show that using the multi-information characteristics of all personal-best information is a valid strategy for improving PSO's performance. Moreover, the PSO-API algorithm has also been compared with several PSO variants and with algorithms similar to the proposed one. Numerical results show that the PSO-API algorithm has higher precision and satisfactory performance. To sum up, the proposed strategy enhances the search ability of PSO, and the PSO-API algorithm is an efficient PSO variant that obtains promising solutions for most of the benchmark functions.
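As a rough one-dimensional illustration of the mechanism summarized above, the modified cognition term can be sketched as follows. This is a simplified sketch, not the paper's exact update equation: the equal weighting of the three personal-best-derived positions and the inertia/acceleration values are assumptions made here for illustration.

```python
import random

def velocity_update(v, x, pbest_i, pbests, gbest,
                    w=0.729, c1=1.49445, c2=1.49445):
    """Sketch of a velocity update whose cognition term mixes the particle's
    own personal best with the centroid and median of ALL personal bests."""
    centroid = sum(pbests) / len(pbests)             # centroid of all pbests
    median = sorted(pbests)[len(pbests) // 2]        # median of all pbests
    cognition = (pbest_i + centroid + median) / 3.0  # assumed equal weighting
    r1, r2 = random.random(), random.random()
    return w * v + c1 * r1 * (cognition - x) + c2 * r2 * (gbest - x)

random.seed(1)
pbests = [0.2, 0.5, 0.9, 1.4, 2.0]  # hypothetical personal bests of a swarm
v_new = velocity_update(v=0.1, x=1.0, pbest_i=0.5, pbests=pbests, gbest=0.2)
print(v_new)
```

The point of the design is that the centroid and median aggregate information from the whole swarm's history, so the cognition term pulls a particle toward a consensus of all personal bests rather than toward its own memory alone.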

Declarations

Authors’ contributions

S.H. carried out the study, collected the data, designed the experiments, implemented the simulation, analyzed the data and wrote the main manuscript. N.T. provided intellectual input and revised the manuscript. Y.W. gave technical support and helped with the design of the study. Z.J. provided general supervision of the research. All authors read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests in this study.

Funding

In this work, the design of the study and collection, analysis, and interpretation of data are funded by the National High-tech Research and Development Projects of China under Grant No: 2014AA041505 and the writing of the manuscript is funded by the National Natural Science Foundation of China under Grant No: 61572238 and by the Provincial Outstanding Youth Foundation of Jiangsu Province under Grant No: BK20160001.

Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
School of Internet of Things Engineering, Jiangnan University
(2)
Engineering Research Center of Internet of Things Technology Applications Ministry of Education, Jiangnan University
(3)
Department of Educational Technology, Jiangnan University

References

  1. Beheshti Z, Shamsuddin SMH, Hasan S (2013) MPSO: median-oriented particle swarm optimization. Appl Math Comput 219(11):5817–5836
  2. Beielstein T, Parsopoulos KE, Vrahatis MN (2002) Tuning PSO parameters through sensitivity analysis. Universität Dortmund
  3. Bonyadi MR, Li X, Michalewicz Z (2014) A hybrid particle swarm with a time-adaptive topology for constrained optimization. Swarm Evol Comput 18:22–37
  4. Carmelo Filho JA, De Lima Neto FB, Lins AJCC et al (2008) A novel search algorithm based on fish school behavior. In: Proceedings of the 2008 IEEE international conference on systems, man and cybernetics, pp 2646–2651
  5. Cheng R, Jin Y (2015) A social learning particle swarm optimization algorithm for scalable optimization. Inf Sci 291:43–60
  6. Deep K, Thakur M (2007) A new crossover operator for real coded genetic algorithms. Appl Math Comput 188(1):895–911
  7. Eberhart RC, Kennedy J (1995) A new optimizer using particle swarm theory. In: Proceedings of the sixth international symposium on micro machine and human science, vol 1, pp 39–43
  8. Haklı H, Uğuz H (2014) A novel particle swarm optimization algorithm with Levy flight. Appl Soft Comput 23:333–345
  9. Hu W, Yen GG (2015) Adaptive multiobjective particle swarm optimization based on parallel cell coordinate system. IEEE Trans Evol Comput 19(1):1–18
  10. Kennedy J, Eberhart R (1995) Particle swarm optimization. In: Proceedings of the IEEE international conference on neural networks, Perth, Australia, vol 4, pp 1942–1948
  11. Kennedy J, Mendes R (2002) Population structure and particle swarm performance. In: Proceedings of the 2002 congress on evolutionary computation, vol 2, pp 1671–1676
  12. Kennedy J, Mendes R (2006) Neighborhood topologies in fully informed and best-of-neighborhood particle swarms. IEEE Trans Syst Man Cybern C Appl Rev 36(4):515–519
  13. Li Y, Zhan ZH, Lin S et al (2015) Competitive and cooperative particle swarm optimization with information sharing mechanism for global optimization problems. Inf Sci 293:370–382
  14. Liang JJ, Suganthan PN (2005) Dynamic multi-swarm particle swarm optimizer. In: Proceedings of the 2005 IEEE swarm intelligence symposium, pp 124–129
  15. Liang JJ, Qin AK, Suganthan PN et al (2006) Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Trans Evol Comput 10(3):281–295
  16. Lim WH, Isa NAM (2014a) An adaptive two-layer particle swarm optimization with elitist learning strategy. Inf Sci 273:49–72
  17. Lim WH, Isa NAM (2014b) Particle swarm optimization with adaptive time-varying topology connectivity. Appl Soft Comput 24:623–642
  18. Lim WH, Isa NAM (2014c) Particle swarm optimization with increasing topology connectivity. Eng Appl Artif Intell 27:80–102
  19. Lim WH, Isa NAM (2014d) Bidirectional teaching and peer-learning particle swarm optimization. Inf Sci 280:111–134
  20. Lim WH, Isa NAM (2014e) Teaching and peer-learning particle swarm optimization. Appl Soft Comput 18:39–58
  21. Liu Y, Qin Z, Shi Z et al (2007) Center particle swarm optimization. Neurocomputing 70(4):672–679
  22. Mendes R, Kennedy J, Neves J (2004) The fully informed particle swarm: simpler, maybe better. IEEE Trans Evol Comput 8(3):204–210
  23. Qin Q, Cheng S, Zhang Q et al (2015) Multiple strategies based orthogonal design particle swarm optimizer for numerical optimization. Comput Oper Res 60:91–110
  24. Rao RV, Patel V (2013) An improved teaching-learning-based optimization algorithm for solving unconstrained optimization problems. Scientia Iranica 20(3):710–720
  25. Ratnaweera A, Halgamuge SK, Watson HC (2004) Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients. IEEE Trans Evol Comput 8(3):240–255
  26. Shi Y, Eberhart RC (1998a) Parameter selection in particle swarm optimization. In: Evolutionary programming VII, vol 1447. Springer, Berlin, pp 591–600
  27. Shi Y, Eberhart R (1998b) A modified particle swarm optimizer. In: Proceedings of the 1998 IEEE international conference on evolutionary computation, vol 6, pp 69–73
  28. Shi Y, Eberhart R (1998c) A modified particle swarm optimizer. In: Proceedings of the 1998 IEEE international conference on evolutionary computation (IEEE world congress on computational intelligence), pp 69–73
  29. Shi Y, Eberhart RC (1999) Empirical study of particle swarm optimization. Proc IEEE Congr Evol Comput 3:1945–1950
  30. Suganthan PN, Hansen N, Liang JJ et al (2005) Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization. In: Proceedings of IEEE congress on evolutionary computation, pp 1–50
  31. Sun S, Li J (2014) A two-swarm cooperative particle swarms optimization. Swarm Evol Comput 15:1–18
  32. Wang L, Yang B, Chen Y (2014) Improving particle swarm optimization using multi-layer searching strategy. Inf Sci 274:70–94
  33. Yadav A, Deep K (2014) An efficient co-swarm particle swarm optimization for non-linear constrained optimization. J Comput Sci 5(2):258–268
  34. Yao X, Liu Y, Lin G (1999) Evolutionary programming made faster. IEEE Trans Evol Comput 3(2):82–102
  35. Zhan ZH, Zhang J, Li Y et al (2009) Adaptive particle swarm optimization. IEEE Trans Syst Man Cybern B Cybern 39(6):1362–1381
  36. Zhang W, Ma D, Wei J et al (2014) A parameter selection strategy for particle swarm optimization based on particle positions. Exp Syst Appl 41(7):3576–3584

Copyright

© The Author(s) 2016