Modified differential evolution algorithm with onlooker bee operator for mixed discrete-continuous optimization
SpringerPlus volume 5, Article number: 1914 (2016)
Abstract
For solving non-linear programming problems containing discrete and continuous variables, this article proposes two modified algorithms based on differential evolution (DE). The two proposed algorithms incorporate a novel random search strategy into DE/best/1 and DE/cur-to-best/1, respectively. Inspired by the artificial bee colony algorithm, the random search strategy overcomes the search imbalance of DE/best/1 and DE/cur-to-best/1 by enhancing the global exploration capability of promising individuals. Two numerical experiments are given to test the two modified algorithms. Experiment 1 is conducted on the benchmark function set of CEC2005 in order to verify the effectiveness of the improved strategy. Experiment 2 optimizes two mixed discrete-continuous problems to illustrate the competitiveness and practicality of the proposed algorithms. In particular, the modified DE/cur-to-best/1 finds new optima for the two engineering optimization problems.
Background
Among the most commonly used stochastic algorithms, the differential evolution (DE) algorithm proposed by Storn and Price (1995) has been identified as one of the most powerful optimizers. DE is the only such algorithm that has secured a competitive ranking in all optimization competitions at the IEEE International Conferences on Evolutionary Computation (CEC) since 1996 (Das and Suganthan 2011; Elsayed et al. 2011; LaTorre et al. 2011). The competitiveness of DE is also supported by many comparative studies (Civicioglu and Besdok 2013; Wang et al. 2011; Montes and MirandaVarela 2010; Vesterstrom and Thomsen 2004). However, two of its mutation strategies, DE/best/1 and DE/cur-to-best/1, suffer from a search imbalance: they are good at exploitation but poor at exploration, which often causes stagnation when solving complex problems.
Artificial bee colony (ABC), developed by Karaboga (2005), is a novel heuristic algorithm inspired by the foraging behavior of honey bee swarms. In ABC, a food source position represents a possible solution of the optimization problem and the nectar amount of each food source represents its fitness. There is only one bee for each food source. The colony is classified into three groups depending on their duties: employed bees, onlooker bees and scouts. The numbers of employed and onlooker bees are each equal to one half of the population size. Based on the information shared by employed bees, onlooker bees select different food sources with different probabilities and explore their neighborhoods. Several numerical comparisons (Civicioglu and Besdok 2013; Karaboga and Basturk 2007, 2008, 2009) have demonstrated that the performance of the ABC algorithm is competitive with other population-based algorithms, with the additional advantage of employing fewer control parameters.
The evolutionary operators of DE and ABC are similar and, in some ways, complementary. Several recent studies have combined the two algorithms to benefit from their advantages and overcome their drawbacks. Yang et al. (2013) proposed a hybrid ABC-DE algorithm in which employed bees use the mutation and crossover strategies of DE to reinforce their exploration ability, while onlooker bees keep their original updating strategy to retain exploitation capability. Gao et al. (2012) proposed a modified ABC algorithm in which each bee searches only around the best solution of the previous iteration. Gao (2013) gave an accelerated ABC algorithm based on DE for solving the Van der Pol-Duffing oscillator problem. Álvaro et al. (2012) developed a multi-objective ABC/DE algorithm by combining the collective intelligence of honey bee swarms with the properties of the DE algorithm. Many other successful combinations of the two algorithms (Gao and Liu 2011; Li and Yin 2012; Li et al. 2013) have also demonstrated the complementarity of their operators in search ability.
In order to enhance the exploration ability of the DE/best/1 and DE/cur-to-best/1 mutation strategies, the present study proposes two modified DE algorithms with an onlooker bee operator, called mDEOB (i.e., mDEOB/best/1 and mDEOB/cur-to-best/1). Inspired by ABC, the two mDEOB algorithms run the classical mutation and crossover operators of DE followed by a random search guided by an onlooker bee operator. The random search enhances exploration around promising individuals. Two numerical experiments were conducted on the benchmark function set of CEC2005 and a class of engineering design problems, and statistical and comparative analyses were performed on the results.
The rest of this paper is structured as follows: “Classical differential evolution” section briefly introduces the classical DE algorithm; “Modified differential evolution algorithm with onlooker bee operator” section presents and analyzes the proposed mDEOB algorithms; numerical experiments and analyses are then presented in “Numerical experiment” section, followed by conclusions in “Conclusion” section.
Classical differential evolution
DE is typically used for continuous optimization problems. This paper supposes that the objective function to be minimized is \(f({\vec {x}}),\ {\vec {x}}=(x_{1},\ldots ,x_{D})\in \mathfrak {R}^{D}\), and that the feasible solution space is \(\Psi =\prod _{j=1}^{j=D}[L_{j},U_{j}]\). After initialization, the classical DE (Hu et al. 2008, 2013, 2014, 2016; Su and Hu 2013) works through a simple cycle of mutation, crossover and selection operators. The classical DE procedures are described in detail as follows.
Initialization
The first step of DE is the initialization of a population with N D-dimensional potential solutions (individuals) over the optimization search space. We symbolize each individual by \({\vec {x}}_{i}^{g}=(x_{i,1}^{g},x_{i,2}^{g},\ldots ,x_{i,D}^{g}),\) for \(i=1,\ldots ,N,\) where \(g=0,1,\ldots ,g_{max}\) is the current generation and \(g_{max}\) is the maximum number of generations. For the first generation (g = 0), the population should be scaled to cover the optimization search space as much as possible. Initialization is implemented by uniform sampling of potential individuals over the optimization search space. The jth dimension of the ith individual is initialized according to
\(x_{i,j}^{0}=L_{j}+rand(0,1)\cdot (U_{j}-L_{j}),\quad j=1,\ldots ,D,\)
where rand(0, 1) is a uniformly distributed random number confined to the range [0, 1].
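For illustration only, this uniform initialization can be sketched in Python with NumPy as follows; the function and variable names are hypothetical and do not come from the authors' implementation.

```python
import numpy as np

def initialize_population(N, L, U, rng=None):
    """Uniformly sample N individuals inside the box defined by L and U (per dimension)."""
    rng = rng or np.random.default_rng()
    L, U = np.asarray(L, dtype=float), np.asarray(U, dtype=float)
    # x_{i,j}^0 = L_j + rand(0,1) * (U_j - L_j)
    return L + rng.random((N, L.size)) * (U - L)
```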
Mutation operator
After initialization, DE creates a donor vector \({\vec {v}}_{i}^{g}\) corresponding to each individual \({\vec {x}}_{i}^{g}\) in the gth generation through the mutation operator. This article is interested in the following two operators:
DE/best/1:
\({\vec {v}}_{i}^{g}={\vec {x}}_{best}^{g}+F\cdot ({\vec {x}}_{r_{1}}^{g}-{\vec {x}}_{r_{2}}^{g})\)   (1)
DE/current-to-best/1:
\({\vec {v}}_{i}^{g}={\vec {x}}_{i}^{g}+F\cdot ({\vec {x}}_{best}^{g}-{\vec {x}}_{i}^{g})+F\cdot ({\vec {x}}_{r_{1}}^{g}-{\vec {x}}_{r_{2}}^{g})\)   (2)
where \({\vec {x}}_{best}^{g}\) denotes the best individual of the current generation, the indices \(r_{1},r_{2}\in S_{r}=\{1,2,\ldots ,N\} \backslash\) {i} are uniformly random integers, mutually different and distinct from the sequential index i, and \(F\in (0,1]\) is a real parameter, called mutation or scaling factor.
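A minimal Python sketch of the two mutation strategies is given below, assuming the population is a NumPy array of shape (N, D) and that smaller objective values are better; the names are illustrative, not taken from the paper.

```python
import numpy as np

def mutate(pop, f_vals, i, F=0.5, strategy="best/1", rng=None):
    """Produce the donor vector v_i for individual i using Eq. (1) or Eq. (2)."""
    rng = rng or np.random.default_rng()
    N = pop.shape[0]
    best = pop[np.argmin(f_vals)]                      # best individual of the current generation
    r1, r2 = rng.choice([k for k in range(N) if k != i], size=2, replace=False)
    if strategy == "best/1":                           # DE/best/1, Eq. (1)
        return best + F * (pop[r1] - pop[r2])
    # DE/current-to-best/1, Eq. (2)
    return pop[i] + F * (best - pop[i]) + F * (pop[r1] - pop[r2])
```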
If the element values of the donor vector \({\vec {v}}_{i}\) exceed the pre-specified upper or lower bounds, we can change the element values by the periodic mode rule as follows:
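As a hedged illustration, one common wrap-around form of such a periodic repair is sketched below; this is an assumption for illustration and may differ in detail from the rule used in the paper.

```python
import numpy as np

def periodic_repair(v, L, U):
    """Wrap out-of-bound components of the donor vector back into [L, U] periodically.
    This is an assumed form of the rule; the paper's exact equation may differ."""
    L, U = np.asarray(L, dtype=float), np.asarray(U, dtype=float)
    # interior components are unchanged; out-of-bound ones are wrapped around the box
    return L + np.mod(v - L, U - L)
```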
Crossover operator
Following mutation, the crossover operator is applied to further increase the diversity of the population. In crossover, a trial vector, \({\vec {u}}_{i}^{g},\) is generated by binomial crossover, which combines the elements of the target vector, \({\vec {x}}_{i}^{g},\) and the donor vector, \({\vec {v}}_{i}^{g}:\)
\(u_{i,j}^{g}={\left\{ \begin{array}{ll} v_{i,j}^{g}, &{} \text {if } rand_{j}(0,1)\le CR \ \text {or}\ j=j_{rand}\\ x_{i,j}^{g}, &{} \text {otherwise,} \end{array}\right. }\)
where \(CR\in (0,1)\) is the crossover probability and \(j_{rand}\) is a random integer on [1, D].
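Under the same assumptions (NumPy arrays, illustrative names), binomial crossover can be sketched as:

```python
import numpy as np

def binomial_crossover(x, v, CR=0.9, rng=None):
    """Combine the target vector x and the donor vector v into the trial vector u."""
    rng = rng or np.random.default_rng()
    D = x.size
    j_rand = rng.integers(D)              # ensures at least one component is taken from the donor
    mask = rng.random(D) <= CR
    mask[j_rand] = True
    return np.where(mask, v, x)
```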
Selection operator
Finally, the selection operator is employed to retain the most promising individuals in the next generation. The classical DE adopts a simple selection scheme: it compares the objective value of the target individual \({\vec {x}}_{i}^{g}\) with that of the trial individual \({\vec {u}}_{i}^{g}\). If the trial individual reduces the value of the objective function, it is accepted for the next generation; otherwise the target individual is retained in the population. The selection operator is defined as
\({\vec {x}}_{i}^{g+1}={\left\{ \begin{array}{ll} {\vec {u}}_{i}^{g}, &{} \text {if } f({\vec {u}}_{i}^{g})\le f({\vec {x}}_{i}^{g})\\ {\vec {x}}_{i}^{g}, &{} \text {otherwise.} \end{array}\right. }\)   (4)
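The greedy selection of Eq. (4) amounts to a one-line comparison; a minimal sketch with hypothetical names:

```python
def greedy_select(x, u, fx, fu):
    """Keep the trial vector u if it does not worsen the objective value; otherwise keep x."""
    return (u, fu) if fu <= fx else (x, fx)
```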
Modified differential evolution algorithm with onlooker bee operator
The DE/best/1 and DE/cur-to-best/1 algorithms find the global optimum of simple optimization problems (e.g., low-dimensional convex problems) rapidly; however, both may easily become trapped in local optima when solving complex multimodal problems. A population-based optimizer needs both exploration and exploitation capabilities, and since the two requirements are mutually contradictory, they must be well balanced to achieve good performance. The solution search Eqs. (1) and (2), which are used to generate new candidate solutions, are guided by the current best solution; consequently the search ability of the algorithms is good at exploitation but poor at exploration.
In order to promote the balance between the exploitation and exploration capabilities, the present study adapts the ABC algorithm and proposes two modified differential evolution algorithms with an onlooker bee operator, called mDEOB (i.e., mDEOB/best/1 and mDEOB/cur-to-best/1). The mDEOB algorithms enhance DE's exploration ability by adding random searches around promising individuals under the guidance of onlooker bees. The mDEOB algorithms work through the two-stage cycle shown in Fig. 1: the first stage is the classical DE phase (including the DE mutation, crossover and selection operators), and the second stage is the onlooker bee phase inspired by the ABC algorithm. The implementation process of the onlooker bee phase is detailed below.
Algorithmic flowchart
In the ABC algorithm, the artificial bee colony consists of three groups of bees: employed bees, onlookers and scout bees. Each employed bee exploits a food source, brings information about that food source back to the hive and shares it with the onlooker bees waiting there. Each food source is a candidate solution of the problem, and the amount of nectar in a food source represents the quality of the solution, i.e., its fitness value. An onlooker bee chooses a food source (candidate individual) depending on the probability value \(P_i\) associated with the amount of nectar (fitness). The probability \(P_i\) of the individual \({\vec {x}}_i\) is formulated as follows:
\(P_{i}=\dfrac{fit_{i}}{\sum _{j=1}^{N}fit_{j}},\)
where \(fit_{i}\) denotes the fitness of individual \({\vec {x}}_i\).
Note that the fitness should be changed appropriately when solving minimization problems. Here N denotes the number of food sources (population size).
Unlike the original ABC algorithm, the onlooker bees in mDEOB use the following Eq. (6) to generate new candidate solutions.
Obviously, Eq. (6) applies to the selected individual \({\vec {x}}_{i}^{g}\) a perturbation whose center is its own value and whose radius is the difference between two randomly selected individuals. The greedy selection operator [Eq. (4)] is then used to decide whether \({\vec {x}}_{i}^{g}\) or \({\vec {y}}_{i}^{g}\) survives into the next generation. In each mDEOB cycle, N (population size) onlooker bees are sent to choose individuals in roulette-wheel fashion: the greater an individual's fitness, the greater its chance of being perturbed.
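Under the same assumptions as before (NumPy arrays, minimization), the onlooker bee phase could be sketched as follows. The fitness transformation and the random scaling of the perturbation are assumptions consistent with the description above, not a verbatim reproduction of Eq. (6).

```python
import numpy as np

def onlooker_bee_phase(pop, f_vals, objective, rng=None):
    """Onlooker bee phase of mDEOB (a sketch): roulette-wheel selection of individuals,
    a random perturbation centred on the chosen individual, and greedy replacement."""
    rng = rng or np.random.default_rng()
    N = pop.shape[0]
    # ABC-style fitness for minimization (assumed transformation)
    fit = np.where(f_vals >= 0, 1.0 / (1.0 + f_vals), 1.0 + np.abs(f_vals))
    P = fit / fit.sum()                                   # selection probability of each individual
    for _ in range(N):                                    # N onlooker bees per cycle
        i = int(rng.choice(N, p=P))                       # the fitter the individual, the more likely
        r1, r2 = rng.choice([k for k in range(N) if k != i], size=2, replace=False)
        phi = rng.uniform(-1.0, 1.0)                      # assumed random scaling of the perturbation
        y = pop[i] + phi * (pop[r1] - pop[r2])            # perturbation centred on pop[i]
        fy = objective(y)
        if fy <= f_vals[i]:                               # greedy selection, Eq. (4)
            pop[i], f_vals[i] = y, fy
    return pop, f_vals
```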
Algorithmic analyses
Figure 2 illustrates how the onlooker bees change the search process of the algorithms by showing the population distributions at various stages of DE/best/1 and mDEOB/best/1 when solving the third function of CEC2005.
In Fig. 2, it is readily seen that the population distributions have the following characteristics:
-
For the same number of function evaluations (FEs), i.e., sub-figures (a) versus (d), (b) versus (e), and (c) versus (f), the sub-figures (d), (e), (f) associated with mDEOB are more diverse than those associated with DE.
-
In sub-figures (e) and (f), the candidate solutions are located on both sides of the global optimum, whereas they are located on one side in sub-figures (b) and (c). It is well known that the case where candidate solutions are distributed around the global optimum is more conducive to the DE search. Thus, with the help of the onlooker bees, mDEOB generates more useful population distributions than DE alone.
-
In sub-figure (e), many candidate solutions cover the global optimum and the population maintains better diversity; DE/best/1 (shown in sub-figure (c)) fails to do so. The figures illustrate how this population diversity expedites more accurate solutions than are achievable using DE alone.
In summary, the numerical results indicate that the modified strategy enhances the diversity of evolving populations, leading to improved global searching of the solution space. This mitigates to some extent the disadvantages caused by the search imbalance in the DE/best/1 and DE/cur-to-best/1 algorithms.
Parameter settings for Fig. 2: population size \(N = 30\), problem dimension \(D = 2\), mutation factor \(F = 0.5\), crossover probability \(CR = 0.9\). The global optimum of the third function (2-dimensional) of CEC2005 is \({\vec {x}} = (-32.2013,\ 64.9776)\) with function value \(f({\vec {x}}) = 0\).
Numerical experiment
In order to test mDEOB/best/1 and mDEOB/cur-to-best/1 and show their performance, two numerical experiments are given in this section. One is conducted on the benchmark function set of CEC2005; the other is a group of application-oriented numerical examples involving two mixed discrete-continuous optimization problems.
Experiment 1: test on CEC2005
This subsection deals with the following tasks:
-
Two comparative experiments on the benchmark function set of CEC2005 are conducted for mDEOB/best/1 versus DE/best/1 and mDEOB/cur-to-best/1 versus DE/cur-to-best/1.
-
Sign Tests on the experimental results are used to demonstrate the advantage of mDEOB algorithms.
-
Convergence figures on the first 14 benchmark functions, i.e., all functions except the 11 hybrid composition functions, are given to show the differences in convergence speed among the algorithms.
Designing experiments
The numerical experiments are conducted on the 25 test instances proposed in the CEC2005 special session on real-parameter optimization (Suganthan et al. 2005). The benchmark function set can be divided into four classes:
-
5 unimodal functions f1–f5;
-
7 basic multimodal functions f6–f12;
-
2 expanded multimodal functions f13–f14;
-
11 hybrid composition functions f15–f25.
The number of decision variables, D, is set to be 10 for all the 25 benchmark functions. The population size, N, is set to be 60 for all the algorithms. The mutation factor, F, is set to be 0.5 while the crossover probability, CR, is set to be 0.9. For each algorithm and each test function, 25 independent runs are conducted with 150000 function evaluations (FEs) as the termination criterion.
Statistical analysis of experimental results
Following Suganthan et al. (2005), Table 1 reports seven statistics of the 25 independent runs on each function by DE/best/1 and mDEOB/best/1: the minimal error of the 25 runs, the 7th, 13th and 19th errors, the maximal error, the average error (mean) and the standard deviation (std.), in turn. The "compare" row gives a simple comparison, whose priority is the best solution, then the mean and then the standard deviation. Table 2 reports the same statistics for DE/cur-to-best/1 and mDEOB/cur-to-best/1. From Table 1, we can see that mDEOB/best/1 outperforms DE/best/1 on the 5 unimodal functions and 9 multimodal functions. In particular, as shown in Table 2, mDEOB/cur-to-best/1 outperforms DE/cur-to-best/1 on all unimodal functions and on all hybrid composition functions except f17, and it is superior to DE/cur-to-best/1 on 5 (f6, f7, f10, f11 and f12) of the other 9 multimodal functions.
The Sign Test (Derrac et al. 2011) is a popular statistical method for comparing the performance of algorithms. As the average error (mean) and the best value are two of the most important performance indicators, this paper applies the Sign Test to the average errors and the best values shown in Tables 1 and 2. Here the null hypothesis is that the performances of the two algorithms do not differ significantly, while the alternative hypothesis is that they differ clearly. As shown in Table 3, for the results in Table 1, the probability of supporting the null hypothesis of the Sign Test on the average errors equals 1.000, but the probability on the best values is 0.021, which is less than the significance level 0.05. That is to say, we cannot reject the null hypothesis according to the average errors, but we can reject it according to the best values. This indicates that (1) judging by the average errors, the performances of the two algorithms do not differ significantly, but (2) judging by the best values, they clearly do. Hence the overall performance of the mDEOB/best/1 algorithm differs markedly from that of DE/best/1; combining this with the "compare" rows in Table 1, we conclude that the overall performance of mDEOB/best/1 is better than that of DE/best/1. For the results in Table 2, the probabilities of supporting the null hypothesis on the average errors and on the best values are 0.004 and 0.167, respectively. In a similar way, we conclude that mDEOB/cur-to-best/1 outperforms DE/cur-to-best/1.
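As a generic illustration of how such a Sign Test p-value can be computed (an exact two-sided binomial test on wins and losses, with ties excluded; this is standard statistics, not the authors' script, and the example counts are hypothetical):

```python
from math import comb

def sign_test_p(wins_a, wins_b):
    """Exact two-sided sign test: probability, under the null hypothesis that each
    algorithm wins a comparison with probability 0.5, of a win/loss split at least
    as extreme as the one observed (ties are excluded beforehand)."""
    n = wins_a + wins_b
    k = min(wins_a, wins_b)
    p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(p, 1.0)

# Hypothetical example: one algorithm better on 18 of 25 functions, the other on 7
print(sign_test_p(18, 7))   # about 0.043, below the 0.05 significance level
```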
Figure 3 shows the convergence curves of the average error of the best function values over the 25 runs for all four algorithms on all unimodal and basic multimodal functions (f1–f12). The overall superiority of the mDEOB algorithms in convergence speed is readily apparent.
In summary, the onlooker bee strategy has a positive effect on the performance of DE/best/1 and DE/cur-to-best/1.
Experiment 2: mixed discrete-continuous optimization
In order to further illustrate the capabilities of the proposed algorithms, two mixed discrete-continuous optimization problems (i.e., the design of a coil spring and the design of a speed reducer) are optimized here.
Design of a coil spring
The design of a coil spring (Sandgren 1990; Shen et al. 2009) is a nonlinear engineering design optimization problem whose objective is to minimize the volume of spring steel wire used to manufacture the spring (minimum weight). As shown in Fig. 4, the spring is a helical compression spring. The design parameters are the number of spring coils, \(x_1\), the outside diameter of the spring, \(x_2\), and the spring wire diameter, \(x_3\). Let \(f(x_1,x_2,x_3)\) denote the objective function. The problem is formulated as follows:
The formulation includes nine constants: \(F_{\text {max}}=1000.0\), \(S=189{,}000.0\), \(l_{\text {max}}=14.0\), \(d_{\text {min}}=0.2\), \(D_{\text {max}}=3.0\), \(F_{\text {p}}=300.0\), \(\sigma _{\text {pm}}=6.0\), \(\sigma _{\text {w}}=1.25\), \(G=11.5\times 10^6\). \(x_1\) is an integer variable, \(x_2\) is a continuous variable and \(x_3\) may take only discrete values according to the available standard spring steel wire diameters. A detailed explanation of the coil spring design can be found in Sandgren (1990) and Lampinen and Zelinka (1999).
Following Lampinen and Zelinka (1999), the two proposed mDEOB algorithms employ the soft-constraint (penalty) approach to handle the constraint functions and the INT() function to handle the integer variable. The algorithmic parameters are set as follows: \(D=3\), \(N=40\), \(F=0.9\) and \(CR=0.8\). 100 independent runs are conducted with 2650 iterations as the termination criterion.
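A minimal sketch of this constraint and integer handling is given below: penalty-based soft constraints plus truncation (INT()-style) of the integer component before evaluation. The function names and the penalty weight are placeholders, not values from the paper.

```python
import numpy as np

def penalized_objective(x, objective, constraints, weight=1e6):
    """Soft-constraint handling: add a penalty proportional to the total constraint
    violation.  The weight is a placeholder value, not a parameter from the paper."""
    violation = sum(max(0.0, g(x)) for g in constraints)   # constraints written as g(x) <= 0
    return objective(x) + weight * violation

def evaluate_mixed(x, objective, constraints, integer_indices, weight=1e6):
    """Truncate the designated components (INT()-style rounding towards zero) before
    evaluation, so the evolutionary search itself stays continuous."""
    x = np.asarray(x, dtype=float).copy()
    x[integer_indices] = np.trunc(x[integer_indices])
    return penalized_objective(x, objective, constraints, weight)
```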
Table 4 reports the solutions of the coil spring problem and compares them with the results obtained by other researchers. From the table, we can draw the following conclusions:
-
Firstly, the proposed algorithms, mDEOB/best/1 and mDEOB/cur_to_best/1, can find the minimal objective value reported in the literature. The best result obtained by other researchers in Table 4 is 2.65856. mDEOB/best/1 and mDEOB/cur_to_best/1 find the optimal solution with average CPU times of 0.02578 s and 0.02322 s, respectively, over 100 independent runs.
-
Secondly, mDEOB/cur_to_best/1 finds another optimal solution, which provides an alternative design strategy for the coil spring. From Table 4, the optimal solution \((x_1, x_2, x_3)\) reported by Lampinen and Zelinka is (9, 1.223041, 0.238). The DE/best, mDEOB/best/1 and DE/cur_to_best/1 algorithms find the same optimal solution, while mDEOB/cur_to_best/1 finds another optimal solution, (10, 1.18104, 0.283).
-
Thirdly, the percentage of runs that successfully achieve the optimal value demonstrates that the modified strategy of mDEOB is effective. The last row of Table 4 reports the percentage of the 100 runs that reach the optimal value. The percentages of DE/best and DE/cur_to_best/1 are 69.0 and 90.0 %, respectively, while those of the proposed mDEOB/best/1 and mDEOB/cur_to_best/1 are 88.0 and 95.0 %. This indicates that the robustness of the mDEOB/best/1 and mDEOB/cur_to_best/1 algorithms is better than that of the corresponding DE algorithms; since the only difference is the onlooker bee operator, the improvement in robustness can be attributed to it.
Design of a speed reducer
The speed reducer design problem is a mixed programming problem containing one integer variable (the third variable, \(x_3\)) and six continuous variables \(x_i,\ i=1,2,\ldots ,7\ (i\ne 3)\). The physical meaning of these variables can be found in Sadollah et al. (2013). There are eleven constraints, which make the problem highly complex. Let \({\mathbf {x}}\) denote the vector \((x_1,x_2,x_3,x_4,x_5,x_6,x_7)\). The problem is formulated as follows:
Sadollah et al. (2013) reported the best results in the current literature, obtained by six optimization methods: DELC, DEDS, PSO-DE, MDE, HEAA and MBA. We compare the results of the proposed mDEOB/cur_to_best algorithm with these best results; the comparison is reported in Table 5. From the table, we can see that the proposed mDEOB/cur_to_best algorithm found a solution with objective value 2994.468551, which is better than the others. This means that the solution found by mDEOB/cur_to_best is the best reported in the current literature.
As in the coil spring problem, the proposed mDEOB/cur_to_best algorithm handles the constraints with the soft-constraint (penalty) approach and the integer variable \(x_3\) with the INT() function of Lampinen and Zelinka (1999). The algorithmic parameters are set as follows: \(D=7\), \(N=50\), \(F=0.9\) and \(CR=0.8\). 100 independent runs are conducted with 2500 iterations as the termination criterion.
In summary, the improved algorithms, especially the mDEOB/cur_to_best algorithm, are strongly competitive on this kind of complex constrained optimization problem.
All the above algorithms were implemented in Visual C++ and the experiments were conducted on a computer with an Intel(R) Xeon(R) CPU E3-1230 v3 @ 3.30 GHz and 8 GB RAM.
Conclusion
Two new algorithms, mDEOB/best/1 and mDEOB/cur-to-best/1, are proposed to address the imbalance between the exploration and exploitation capabilities of the DE/best/1 and DE/cur-to-best/1 algorithms. Inspired by the ABC algorithm, they offer improved exploration by employing a random search guided by onlooker bees. Numerical experiments were conducted to test the two proposed algorithms on the CEC2005 benchmark functions and two engineering optimization problems. The results on the CEC2005 benchmark functions indicated the effectiveness of the improved strategy, and comparison with other algorithms on the engineering optimization problems showed the competitiveness of the proposed algorithms. In particular, the mDEOB/cur-to-best/1 algorithm found new optima in both problems.
References
Álvaro R et al (2012) MO-ABC/DE: multiobjective artificial bee colony with differential evolution for unconstrained multiobjective optimization. In: Proceedings of the IEEE international symposium on computational intelligence and informatics (CINTI 2012), pp 157–162
Chen JL, Tsao YC (1993) Optimal design of machine elements using genetic algorithms. J Chin Soc Mech Eng 14(2):193–199
Civicioglu P, Besdok E (2013) A conceptual comparison of the Cuckoo-search, particle swarm optimization, differential evolution and artificial bee colony algorithms. Artif Intell Rev 39:315–346
Das S, Suganthan P (2011) Differential evolution: a survey of the state-of-the-art. IEEE Trans Evol Comput 1:4–31
Derrac J, García S, Molina D, Herrera F (2011) Practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol Comput 1:3–8
Elsayed SM, Sarker RA, Essam DL (2011) Differential evolution with multiple strategies for solving CEC2011 real-world numerical optimization problems. In: IEEE congress on evolutionary computation (CEC2011), pp 1041–1048
Gao W, Liu S, Huang L (2012) A global best artificial bee colony algorithm for global optimization. J Comput Appl Math 236(11):2741–2753
Gao F et al (2013) Parameter identification for Van Der Pol–Duffing oscillator by a novel artificial bee colony algorithm with differential evolution operators. Appl Math Comput 222:132–144
Gao W, Liu S (2011) Improved artificial bee colony algorithm for global optimization. Inf Process Lett 111:871–882
Hu Z, Xiong S, Su Q, Fang Z (2014) Finite Markov chain analysis of classical differential evolution algorithm. J Comput Appl Math 268:121–134
Hu Z, Su Q, Yang X et al (2016) Not guaranteeing convergence of differential evolution on a class of multimodal functions. Appl Soft Comput 41:479–487
Hu Z, Su Q, Xiong S, Hu F (2008) Self-adaptive hybrid differential evolution with simulated annealing algorithm for numerical optimization. In: IEEE world congress computational intelligence, pp 1189–1194
Hu Z, Xiong S, Su Q, Zhang X (2013) Sufficient conditions for global convergence of differential evolution algorithm. J Appl Math 2013:193196. doi:10.1155/2013/193196
Karaboga D (2005) An idea based on honey bee swarm for numerical optimization. Erciyes University, Technical Report-TR06. Kayseri, Turkey
Karaboga D, Basturk B (2007) A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm. J Glob Optim 39:459–471
Karaboga D, Basturk B (2008) On the performance of artificial bee colony (ABC) algorithm. Appl Soft Comput 8:687–697
Karaboga D, Basturk B (2009) A comparative study of artificial bee colony algorithm. Appl Math Comput 214:108–132
Lampinen J, Zelinka I (1999) Mixed integer-discrete-continuous optimization by differential evolution. In: Proceedings of 5th international Mendel conference on soft computing, (MENDEL1999), pp 77–81
LaTorre A, Muelas S, Peña JM (2011) Benchmarking a hybrid DE-RHC algorithm on real world problems. In: IEEE congress on evolutionary computation (CEC2011), pp 1027–1033
Li Y, Wang Y, Li B (2013) A hybrid artificial bee colony assisted differential evolution algorithm for optimal reactive power flow. Electr Power Energy Syst 52:25–33
Li X, Yin M (2012) Hybrid differential evolution with artificial bee colony and its application for design of a reconfigurable antenna array with discrete phase shifters. IET Microw Antennas Propag 6(14):1573–1582
Montes EM, MirandaVarela ME, del Carmen GmezRamm R (2010) Differential evolution in constrained numerical optimization: an empirical study. Inf Sci 180:4223–4262
Sadollah A, Bahreininejad A, Eskandar H et al (2013) Mine blast algorithm: a new population based algorithm for solving engineering optimization problems. Appl Soft Comput 13:2592–2612
Sandgren E (1990) Nonlinear integer and discrete programming in mechanical design optimization. J Mech Des Trans ASME 112(2):223–229
Shen H, Zhu Y, Niu B et al (2009) An improved group search optimizer for mechanical design optimization problems. Prog Nat Sci 19(1):91–97
Storn R, Price K (1995) Differential evolution: a simple and efficient adaptive scheme for global optimization over continuous spaces. Technical report, International Computer Science Institute
Suganthan PN, Hansen N, Liang JJ, Deb K, Chen YP, Auger A, Tiwari S (2005) Problem definition and evaluation criteria for the CEC 2005 special session on real-parameter optimization. Nanyang Technology University Singapore, Technical Report, May 2005
Su Q, Hu Z (2013) Color image quantization algorithm based on self-adaptive differential evolution. Comput Intell Neurosci 2013:231916. doi:10.1155/2013/231916
Vesterstrom J, Thomsen R (2004) A comparative study of differential evolution, particle swarm optimization and evolutionary algorithm on numerical benchmark problems. Congr Evol Comput 2:1980–1987
Wang Y, Cai Z, Zhang Q (2011) Differential evolution with composite trial vector generation strategies and control parameters. IEEE Trans Evol Comput 1:55–66
Wu S, Chow P (1995) Genetic algorithms for nonlinear mixed discrete-integer optimization problems via meta-genetic parameter optimization. Eng Optim 24(2):137–159
Yang J, Li W, Shi X, Xin L, Yu J (2013) A hybrid ABC-DE algorithm and its application for time-modulated arrays pattern synthesis. IEEE Trans Antennas Propag 11:5485–5495
Authors' contributions
All authors conceived the study, participated in its design and coordination. In addition, YM did much work for the later revision of this paper. QS drafted the manuscript. ZH performed the majority of the experimental work. XX carried out the manuscript editing. All authors read and approved the final manuscript.
Acknowledgements
This work was supported in part by the Key Project of Hubei Provincial Department of Education (D20161306).
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.