- Research
- Open Access
A teaching learning based optimization based on orthogonal design for solving global optimization problems
- Suresh Chandra Satapathy^{1}Email author,
- Anima Naik^{2} and
- K Parvathi^{3}
https://doi.org/10.1186/2193-1801-2-130
© Satapathy et al.; licensee Springer. 2013
- Received: 28 November 2012
- Accepted: 14 March 2013
- Published: 23 March 2013
Abstract
In searching for optimal solutions, the teaching learning based optimization (TLBO) algorithm (Rao et al. 2011a; Rao et al. 2012; Rao & Savsani 2012a) has been shown to be powerful. This paper presents an improved version of the TLBO algorithm based on orthogonal design, which we call OTLBO (Orthogonal Teaching Learning Based Optimization). OTLBO makes TLBO faster and more robust: it uses orthogonal design to generate an optimal offspring by a statistical optimal method, and a new selection strategy is applied to decrease the number of generations and make the algorithm converge faster. We evaluate OTLBO on benchmark function optimization problems with large numbers of local minima. Simulations indicate that OTLBO is able to find near-optimal solutions in all cases. Compared to other state-of-the-art evolutionary algorithms, OTLBO performs significantly better in terms of the quality, speed, and stability of the final solutions.
Keywords
- TLBO
- Global function Optimization
- Orthogonal design
- Convergence speed
Introduction
The Teaching-Learning based Optimization (TLBO) algorithm is a global optimization method originally developed by Rao et al. (Rao et al. 2011a; Rao et al. 2012; Rao & Savsani 2012a). It is a population-based iterative learning algorithm that shares some common characteristics with other evolutionary computation (EC) algorithms (Fogel 1995). However, rather than having learners undergo genetic operations such as selection, crossover, and mutation (Shi & Eberhart 1998), TLBO searches for an optimum through each learner trying to acquire the experience of the teacher, who is treated as the most learned person in the society, thereby obtaining the optimum results. Due to its simple concept and high efficiency, TLBO has become a very attractive optimization technique and has been successfully applied to many real-world problems (Rao et al. 2011a; Rao et al. 2012; Rao & Savsani 2012a), (Rao et al. 2011b; Rao & Patel 2012; Rao & Savsani 2012b; Vedat 2012; Rao & Kalyankar 2012; Suresh Chandra & Anima 2011).
In any evolutionary algorithm, the convergence rate is often given prime importance in solving an optimization problem, even over the quality of solutions. TLBO generally produces improved results compared to other EC techniques such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Differential Evolution (DE), and the Artificial Bee Colony (ABC) (Suresh Chandra et al. 2012). However, in real-world, real-time applications, the major thrust is always on convergence time. To make TLBO suitable for such applications, this work focuses on improving the convergence time without compromising the quality of results.
In our proposed work, an attempt is made to include the orthogonal design method in basic TLBO optimization. In our approach, each learner in the class of learners can be divided into several partial vectors, each of which acts as a factor in the orthogonal design. Orthogonal design is then employed to search for the best scales among all the various combinations. The orthogonal design method (Fang & Ma 2001), combining an orthogonal array (OA) with factor analysis (such as the statistical optimal method), is used to sample a small but representative set of combinations for experimentation, in order to obtain good combinations. An OA is a fractional factorial array of numbers arranged in rows and columns, where each row represents the levels of the factors in one combination, and each column represents a specific factor that can be changed from one combination to the next. It ensures a balanced comparison of the levels of any factor. The term "main effect" designates the effect on the response variables that one can trace to a design parameter. The array is called orthogonal because all columns can be evaluated independently of one another, and the main effect of one factor does not interfere with the estimation of the main effect of another factor. Recently, some researchers have applied the orthogonal design method within EAs to solve optimization problems. Leung and Wang (Leung & Wang 2001) incorporated orthogonal design into a genetic algorithm for numerical optimization problems and found that such a method was more robust and statistically sound. This method was also adopted by other researchers (Ding et al. 1997; Kui-fan et al. 2002; San-You et al. 2005; Wang et al. 2007; Wang et al. 2012) to solve optimization problems. Numerical results demonstrated that these techniques had significantly better performance than traditional EAs on the problems studied, and that the resulting algorithms can be more robust and statistically sound.
In this paper, the orthogonal design is implemented on TLBO (hence called OTLBO) to make it faster and more robust. It is shown empirically that OTLBO has high performance in solving benchmark functions comprising many parameters, as compared with some existing EAs.
The rest of this paper is organized as follows. “Teaching–learning-based optimization” briefly describes TLBO as a function optimization technique and “Orthogonal design” presents some properties of the orthogonal design method. “Proposed orthogonal teaching–learning-based optimizer (OTLBO)” presents the proposed OTLBO. In “Experimental results”, we test our algorithm through some benchmark functions which is followed by discussions and analysis of the optimization experiments for the OTLBO. The last section, “Conclusions and further study”, is devoted to conclusions and future studies.
Teaching–learning-based optimization
This optimization method is based on the effect of the influence of a teacher on the output of learners in a class. It is a population-based method and, like other population-based methods, it uses a population of solutions to proceed toward the global solution. A group of learners constitutes the population in TLBO. In any optimization algorithm there are a number of design variables. The design variables in TLBO are analogous to the different subjects offered to learners, and the learners' results are analogous to the 'fitness', as in other population-based optimization techniques. As the teacher is considered the most learned person in the society, the best solution found so far is analogous to the teacher in TLBO. The process of TLBO is divided into two parts: the first part is the "Teacher phase" and the second part is the "Learner phase". The "Teacher phase" means learning from the teacher, and the "Learner phase" means learning through the interaction between learners. In the sub-sections below we briefly discuss the implementation of TLBO.
Initialization
The following notations are used to describe TLBO:
N: number of learners in class i.e. “class size”
D: number of courses offered to the learners
MAXIT: maximum number of allowable iterations
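Under these notations, the class is initialized before the teacher phase as an N×D matrix of learners sampled uniformly within the search bounds. A minimal sketch (the function and variable names are our own):

```python
import random

def initialize_class(N, D, lower, upper):
    """Create N learners, each a D-dimensional point drawn
    uniformly from [lower, upper] in every dimension."""
    return [[lower + random.random() * (upper - lower) for _ in range(D)]
            for _ in range(N)]
```

In the experiments reported later, N = 20, and for instance D = 30 with bounds [−100, 100] for the Sphere function.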
Teacher phase
Here T_{ F } is not a parameter of the TLBO algorithm. The value of T_{ F } is not given as an input to the algorithm; it is decided randomly by the algorithm using Eq. (5). After conducting a number of experiments on many benchmark functions, it was concluded that the algorithm performs better if the value of T_{ F } is between 1 and 2. However, the algorithm is found to perform much better if the value of T_{ F } is either exactly 1 or 2; hence, to simplify the algorithm, the teaching factor is suggested to take either 1 or 2 depending on the rounding criterion given by Eq. (5).
If $\mathit{Xne}{w}_{\left(i\right)}^{g}$ is found to be a superior learner to ${X}_{\left(i\right)}^{g}$ in generation g, then it replaces the inferior learner ${X}_{\left(i\right)}^{g}$ in the matrix.
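The teacher-phase update behind Eq. (5) follows the standard TLBO form: each learner moves toward the teacher by the difference between the teacher and the class mean, scaled by the teaching factor T_F, which is randomly rounded to 1 or 2. A hedged sketch, assuming minimization (names are ours):

```python
import random

def teacher_phase(pop, fitness):
    """One teacher-phase pass over the class (minimization assumed).

    pop is a list of D-dimensional learners; fitness evaluates one learner.
    """
    N, D = len(pop), len(pop[0])
    teacher = min(pop, key=fitness)                      # best learner so far
    mean = [sum(x[d] for x in pop) / N for d in range(D)]
    for i, x in enumerate(pop):
        T_F = round(1 + random.random())                 # teaching factor: 1 or 2
        r = random.random()
        new = [x[d] + r * (teacher[d] - T_F * mean[d]) for d in range(D)]
        if fitness(new) < fitness(x):                    # keep only improvements
            pop[i] = new
    return pop
```

The greedy replacement at the end implements the rule above: a new learner enters the matrix only if it is superior to the one it replaces.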
Learner phase
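In the learner phase, each learner interacts with a randomly chosen peer and moves toward the better of the two. A hedged sketch of this standard TLBO update, assuming minimization (names are ours):

```python
import random

def learner_phase(pop, fitness):
    """One learner-phase pass: each learner learns from a random peer."""
    N, D = len(pop), len(pop[0])
    for i in range(N):
        j = random.choice([k for k in range(N) if k != i])
        xi, xj = pop[i], pop[j]
        r = random.random()
        if fitness(xi) < fitness(xj):        # i is better: move away from j
            new = [xi[d] + r * (xi[d] - xj[d]) for d in range(D)]
        else:                                # j is better: move toward j
            new = [xi[d] + r * (xj[d] - xi[d]) for d in range(D)]
        if fitness(new) < fitness(xi):       # keep only improvements
            pop[i] = new
    return pop
```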
Algorithm termination
The algorithm is terminated after MAXIT iterations are completed.
Details of TLBO can be found in (Rao et al. 2011a; Rao et al. 2012; Rao & Savsani 2012a).
Orthogonal design
Consider an experiment that involves some factors, each of which has several possible values called levels. Suppose that there are P factors and that each factor has Q levels. The number of combinations is Q^{ P }, and for large P and Q it is not practical to evaluate all of them.
- 1)
During the experiment, the array represents a subset of M combinations, from all possible Q^{ P } combinations. Computation is reduced considerably because M << Q^{ P }.
- 2)
Each column represents a factor. If some columns are deleted from the array, it means a smaller number of factors are considered.
- 3)
The columns of the array are orthogonal to each other. The selected subset is scattered uniformly over the search space to ensure its diversity.
A simple but efficient method is proposed in (Wing-Leung & Yuping 2001) to generate an orthogonal array L_{ M }(Q^{ P }) where M = Q×Q and P = Q + 1. The steps of this method are shown in Algorithm 1.
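For prime Q, an L_{ M }(Q^{ P }) array with M = Q² and P = Q + 1 admits a compact construction: two basic columns enumerate the rows, and each remaining column is a linear combination of them modulo Q. The sketch below is our reading of such a construction, not necessarily Algorithm 1 verbatim (levels are numbered from 0 here):

```python
from itertools import combinations

def orthogonal_array(Q):
    """Build L_M(Q^P) with M = Q*Q rows and P = Q + 1 columns.

    Valid when Q is prime: rows are indexed by pairs (k, l), and the
    j-th extra column holds (k + j*l) mod Q. Because Q is prime, any
    two columns contain every ordered pair of levels exactly once.
    """
    rows = []
    for k in range(Q):
        for l in range(Q):
            rows.append([k, l] + [(k + j * l) % Q for j in range(1, Q)])
    return rows

# Verify orthogonality for Q = 3: every pair of columns is balanced.
A = orthogonal_array(3)
for c1, c2 in combinations(range(4), 2):
    assert len({(r[c1], r[c2]) for r in A}) == 9
```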
Proposed orthogonal teaching–learning-based optimizer (OTLBO)
We propose a teaching learning based optimization approach based on orthogonal design (OD). In our approach, termed OTLBO, each learner in the class of learners can be divided into several partial vectors, each of which acts as a factor in the orthogonal design. Orthogonal design is then employed to search for the best scales among all the various combinations. Compared to previous OD-based methods (Wing-Leung & Yuping 2001; Wenyin et al. 2006; Shinn-Ying et al. 2008; Sanyou et al. 2007; Kwon-Hee et al. 2003), our proposed algorithm is distinguished by the OD-based operator and updating strategy described below.
OD-based operator and updating strategy
where S is the population size.
The standard TLBO algorithm updates the current learner by comparing it with the best learner (i.e., the teacher) in the teacher phase, and with a randomly selected learner in the learner phase. It lacks interaction between neighboring learners and may easily become trapped in local minima. One technique to address this problem is to employ multi-parent crossover during evolution; this technique has been shown to improve the convergence rate when applied to GAs.
Given m learners, the question is how to execute the multi-parent crossover efficiently. Since each learner consists of N factors, there are m^{ N } combinations. Consequently, the orthogonal design method is employed to select a representative set of combinations (using an L_{ M }(Q^{ P }) orthogonal array with P = N and Q = m) to shorten the computational time. The procedure of the OD-based multi-parent operator is detailed in Algorithm 2.
Compute the fitness for all р_{i, j}.
Mix р_{i, j} and X_{i, j} and rank learners in the decreasing order of fitness
Select the top m learners as the output.
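Putting the steps of Algorithm 2 together: the m selected learners supply the levels of each factor, an orthogonal array picks M = m² representative combinations р_{i, j}, and the best m of the pooled set survive. A hedged sketch, assuming m is prime so the simple array construction applies, and assuming minimization (all names are ours):

```python
def od_multiparent(parents, fitness):
    """OD-based multi-parent operator (sketch).

    parents: list of m learners (m prime here), each with N factors.
    For each factor (dimension), the OA entry selects which parent
    contributes its value; columns are reused cyclically if N > m + 1.
    """
    m, N = len(parents), len(parents[0])
    # L_{m*m}(m^{m+1}) for prime m: columns (k, l, k+l, k+2l, ...) mod m.
    oa = [[k, l] + [(k + j * l) % m for j in range(1, m)]
          for k in range(m) for l in range(m)]
    candidates = [[parents[row[d % (m + 1)]][d] for d in range(N)]
                  for row in oa]
    pool = parents + candidates          # mix parents and offspring
    pool.sort(key=fitness)               # rank: best (lowest) first
    return pool[:m]                      # keep the top m learners
```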
Remark 1: The OD-based operator behaves as the local search among the selected learners.
Steps of OD-based TLBO
To obtain a more precise solution compared to the standard TLBO, the OD-based operator is employed. The elitism preservation strategy for upgrading the current population is proposed, in which the learner is updated only if its fitness is improved. The procedure for the OD-based TLBO is shown in Algorithm 3. A convergence criterion or the maximum run can be used as the termination condition.
Take the learner g as the output.
Remark 2: The convergence of OTLBO is guaranteed by the elitism preservation strategy: a learner moves only if the movement lowers its objective value.
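The elitism preservation strategy of Remark 2 reduces to a greedy replacement rule (a minimal sketch, assuming minimization; names are ours):

```python
def elitist_update(learner, candidate, fitness):
    """Replace a learner only when the candidate strictly improves its
    fitness, so the best objective value in the class never worsens."""
    return candidate if fitness(candidate) < fitness(learner) else learner
```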
Experimental results
Benchmark functions (D: dimension, C: characteristic; U: unimodal, M: multimodal, S: separable, N: non-separable)
No. | Function | D | C | Range | Formulation | Value |
---|---|---|---|---|---|---|
f _{1} | Step | 30 | US | [−100,100] | $f\left(x\right)={\displaystyle \sum}_{i=1}^{D}{\left({x}_{i}+0.5\right)}^{2}$ | f_{ min }=0 |
f _{2} | Sphere | 30 | US | [−100,100] | $f\left(x\right)={\displaystyle \sum}_{i=1}^{D}{x}_{i}^{2}$ | f_{ min }=0 |
f _{3} | SumSquares | 30 | US | [−100,100] | $f\left(x\right)={\displaystyle \sum}_{i=1}^{D}i{x}_{i}^{2}$ | f_{ min }=0 |
f _{4} | Quartic | 30 | US | [−1.28,1.28] | $f\left(x\right)={\displaystyle \sum}_{i=1}^{D}i{x}_{i}^{4}+\mathit{random}\left(0,1\right)$ | f_{ min }=0 |
f _{5} | Zakharov | 10 | UN | [−5,10] | $f\left(x\right)={\displaystyle \sum}_{i=1}^{D}{x}_{i}^{2}+{\left({\displaystyle \sum}_{i=1}^{D}0.5i{x}_{i}\right)}^{2}+{\left({\displaystyle \sum}_{i=1}^{D}0.5i{x}_{i}\right)}^{4}$ | f_{ min }=0 |
f _{6} | Schwefel 1.2 | 30 | UN | [−100,100] | $f\left(x\right)={\displaystyle \sum}_{i=1}^{D}{\left({\displaystyle \sum}_{j=1}^{i}{x}_{j}\right)}^{2}$ | f_{ min }=0 |
f _{7} | Schwefel 2.22 | 30 | UN | [−10,10] | $f\left(x\right)={\displaystyle \sum}_{i=1}^{D}\left|{x}_{i}\right|+{\displaystyle \prod}_{i=1}^{D}\left|{x}_{i}\right|$ | f_{ min }=0 |
f _{8} | Schwefel 2.21 | 30 | UN | [−100,100] | $f\left(x\right)=\underset{1\le i\le D}{\mathit{max}}\left|{x}_{i}\right|$ | f_{ min }=0 |
f _{9} | Bohachevsky1 | 2 | MS | [−100,100] | $\begin{array}{c}f\left(x\right)={x}_{1}^{2}+2{x}_{2}^{2}-0.3cos\left(3\pi {x}_{1}\right)-0.4\mathrm{cos}\left(4\pi {x}_{2}\right)+0.7\end{array}$ | f_{ min }=0 |
f _{10} | Bohachevsky2 | 2 | MS | [−100,100] | $\begin{array}{c}f\left(x\right)={x}_{1}^{2}+2{x}_{2}^{2}-0.3cos\left(3\pi {x}_{1}\right)*cos\left(4\pi {x}_{2}\right)+0.3\end{array}$ | f_{ min }=0 |
f _{11} | Bohachevsky3 | 2 | MS | [−100,100] | $\begin{array}{c}f\left(x\right)={x}_{1}^{2}+2{x}_{2}^{2}-0.3cos(\left(3\pi {x}_{1}\right)+\left(4\pi {x}_{2}\right))+0.3\end{array}$ | f_{ min }=0 |
f _{12} | Booth | 2 | MS | [−10,10] | f(x) = (x_{1} + 2x_{2} − 7)^{2} + (2x_{1} + x_{2} − 5)^{2} | f_{ min }=0 |
f _{13} | Rastrigin | 30 | MS | [−5.12,5.12] | $f\left(x\right)={\displaystyle \sum _{i=1}^{D}\left[{x}_{i}^{2}-10cos\left(2\pi {x}_{i}\right)+10\right]}$ | f_{ min }=0 |
f _{14} | Schaffer | 2 | MN | [−100,100] | $f\left(x\right)=\frac{\mathit{si}{n}^{2}\left(\sqrt{{x}_{1}^{2}+{x}_{2}^{2}}\right)-0.5}{{\left(1+0.001\left({x}_{1}^{2}+{x}_{2}^{2}\right)\right)}^{2}}$ | f_{ min }=0 |
f _{15} | Six hump camel back | 2 | MN | [−5,5] | $f\left(x\right)=4{x}_{1}^{2}-2.1{x}_{1}^{4}+\frac{1}{3}{x}_{1}^{6}+{x}_{1}{x}_{2}-4{x}_{2}^{2}+4{x}_{2}^{4}$ | f_{ min } = − 1.03163 |
f _{16} | Griewank | 30 | MN | [−600,600] | $f\left(x\right)=\frac{1}{4000}{\displaystyle \sum}_{i=1}^{D}{x}_{i}^{2}-{\displaystyle \prod}_{i=1}^{D}\mathrm{cos}\left(\frac{{x}_{i}}{\sqrt{i}}\right)+1$ | f_{ min }=0 |
f _{17} | Ackley | 30 | MN | [−32,32] | $f\left(x\right)=-20\mathit{exp}\left(-0.2\sqrt{\frac{1}{D}{\displaystyle \sum}_{i=1}^{D}{x}_{i}^{2}}\right)-\mathit{exp}\left(\frac{1}{D}{\displaystyle \sum}_{i=1}^{D}\mathrm{cos}\left(2\pi {x}_{i}\right)\right)+20+e$ | f_{ min }=0 |
f _{18} | Multimod | 30 | MS | [−10,10] | $f\left(x\right)={\displaystyle \sum}_{i=1}^{D}\left|{x}_{i}\right|{\displaystyle \prod}_{i=1}^{D}\left|{x}_{i}\right|$ | f_{ min }=0 |
f _{19} | Noncontinuous rastrigin | 30 | MS | [−5.12,5.12] | $\begin{array}{l}f\left(x\right)={\displaystyle \sum}_{i=1}^{D}\left[{y}_{i}^{2}-10cos\left(2\pi {y}_{i}\right)+10\right]\phantom{\rule{0.25em}{0ex}}\mathrm{Where}\phantom{\rule{0.25em}{0ex}}{y}_{i}=\left\{\begin{array}{c}\hfill {x}_{i}\phantom{\rule{0.25em}{0ex}}\left|{x}_{i}\right|<0.5\hfill \\ \hfill \frac{\mathit{round}\left(2{x}_{i}\right)}{2}\phantom{\rule{0.25em}{0ex}}\left|{x}_{i}\right|\ge 0.5\hfill \end{array}\phantom{\rule{0.25em}{0ex}}\right.\end{array}$ | f_{ min }=0 |
f _{20} | Weierstrass | 30 | MS | [−0.5, 0.5] | $f\left(x\right)={\displaystyle \sum}_{i=1}^{D}\left({\displaystyle \sum}_{k=0}^{\mathit{kmax}}\left[{a}^{k}\mathrm{cos}\left(2\pi {b}^{k}\left({x}_{i}+0.5\right)\right)\right]\right)-D{\displaystyle \sum}_{k=0}^{\mathit{kmax}}\left[{a}^{k}\mathrm{cos}\left(\pi {b}^{k}\right)\right],\ \mathrm{where}\ a=0.5,b=3,\mathit{kmax}=20$ | f_{ min }=0 |
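A few of the benchmarks in the table translate directly into code; the sketch below (our own naming) covers Sphere (f2), Rastrigin (f13), Griewank (f16), and Ackley (f17), each of which attains f_min = 0 at the origin:

```python
import math

def sphere(x):                                        # f2
    return sum(v * v for v in x)

def rastrigin(x):                                     # f13
    return sum(v * v - 10 * math.cos(2 * math.pi * v) + 10 for v in x)

def griewank(x):                                      # f16
    s = sum(v * v for v in x) / 4000.0
    p = math.prod(math.cos(v / math.sqrt(i + 1)) for i, v in enumerate(x))
    return s - p + 1

def ackley(x):                                        # f17
    d = len(x)
    term1 = -20 * math.exp(-0.2 * math.sqrt(sum(v * v for v in x) / d))
    term2 = -math.exp(sum(math.cos(2 * math.pi * v) for v in x) / d)
    return term1 + term2 + 20 + math.e
```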
In part 2 of our experiments, we compare our proposed approach with recent variants of PSO as per (Zhan et al. 2009; Ratnaweera et al. 2004). The results of these variants are taken directly from (Zhan et al. 2009; Ratnaweera et al. 2004) and compared with OTLBO. In part 3, performance comparisons are made with recent variants of DE as per (Zhan et al. 2009). Part 4 of our experiments is devoted to the performance comparison of OTLBO with Artificial Bee Colony (ABC) variants as in (Alatas 2010; Zhu & Kwong 2010; Kang et al. 2011; Gao & Liu 2011). Readers may note that in all the comparisons above we simulated OTLBO, CoDE, EPSDE, basic PSO, DE, and TLBO ourselves, but took the results of the other algorithms directly from the referenced papers.
No. of fitness evaluation comparisons of PSO, DE, TLBO, OTLBO
No. | Function | PSO | DE | TLBO | OTLBO | |
---|---|---|---|---|---|---|
f _{1} | Step | Mean | 40,000 | 2.4833e+4 | 712 | 520.1023 |
Std | 0 | 753.6577 | 30.4450 | 21.0012 | ||
f _{2} | Sphere | Mean | 40,000 | 40,000 | 40,000 | 28712 |
Std | 0 | 0 | 0 | 971.7807 | ||
f _{3} | SumSquares | Mean | 40,000 | 40,000 | 40,000 | 30124 |
Std | 0 | 0 | 0 | 150.0015 | ||
f _{4} | Quartic | Mean | 40,000 | 40,000 | 40,000 | 40,000 |
Std | 0 | 0 | 0 | 0 | ||
f _{5} | Zakharov | Mean | 40,000 | 40,000 | 40,000 | 2.9125e+04 |
Std | 0 | 0 | 0 | 150.1291 | ||
f _{6} | Schwefel 1.2 | Mean | 40,000 | 40,000 | 40,000 | 31200 |
Std | 0 | 0 | 0 | 101.1902 | ||
f _{7} | Schwefel 2.22 | Mean | 40,000 | 40,000 | 40,000 | 40,000 |
Std | 0 | 0 | 0 | 0 | ||
f _{8} | Schwefel 2.21 | Mean | 40,000 | 40,000 | 40,000 | 40,000 |
Std | 0 | 0 | 0 | 0 | ||
f _{9} | Bohachevsky1 | Mean | 3200 | 4.1111e+03 | 1940 | 1390 |
Std | 51.6398 | 117.5409 | 79.8308 | 30.2312 | ||
f _{10} | Bohachevsky2 | Mean | 3.1429e+03 | 4.2844e+003 | 2.0836e+03 | 1290 |
Std | 200.5150 | 201.8832 | 140.3219 | 48.9012 | ||
f _{11} | Bohachevsky3 | Mean | 4945 | 7.7822e+03 | 2148 | 1260 |
Std | 168.1727 | 140.2739 | 51.4009 | 39.1290 | ||
f _{12} | Booth | Mean | 6420 | 1.2554e+004 | 3.4277e+03 | 2.4519e+03 |
Std | 18.3935 | 803.3543 | 121.4487 | 150.5112 | ||
f _{13} | Rastrigin | Mean | 40,000 | 40,000 | 4.4533e+03 | 2.0912e+03 |
Std | 0 | 0 | 544.6047 | 77.1529 | ||
f _{14} | Schaffer | Mean | 40,000 | 40,000 | 40,000 | 4.0891e+03 |
Std | 0 | 0 | 0 | 149.9123 | ||
f _{15} | Six hump camel back | Mean | 800 | 1.5556e+03 | 720 | 430.2398 |
Std | 99.2278 | 136.7738 | 33.0289 | 23.0348 | ||
f _{16} | Griewank | Mean | 40,000 | 40,000 | 2916 | 1.7655e+003 |
Std | 0 | 0 | 145.0686 | 62.5381 | ||
f _{17} | Ackley | Mean | 40,000 | 40,000 | 40,000 | 40,000 |
Std | 0 | 0 | 0 | 0 | ||
f _{18} | Multimod | Mean | 40,000 | 40,000 | 3488 | 2239 |
Std | 0 | 0 | 30.2715 | 52.2319 | ||
f _{19} | Noncontinuous rastrigin | Mean | 40,000 | 40,000 | 6.1891e+03 | 2.2512e+03 |
Std | 0 | 0 | 75.6887 | 40.8082 | ||
f _{20} | Weierstrass | Mean | 40,000 | 40,000 | 4.0178e+03 | 2.2231e+03 |
Std | 0 | 0 | 110.5696 | 102.1123 |
Experiment 1: OTLBO vs. PSO, DE and TLBO
Parameter settings
In all experiments in this section, the values of the common parameters used in each algorithm, such as population size and total evaluation number, were chosen to be the same. Population size was 20 and the maximum number of fitness function evaluations was 4.0×10^{4} for all functions. The other specific parameters of the algorithms are given below:
PSO Settings: Cognitive and social components c_{1}, c_{2}, are constants that can be used to change the weighting between personal and population experience, respectively. In our experiments cognitive and social components were both set to 2. Inertia weight, which determines how the previous velocity of the particle influences the velocity in the next iteration, was 0.5.
DE Settings: In DE, F is a real constant that controls the differential variation between two solutions; in our experiments it was set to F = 0.5*(1 + rand(0, 1)), where rand(0, 1) is a uniformly distributed random number within the range [0, 1]. The crossover rate R, which controls the change in the diversity of the population, was chosen as R = (Rmax − Rmin) * (MAXIT − iter) / MAXIT, where Rmax = 1 and Rmin = 0.5 are the maximum and minimum values of R, iter is the current iteration number, and MAXIT is the maximum number of allowable iterations, as recommended in (Swagatam & Ajith 2008).
TLBO Settings: For TLBO there is no such constant to set.
OTLBO Settings: For OTLBO, the number of levels Q = 6 is chosen empirically.
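The DE schedules stated above can be written down directly. A sketch of the two formulas as given (note that, as written, R starts at Rmax − Rmin = 0.5 and decays linearly to 0 over the run):

```python
import random

def de_scale_factor():
    """F = 0.5 * (1 + rand(0, 1)): uniform on [0.5, 1.0]."""
    return 0.5 * (1 + random.random())

def de_crossover_rate(iteration, MAXIT, Rmax=1.0, Rmin=0.5):
    """R = (Rmax - Rmin) * (MAXIT - iteration) / MAXIT."""
    return (Rmax - Rmin) * (MAXIT - iteration) / MAXIT
```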
Performance comparisons of PSO, DE, TLBO, OTLBO
No. | Function | Global min/max | PSO | DE | TLBO | OTLBO | |
---|---|---|---|---|---|---|---|
f _{1} | Step | f_{ min } = 0 | Mean | 203.3667 | 0 | 0 | 0 |
Std | 56.2296 | 0 | 0 | 0 | |||
f _{2} | Sphere | f_{ min } = 0 | Mean | 6.1515e-09 | 7.2140e-14 | 1.0425e-281 | 0 |
Std | 7.6615e-10 | 5.8941e-14 | 0 | 0 | |||
f _{3} | SumSquares | f_{ min } = 0 | Mean | 3.7584e-14 | 6.1535e-15 | 1.5997e-281 | 0 |
Std | 1.0019e-14 | 3.0555e-15 | 0 | 0 | |||
f _{4} | Quartic | f_{ min } = 0 | Mean | 1.9275 | 0.0253 | 2.3477e-04 | 1.6911e-05 |
Std | 1.4029 | 0.0075 | 1.7875e-04 | 1.2211e-05 | |||
f _{5} | Zakharov | f_{ min } = 0 | Mean | 141.0112 | 66.8339 | 1.4515e-281 | 0 |
Std | 40.7567 | 14.4046 | 0 | 0 | |||
f _{6} | Schwefel 1.2 | f_{ min } = 0 | Mean | 9.3619e-08 | 5.3494e-13 | 2.6061e-270 | 0 |
Std | 6.6112e-08 | 4.6007e-13 | 0 | 0 | |||
f _{7} | Schwefel 2.22 | f_{ min } = 0 | Mean | 9.3293 | 3.9546e-07 | 3.1583e-137 | 2.1123e-221 |
Std | 3.6619 | 1.9283e-07 | 1.7188e-137 | 0 | |||
f _{8} | Schwefel 2.21 | f_{ min } = 0 | Mean | 60.9603 | 1.5340 | 4.3819e-136 | 4.0091e-215 |
Std | 4.0761 | 0.3900 | 1.5668e-136 | 0 | |||
f _{9} | Bohachevsky1 | f_{ min } = 0 | Mean | 0 | 0 | 0 | 0 |
Std | 0 | 0 | 0 | 0 | |||
f _{10} | Bohachevsky2 | f_{ min } = 0 | Mean | 0 | 0 | 0 | 0 |
Std | 0 | 0 | 0 | 0 | |||
f _{11} | Bohachevsky3 | f_{ min } = 0 | Mean | 0 | 0 | 0 | 0 |
Std | 0 | 0 | 0 | 0 | |||
f _{12} | Booth | f_{ min } = 0 | Mean | 0 | 0 | 0 | 0 |
Std | 0 | 0 | 0 | 0 | |||
f _{13} | Rastrigin | f_{ min } = 0 | Mean | 76.2918 | 5.6344 | 0 | 0 |
Std | 17.1005 | 1.8667 | 0 | 0 | |||
f _{14} | Schaffer | f_{ min } = 0 | Mean | 0.0097 | 0.0029 | 0.0066 | 0 |
Std | 0.0025 | 0.0011 | 0.0045 | 0 | |||
f _{15} | Six hump camel back | f_{ min } = − 1.03163 | Mean | −1.0316 | −1.0316 | −1.0316 | −1.0316 |
Std | 0 | 0 | 0 | 0 | |||
f _{16} | Griewank | f_{ min } = 0 | Mean | 7.6291e-08 | 5.7841e-011 | 0 | 0 |
Std | 4.0012e-09 | 1.6914e-011 | 0 | 0 | |||
f _{17} | Ackley | f_{ min } = 0 | Mean | 14.0614 | 7.3814e-08 | 1.7171e-15 | 1.1123e-15 |
Std | 2.0125 | 3.0453e-08 | 1.5979e-15 | 1.0021e-15 | |||
f _{18} | Multimod | f_{ min } = 0 | Mean | 2.1994e-257 | 2.5678e-255 | 0 | 0 |
Std | 0 | 0 | 0 | 0 | |||
f _{19} | Noncontinuous rastrigin | f_{ min } = 0 | Mean | 100.3984 | 13.9237 | 0 | 0 |
Std | 28.7062 | 2.3146 | 0 | 0 | |||
f _{20} | Weierstrass | f_{ min } = 0 | Mean | 12.0447 | 1.5388e-05 | 0 | 0 |
Std | 2.6160 | 1.0139e-05 | 0 | 0 |
t-values, significant at the 0.05 level by a two-tailed test, based on Table 3
Function no. | PSO/OTLBO | DE/OTLBO | TLBO/OTLBO |
---|---|---|---|
f _{1} | + | NA | NA |
f _{2} | + | + | + |
f _{3} | + | + | + |
f _{4} | + | + | + |
f _{5} | + | + | + |
f _{6} | + | + | + |
f _{7} | + | + | + |
f _{8} | + | + | + |
f _{9} | NA | NA | NA |
f _{10} | NA | NA | NA |
f _{11} | NA | NA | NA |
f _{12} | NA | NA | NA |
f _{13} | + | + | NA |
f _{14} | + | + | + |
f _{15} | NA | NA | NA |
f _{16} | + | + | NA |
f _{17} | + | + | + |
f _{18} | + | + | NA |
f _{19} | + | + | NA |
f _{20} | + | + | NA |
Average ranking of the optimization algorithms based on performance in Table 3
Algorithm | PSO | DE | TLBO | OTLBO |
---|---|---|---|---|
Ranking | 3.575 | 2.825 | 2.05 | 1.55 |
Experiment 2: OTLBO vs. OEA, HPSO-TVAC, CLPSO and APSO
Performance comparisons of OTLBO, OEA, HPSO-TVAC, CLPSO and APSO
Function | OEA | HPSO-TVAC | CLPSO | APSO | OTLBO | Significant | |
---|---|---|---|---|---|---|---|
Sphere | Mean | 2.48e-30 | 3.38e-41 | 1.89e-19 | 1.45e-150 | 0 | + |
Std | 1.128e-29 | 8.50e-41 | 1.49e-19 | 5.73e-150 | 0 | ||
Schwefel 2.22 | Mean | 2.068e-13 | 6.9e-23 | 1.01e-13 | 5.15e-84 | 2.1123e-221 | + |
Std | 2.440e-12 | 6.89e-23 | 6.54e-14 | 1.44e-83 | 0 | ||
Schwefel 1.2 | Mean | 1.883e-09 | 2.89e-07 | 3.97e+02 | 1.0e-10 | 0 | + |
Std | 3.726e-9 | 2.97e-07 | 1.42e+02 | 2.13e-10 | 0 | ||
Step | Mean | 0 | 0 | 0 | 0 | 0 | NA |
Std | 0 | 0 | 0 | 0 | 0 | ||
Rastrigin | Mean | 5.430e-17 | 2.39 | 2.57e-11 | 5.8e-15 | 0 | + |
Std | 1.683e-16 | 3.71 | 6.64e-11 | 1.01e-14 | 0 | ||
Noncontinous Rastrigin | Mean | N | 1.83 | 0.167 | 4.14e-16 | 0 | + |
Std | N | 2.65 | 0.379 | 1.45e-15 | 0 | ||
Ackley | Mean | 5.336e-14 | 2.06e-10 | 2.01e-12 | 1.11e-14 | 3.1123e-15 | + |
Std | 2.945e-13 | 9.45e-10 | 9.22e-13 | 3.55e-15 | 1.0021e-15 | ||
Griewank | Mean | 1.317e-02 | 1.07e-02 | 6.45e-13 | 1.67e-02 | 0 | + |
Std | 1.561e-02 | 1.14e-02 | 2.07e-12 | 2.41e-02 | 0 |
Average ranking of the optimization algorithms based on performance in Table 6
Algorithm | OEA | HPSO | CLPSO | APSO | OTLBO |
---|---|---|---|---|---|
Ranking | 3.428571 | 3.71428 | 3.85714286 | 2.71428571 | 1.28571429 |
Experiment 3: OTLBO vs. JADE, jDE, SaDE, CoDE, EPSDE
Performance comparisons of OTLBO, JADE, jDE, SaDE, CoDE, EPSDE
Function | FEs | SaDE | jDE | JADE | CoDE | EPSDE | OTLBO | Significant | |
---|---|---|---|---|---|---|---|---|---|
Sphere | 1.5× 10^{5} | Mean | 4.5e-20 | 2.5e-28 | 1.8e-60 | 1.12e-31 | 1.53e-85 | 0 | + |
Std | 1.9e-14 | 3.5e-28 | 8.4e-60 | 3.45e-31 | 9.01e-86 | 0 |||
Schwefel 2.22 | 2.0× 10^{5} | Mean | 1.9e-14 | 1.5e-23 | 1.8e-25 | 1.22e-23 | 3.18e-54 | 0 | + |
Std | 1.1e-14 | 1.0e-23 | 8.8e-25 | 3.90e-23 | 3.11e-54 | 0 | |||
Schwefel 1.2 | 5.0× 10^{5} | Mean | 9.0e-37 | 5.2e-14 | 5.7e-61 | 7.86e-31 | 4.81e-76 | 0 | + |
Std | 5.4e-36 | 1.1e-13 | 2.7e-60 | 1.86e-32 | 1.90e-76 | 0 | |||
Step | 1.0× 10^{4} | Mean | 9.3e+02 | 1.0e+03 | 2.9e+00 | 3.00e+00 | 0 | 0 | NA |
Std | 1.8e+02 | 2.2e+02 | 1.2e+00 | 1.90e+00 | 0 | 0 |||
Rastrigin | 1.0× 10^{5} | Mean | 1.2e-03 | 1.5e-04 | 1.0e-04 | 1.21e-01 | 0 | 0 | NA |
Std | 6.5e-04 | 2.0e-04 | 6.0e-05 | 3.89e-02 | 0 | 0 | |||
Schwefel 2.21 | 5.0× 10^{5} | Mean | 7.4e-11 | 1.4e-15 | 8.2e-24 | 2.44e-27 | 1.94e-2 | 0 | + |
Std | 1.82e-10 | 1.0e-15 | 4.0e-23 | 1.89e-27 | 8.90e-4 | 0 | |||
Ackley | 5.0× 10^{4} | Mean | 2.7e-03 | 3.5e-04 | 8.2e-10 | 1.18e-04 | 5.36e-13 | 2.78e-15 | + |
Std | 5.1e-04 | 1.0e-04 | 6.9e-10 | 4.90e-04 | 4.77e-14 | 1.56e-15 | |||
Griewank | 5.0× 10^{4} | Mean | 7.8e-04 | 1.9e-05 | 9.9e-08 | 1.74e-07 | 0 | 0 | NA |
Std | 1.2e-03 | 5.8e-05 | 6.0e-07 | 2.33e-07 | 0 | 0 |
Average ranking of the optimization algorithms based on performance in Table 6
Algorithm | SaDE | jDE | JADE | CoDE | EPSDE | OTLBO |
---|---|---|---|---|---|---|
Ranking | 5.375 | 4.875 | 3 | 4.25 | 2.3125 | 1.1875 |
Experiment 4: OTLBO vs. CABC, GABC, RABC and IABC
Performance comparisons of OTLBO, CABC, GABC, RABC and IABC
Function | FEs | CABC | GABC | RABC | IABC | OTLBO | Significant | |
---|---|---|---|---|---|---|---|---|
Sphere | 1.5× 10^{5} | Mean | 2.3e-40 | 3.6e-63 | 9.1e-61 | 5.34e-178 | 0 | + |
Std | 1.7e-40 | 5.7e-63 | 2.1e-60 | 0 | 0 | |||
Schwefel 2.22 | 2.0× 10^{5} | Mean | 3.5e-30 | 4.8e-45 | 3.2e-74 | 8.82e-127 | 0 | + |
Std | 4.8e-30 | 1.4e-45 | 2.0e-73 | 3.49e-126 | 0 | |||
Schwefel 1.2 | 5.0× 10^{5} | Mean | 8.4e+02 | 4.3e+02 | 2.9e-24 | 1.78e-65 | 0 | + |
Std | 9.1e+02 | 8.0e+02 | 1.5e-23 | 2.21e-65 | 0 | |||
Step | 1.0× 10^{4} | Mean | 0 | 0 | 0 | 0 | 0 | NA |
Std | 0 | 0 | 0 | 0 | 0 | |||
Rastrigin | 5.0× 10^{4} | Mean | 1.3e+00 | 1.5e-10 | 2.3e-02 | 0 | 0 | + |
Std | 2.7e+00 | 2.7e-10 | 5.1e-01 | 0 | 0 |||
Schwefel 2.21 | 5.0× 10^{5} | Mean | 6.1e-03 | 3.6e-06 | 2.8e-02 | 4.98e-38 | 0 | + |
Std | 5.7e-03 | 7.6e-07 | 1.7e-02 | 8.59e-38 | 0 | |||
Ackley | 5.0× 10^{4} | Mean | 1.0e-05 | 1.8e-09 | 9.6e-07 | 3.87e-14 | 2.7812e-15 | + |
Std | 2.4e-06 | 7.7e-10 | 8.3e-07 | 8.52e-15 | 1.5611e-15 | |||
Griewank | 5.0× 10^{4} | Mean | 1.2e-04 | 6.0e-13 | 8.7e-08 | 0 | 0 | + |
Std | 4.6e-04 | 7.7e-13 | 2.1e-08 | 0 | 0 |
Average ranking of the optimization algorithms based on performance in Table 7
Algorithm | CABC | GABC | RABC | IABC | OTLBO |
---|---|---|---|---|---|
Ranking | 4.625 | 3.25 | 3.75 | 2.00 | 1.375 |
Conclusions and further study
In this work, an orthogonal design approach is implemented within the basic Teaching-Learning based Optimization (TLBO) algorithm to optimize global benchmark functions. The proposed approach is called Orthogonal TLBO (OTLBO). The benefit of our proposed approach is that it makes basic TLBO faster: orthogonal design makes the search through a large sample space efficient in arriving at the optimum solution. The paper discusses the fundamentals of orthogonal design and its framework within TLBO. Performance comparisons are made with TLBO and other evolutionary computation techniques such as particle swarm optimization (PSO), differential evolution (DE), the artificial bee colony (ABC), and several variants of these algorithms suggested by other researchers. From the analysis of the results, it is evident that OTLBO outperforms all other approaches, including basic TLBO, for all benchmark functions investigated in our work. The efficiency of the proposed approach is also compared with the other algorithms in terms of the number of function evaluations (FEs). We conclude that OTLBO is a very powerful approach for optimizing different types of problems (separable, non-separable, unimodal, and multimodal), providing quality optimum results with faster convergence than popular evolutionary techniques such as PSO, DE, ABC, and their variants. As further research, it remains to be seen how this approach adapts to multi-objective optimization problems; engineering applications from the mechanical, chemical, or data mining domains may also be investigated.
Declarations
Authors’ Affiliations
References
- Alatas B: Chaotic bee colony algorithms for global numerical optimization. Expert Syst Appl 2010, 37: 5682-5687. 10.1016/j.eswa.2010.02.042View ArticleGoogle Scholar
- Das S, Abraham A, Chakraborty UK, Konar A: Differential evolution using a neighborhood-based mutation operator. IEEE Trans Evol Comput 2009, 13: 526-553.View ArticleGoogle Scholar
- Ding CM, Zhang CS, Liu GZ: Orthogonal experimental genetic algorithms and its application in function optimization. Systems Engineering and Electronics 1997, 10: 57-60.Google Scholar
- Fang KT, Ma CX: Orthogonal and Uniform Design. Science Press, Beijing; 2001.Google Scholar
- Fogel B: Evolutionary Computation: Towards a New Philosophy of Machine Learning. IEEE press, Piscataway, NJ; 1995.Google Scholar
- Gao W, Liu S: Improved artificial bee colony algorithm for global optimization. Inf Process Lett 2011, 111: 871-882. 10.1016/j.ipl.2011.06.002View ArticleGoogle Scholar
- Kang F: J. J Li, Z.Y. Ma, Rosenbrock artificial bee colony algorithm for accurate global optimization of numerical functions. Inform Sci 2011, 12: 3508-3531.View ArticleGoogle Scholar
- Kang F, Li JJ, Ma ZY: Rosenbrock artificial bee colony algorithm for accurate global optimization of numerical functions. Inform Sci 12, 2011: 3508-3531.Google Scholar
- Kui-fan SHI, Ji-wen DONG, Jin-ping LI, et al.: Orthogonal genetic algorithm (in Chinese). Acta Electronica Sinica 2002, 10:1501-1504.
- Kwon-Hee L, Jeong-Wook Y, Joon-Seong P, et al.: An optimization algorithm using orthogonal arrays in discrete design space for structures. Finite Elements in Analysis and Design 2003, 40:121-135. doi:10.1016/S0168-874X(03)00095-7
- Leung YW, Wang Y: An orthogonal genetic algorithm with quantization for global numerical optimization. IEEE Trans Evol Comput 2001, 5:41-53. doi:10.1109/4235.910464
- Mallipeddi R, Suganthan PN, Pan QK, Tasgetiren MF: Differential evolution algorithm with ensemble of parameters and mutation strategies. Applied Soft Computing 2011, 11(2):1679-1696.
- Rao RV, Kalyankar VD: Parameter optimization of machining processes using a new optimization algorithm. Mater Manuf Process 2012. doi:10.1080/10426914.2011.602792
- Rao RV, Patel VK: Multi-objective optimization of combined Brayton and inverse Brayton cycles using advanced optimization algorithms. Eng Optim 2012. doi:10.1080/0305215X.2011.624183
- Rao RV, Savsani VJ: Teaching learning based optimization algorithm for constrained and unconstrained real parameter optimization problems. Eng Optim 2012. doi:10.1080/0305215X.2011.652103
- Rao RV, Savsani VJ: Mechanical Design Optimization Using Advanced Optimization Techniques. Springer-Verlag, London, UK; 2012.
- Rao RV, Savsani VJ, Vakharia DP: Teaching-learning-based optimization: a novel method for constrained mechanical design optimization problems. Computer-Aided Design 2011, 43(3):303-315. doi:10.1016/j.cad.2010.12.015
- Rao RV, Savsani VJ, Vakharia DP: Teaching-learning-based optimization: an optimization method for continuous non-linear large scale problems. 2011 (in press).
- Rao RV, Savsani VJ, Vakharia DP: Teaching-learning-based optimization: an optimization method for continuous non-linear large scale problems. Inform Sci 2012, 183(1):1-15. doi:10.1016/j.ins.2011.08.006
- Ratnaweera A, Halgamuge S, Watson H: Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients. IEEE Trans Evol Comput 2004, 8:240-255. doi:10.1109/TEVC.2004.826071
- Sanyou Z, Hui S, Lishan K, Lixin D: Orthogonal dynamic hill climbing algorithm: ODHC. In Evolutionary Computation in Dynamic and Uncertain Environments. 2007:79-104.
- San-You ZENG, Wei WEI, Li-Shan KANG, et al.: A multi-objective evolutionary algorithm based on orthogonal design (in Chinese). Chinese Journal of Computers 2005, 28:1153-1162.
- Shi YH, Eberhart RC: Comparison between genetic algorithms and particle swarm optimization. In Proc. 7th Int. Conf. Evol. Program., LNCS 1447. 1998:611-616.
- Shinn-Ying H, Hung-Sui L, Weei-Hurng L, et al.: OPSO: orthogonal particle swarm optimization and its application to task assignment problems. IEEE Trans Syst Man Cybern A Syst Humans 2008, 38:288-298.
- Suresh Chandra S, Anima N: Data clustering based on teaching-learning based optimization. In Swarm, Evolutionary, and Memetic Computing, LNCS 7077. 2011:148-156. doi:10.1007/978-3-642-27242-4_18
- Suresh Chandra S, et al.: Improved teaching learning based optimization for global function optimization. Decision Science Letters 2012, 2. doi:10.5267/j.dsl.2012.10.005
- Swagatam D, Ajith A, Amit K: Automatic clustering using an improved differential evolution algorithm. IEEE Trans Syst Man Cybern A Syst Humans 2008, 38(1).
- Vedat T: Design of planar steel frames using teaching-learning based optimization. Engineering Structures 2012, 34:225-232.
- Wang Y, Liu H, Cai Z, Zhou Y: An orthogonal design based constrained evolutionary optimization algorithm. Eng Optim 2007, 39(6):715-736. doi:10.1080/03052150701280541
- Wang Y, Cai Z, Zhang Q: Differential evolution with composite trial vector generation strategies and control parameters. IEEE Trans Evol Comput 2011, 15(1):55-66.
- Wang Y, Cai Z, Zhang Q: Enhancing the search ability of differential evolution through orthogonal crossover. Inform Sci 2012, 185(1):153-177. doi:10.1016/j.ins.2011.09.001
- Wenyin G, Zhihua C, Charles L: ODE: a fast and robust differential evolution based on orthogonal design. In Proc. 19th Australian Joint Conference on Artificial Intelligence, Hobart, Australia; 2006:709-718.
- Zhan ZH, Zhang J, Li Y, Chung SH: Adaptive particle swarm optimization. IEEE Trans Syst Man Cybern B Cybern 2009, 39:1362-1381.
- Zhu GP, Kwong S: Gbest-guided artificial bee colony algorithm for numerical function optimization. Appl Math Comput 2010, 217:3166-3173. doi:10.1016/j.amc.2010.08.049
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.