Prof. Dr Hab. Ravipudi Venkata Rao B.Tech., M.Tech., Ph.D., D.Sc. (Poland)
Professor, Department of Mechanical Engineering
Former Dean (Academics) and Head (Mech. Engg. Dept.)
Sardar Vallabhbhai National Institute of Technology (SVNIT)
{An Institute of National Importance of Government of India}
Ichchanath, Surat-395 007, Gujarat State, INDIA
Phones: +91 9925207027 (Cell); +91 261 2201661 (R)
E-mail: ravipudirao@gmail.com
https://scholar.google.com/citations?hl=en&user=4NoqGCEAAAAJ&view_op=list_works

Book on the TLBO algorithm:
Teaching-Learning-Based Optimization (TLBO) Algorithm and Its Engineering Applications. Springer International Publishing, Switzerland, 2016. DOI: 10.1007/978-3-319-22732-0.
Population-based heuristic algorithms fall into two important groups: evolutionary algorithms (EA) and swarm intelligence (SI) based algorithms. Some of the recognized evolutionary algorithms are: Genetic Algorithm (GA), Evolution Strategy (ES), Evolutionary Programming (EP), Differential Evolution (DE), etc. Some of the well-known swarm intelligence based algorithms are: Particle Swarm Optimization (PSO), Shuffled Frog Leaping (SFL), Ant Colony Optimization (ACO), Artificial Bee Colony (ABC), Firefly (FF) algorithm, Cat Swarm Optimization (CSO), Artificial Immune Algorithm (AIA), etc. Besides the evolutionary and swarm intelligence based algorithms, there are other algorithms that work on the principles of different natural phenomena, such as the Harmony Search (HS) algorithm, Gravitational Search Algorithm (GSA), Biogeography-Based Optimization (BBO), Flower Pollination Algorithm (FPA), Ant Lion Optimization (ALO), and Invasive Weed Optimization (IWO).

All the evolutionary and swarm intelligence based algorithms are probabilistic and require common control parameters such as population size, number of generations, elite size, etc. In addition to these common control parameters, different algorithms require their own algorithm-specific control parameters. For example, GA uses mutation probability, crossover probability, and a selection operator; PSO uses inertia weight and the social and cognitive parameters; ABC uses the numbers of onlooker bees, employed bees, and scout bees, and the limit; the HS algorithm uses the harmony memory consideration rate, the pitch adjusting rate, and the number of improvisations. Similarly, the other algorithms such as ES, EP, DE, SFL, ACO, FF, CSO, AIA, GSA, BBO, FPA, ALO, IWO, etc. need tuning of their respective algorithm-specific parameters. Proper tuning of the algorithm-specific parameters is a crucial factor affecting the performance of the above-mentioned algorithms.
Improper tuning of the algorithm-specific parameters either increases the computational effort or yields a local optimal solution. Considering this fact, Rao et al. (2011) introduced the teaching-learning-based optimization (TLBO) algorithm, which does not require any algorithm-specific parameters. The TLBO algorithm requires only the common control parameters, such as population size and number of generations, for its working, and it has gained wide acceptance among optimization researchers.

The TLBO algorithm is inspired by the teaching-learning process and is based on the influence of a teacher on the output of learners in a class. The algorithm describes two basic modes of learning: (i) through the teacher (known as the teacher phase) and (ii) through interaction with the other learners (known as the learner phase). In this optimization algorithm, a group of learners is considered as the population, the different subjects offered to the learners are considered as the different design variables of the optimization problem, and a learner's result is analogous to the 'fitness' value of the optimization problem. The best solution in the entire population is considered as the teacher. The design variables are the parameters involved in the objective function of the given optimization problem, and the best solution is the best value of the objective function. The working of TLBO is divided into two parts, the 'Teacher phase' and the 'Learner phase', which are explained below.

1. Teacher phase

This is the first part of the algorithm, where learners learn through the teacher.
During this phase, a teacher tries to increase the mean result of the class in the subject taught by him or her, depending on his or her capability. At any iteration i, assume that there are 'm' subjects (i.e. design variables) and 'n' learners (i.e. the population size; k = 1, 2, …, n), and let M_{j,i} be the mean result of the learners in a particular subject 'j' (j = 1, 2, …, m). The best overall result X_{total-kbest,i}, considering all the subjects together, obtained in the entire population of learners can be considered as the result of the best learner, kbest. Since the teacher is usually considered a highly learned person who trains learners so that they can achieve better results, the best learner identified is taken by the algorithm as the teacher. The difference between the existing mean result of each subject and the corresponding result of the teacher for that subject is given by

Difference_Mean_{j,k,i} = r_{i} (X_{j,kbest,i} - T_{F} M_{j,i})    (1)
where X_{j,kbest,i} is the result of the best learner in subject j, T_{F} is the teaching factor, which decides the value of the mean to be changed, and r_{i} is a random number in the range [0, 1]. The value of T_{F} can be either 1 or 2 and is decided randomly with equal probability as

T_{F} = round [1 + rand(0, 1){2 - 1}]    (2)

T_{F} is not a parameter of the TLBO algorithm. Its value is not given as an input to the algorithm; it is decided randomly by the algorithm using Eq. (2). After conducting a number of experiments on many benchmark functions, it was concluded that the algorithm performs better if the value of T_{F} is between 1 and 2. However, the algorithm is found to perform much better if the value of T_{F} is either 1 or 2, and hence, to simplify the algorithm, the teaching factor is suggested to take the value 1 or 2 depending on the rounding criterion given by Eq. (2). Based on Difference_Mean_{j,k,i}, the existing solution is updated in the teacher phase according to the following expression:

X'_{j,k,i} = X_{j,k,i} + Difference_Mean_{j,k,i}    (3)

where X'_{j,k,i} is the updated value of X_{j,k,i}. X'_{j,k,i} is accepted if it gives a better function value. All the accepted function values at the end of the teacher phase are maintained, and these values become the input to the learner phase; the learner phase thus depends upon the teacher phase.

2. Learner phase

This is the second part of the algorithm, where learners increase their knowledge by interacting among themselves. A learner interacts randomly with other learners to enhance his or her knowledge, and learns new things if the other learner has more knowledge than him or her. Considering a population size of 'n', the learning phenomenon of this phase is expressed below.
Randomly select two learners P and Q such that X'_{total-P,i} ≠ X'_{total-Q,i}, where X'_{total-P,i} and X'_{total-Q,i} are the updated function values of X_{total-P,i} and X_{total-Q,i} of P and Q, respectively, at the end of the teacher phase.
X''_{j,P,i} = X'_{j,P,i} + r_{i} (X'_{j,P,i} - X'_{j,Q,i}),  if X'_{total-P,i} < X'_{total-Q,i}    (4)

X''_{j,P,i} = X'_{j,P,i} + r_{i} (X'_{j,Q,i} - X'_{j,P,i}),  if X'_{total-Q,i} < X'_{total-P,i}    (5)
X''_{j,P,i} is accepted if it gives a better function value. Eqs. (4) and (5) are for minimization problems. In the case of maximization problems, Eqs. (6) and (7) are used:

X''_{j,P,i} = X'_{j,P,i} + r_{i} (X'_{j,P,i} - X'_{j,Q,i}),  if X'_{total-Q,i} < X'_{total-P,i}    (6)

X''_{j,P,i} = X'_{j,P,i} + r_{i} (X'_{j,Q,i} - X'_{j,P,i}),  if X'_{total-P,i} < X'_{total-Q,i}    (7)

The flowchart of the TLBO algorithm is given below:
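The two phases described above can be sketched in a few lines of code. The following is a minimal, illustrative Python sketch of TLBO for a minimization problem; the objective function `sphere`, the variable bounds, and the population settings are assumptions for demonstration and are not part of the original text (the official MATLAB demonstration code is linked further below).

```python
import random

def sphere(x):
    # Illustrative benchmark objective: f(x) = sum of x_j^2, minimum 0 at the origin.
    return sum(v * v for v in x)

def tlbo(objective, bounds, pop_size=20, generations=100, seed=0):
    """Minimize `objective` over box `bounds` using the TLBO teacher and learner phases."""
    rng = random.Random(seed)
    m = len(bounds)  # number of subjects (design variables)

    def clip(x):
        # Keep each design variable within its lower/upper bound.
        return [min(max(v, lo), hi) for v, (lo, hi) in zip(x, bounds)]

    # Initial population of learners and their fitness values.
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [objective(x) for x in pop]

    for _ in range(generations):
        # ---- Teacher phase ----
        kbest = min(range(pop_size), key=lambda k: fit[k])  # teacher = best learner
        mean = [sum(x[j] for x in pop) / pop_size for j in range(m)]
        for k in range(pop_size):
            tf = round(1 + rng.random())  # Eq. (2): T_F is 1 or 2, equal probability
            r = rng.random()
            # Eqs. (1) and (3): move toward the teacher, away from the scaled mean.
            new = clip([pop[k][j] + r * (pop[kbest][j] - tf * mean[j])
                        for j in range(m)])
            f_new = objective(new)
            if f_new < fit[k]:  # greedy acceptance: keep only improvements
                pop[k], fit[k] = new, f_new

        # ---- Learner phase ----
        for p in range(pop_size):
            q = rng.randrange(pop_size)
            while q == p:
                q = rng.randrange(pop_size)
            r = rng.random()
            if fit[p] < fit[q]:
                # Eq. (4): P is better, so move away from Q.
                new = [pop[p][j] + r * (pop[p][j] - pop[q][j]) for j in range(m)]
            else:
                # Eq. (5): Q is better, so move toward Q.
                new = [pop[p][j] + r * (pop[q][j] - pop[p][j]) for j in range(m)]
            new = clip(new)
            f_new = objective(new)
            if f_new < fit[p]:
                pop[p], fit[p] = new, f_new

    best = min(range(pop_size), key=lambda k: fit[k])
    return pop[best], fit[best]
```

For example, `tlbo(sphere, [(-5.0, 5.0)] * 2)` drives the best fitness close to zero; note that, as stated above, only the common control parameters (population size and number of generations) appear as inputs, and the teaching factor is sampled internally.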
To understand the different steps of the TLBO algorithm by means of examples, please refer to the following paper: Review of applications of TLBO algorithm and a tutorial for beginners to solve the unconstrained and constrained optimization problems. Decision Science Letters, 5(1), 1-30. https://drive.google.com/file/d/0B96X2BLz4rx-OVpoaE5ucFEyanM/view?usp=sharing
The working of the TLBO algorithm is explained in this paper, step by step, for unconstrained and constrained standard benchmark functions. You may download the paper from the above link and go through the steps given to understand the working of the algorithm.

The MATLAB code of the TLBO algorithm for the constrained benchmark function G01 of CEC 2006 can be downloaded from the following link. This code is for demonstration. https://drive.google.com/file/d/0B96X2BLz4rx-VUQ3OERMZGFhUjg/view?usp=sharing

Professor R. Venkata Rao has developed another algorithm-specific parameter-less advanced optimization algorithm, named the "Jaya algorithm". Refer to the following website for details: