Rao Algorithms &
R-method
Prof. Dr Hab. Ravipudi Venkata Rao
B.Tech., M.Tech., Ph.D., D.Sc. (Poland)
Professor (Higher Administrative Grade),
Department of Mechanical Engineering
Dean (Faculty Welfare)
Sardar Vallabhbhai National Institute of Technology (SV NIT)
{An Institute of National Importance of Government of India}
Ichchanath, Surat-395 007, Gujarat State, INDIA.
Phones: 919925207027 (Cell); 912612201661 (R)
scholar.google.com/citations?hl=en&user=4NoqGCEAAAAJ&view_op=list_works
Professor Ravipudi Venkata Rao of Sardar Vallabhbhai National Institute of Technology, Surat, Gujarat, India has developed three advanced optimization algorithms, named the "Rao Algorithms".
The details of the algorithms are given below:
In recent years, the field of population-based meta-heuristic algorithms has been flooded with ‘new’ algorithms based on metaphors of natural phenomena or the behavior of animals, fishes, insects, societies, cultures, planets, musical instruments, etc. New optimization algorithms appear every month, and their authors claim that the proposed algorithms are ‘better’ than the others. Some of these newly proposed algorithms die naturally as there are no takers, and some have achieved success to a certain extent. However, this type of research may be considered a threat and may not contribute to advancing the field of optimization. It would be better if researchers focused on developing simple optimization techniques that can provide effective solutions to complex problems, instead of developing metaphor-based algorithms. Keeping this point in view, three simple, metaphor-less and algorithm-specific-parameter-less optimization algorithms have been developed.
- Proposed algorithms
Let f(x) be the objective function to be minimized (or maximized). At any iteration i, assume that there are m design variables and n candidate solutions (i.e. the population size, k = 1, 2, …, n). Let the best candidate obtain the best value of f(x) (i.e. f(x)best) among all the candidate solutions, and let the worst candidate obtain the worst value of f(x) (i.e. f(x)worst) among all the candidate solutions. If Xj,k,i is the value of the j-th variable for the k-th candidate during the i-th iteration, then this value is modified as per the following equations.
Equation (1) (Rao-1 algorithm):
X'j,k,i = Xj,k,i + r1,j,i (Xj,best,i - Xj,worst,i)
Equation (2) (Rao-2 algorithm):
X'j,k,i = Xj,k,i + r1,j,i (Xj,best,i - Xj,worst,i) + r2,j,i (|Xj,k,i or Xj,l,i| - |Xj,l,i or Xj,k,i|)
Equation (3) (Rao-3 algorithm):
X'j,k,i = Xj,k,i + r1,j,i (Xj,best,i - |Xj,worst,i|) + r2,j,i (|Xj,k,i or Xj,l,i| - (Xj,l,i or Xj,k,i))
where Xj,best,i is the value of variable j for the best candidate and Xj,worst,i is the value of variable j for the worst candidate during the i-th iteration. X'j,k,i is the updated value of Xj,k,i, and r1,j,i and r2,j,i are two random numbers in the range [0, 1] for the j-th variable during the i-th iteration. In Eqs. (2) and (3), the term "Xj,k,i or Xj,l,i" indicates that candidate solution k is compared with a randomly picked candidate solution l, and information is exchanged based on their fitness values. If the fitness value of the k-th solution is better than that of the l-th solution, the term "Xj,k,i or Xj,l,i" becomes Xj,k,i; if the fitness value of the l-th solution is better, it becomes Xj,l,i. Similarly, if the fitness value of the k-th solution is better than that of the l-th solution, the term "Xj,l,i or Xj,k,i" becomes Xj,l,i; if the fitness value of the l-th solution is better, it becomes Xj,k,i.
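In code, the selection between the "Xj,k,i or Xj,l,i" terms and the three update rules can be sketched as follows (a minimal illustration for a single variable of candidate k, assuming minimization; the function name and arguments are illustrative, not taken from the published codes):

```python
import random

def rao_update(x_k, x_l, x_best, x_worst, f_k, f_l, variant=1):
    """Update one variable of candidate k (minimization assumed).

    x_k, x_l       : values of variable j for candidates k and l
    x_best, x_worst: values of variable j for the best/worst candidates
    f_k, f_l       : fitness values of candidates k and l
    """
    r1, r2 = random.random(), random.random()
    # "Xj,k,i or Xj,l,i" picks the variable of the fitter of the two candidates;
    # "Xj,l,i or Xj,k,i" picks the variable of the less fit one.
    fitter, other = (x_k, x_l) if f_k < f_l else (x_l, x_k)
    if variant == 1:    # Eq. (1), Rao-1
        return x_k + r1 * (x_best - x_worst)
    elif variant == 2:  # Eq. (2), Rao-2
        return x_k + r1 * (x_best - x_worst) + r2 * (abs(fitter) - abs(other))
    else:               # Eq. (3), Rao-3
        return x_k + r1 * (x_best - abs(x_worst)) + r2 * (abs(fitter) - other)
```

Note that with x_best = x_worst and x_k = x_l all three rules leave the variable unchanged, which is a quick sanity check of the implementation.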
The flowchart of Rao-1 algorithm is shown in Fig.1.
These three algorithms are based on the best and worst solutions in the population and on random interactions between the candidate solutions. Like the TLBO algorithm (Rao, 2015) and the Jaya algorithm (Rao, 2016; Rao, 2019), these algorithms do not require any algorithm-specific parameters, and thus the designer's burden of tuning algorithm-specific parameters to get the best results is eliminated. The algorithms are named Rao-1, Rao-2 and Rao-3, respectively. The flowchart is the same for the Rao-2 and Rao-3 algorithms, except that Eq. (1) shown in the flowchart is replaced by Eq. (2) and Eq. (3), respectively.
A step-by-step demonstration of the working of the proposed Rao algorithms is given in the first paper provided under the Research Articles link, using a standard benchmark function known as the Sphere function. You may download the paper and go through the steps given there to understand the working of the algorithms.
In the Self-Adaptive Multi-Population (SAMP)-Rao algorithms, the following modifications are made to the basic Rao algorithms:
(i) The proposed SAMP-Rao algorithms use a number of sub-populations, splitting the total population into groups based on the quality of the solutions. The use of sub-populations spreads the solutions over the search space rather than concentrating them in a particular region; therefore, the proposed algorithms are expected to reach the optimum solution.
(ii) The SAMP-Rao algorithms adaptively change the number of sub-populations during the search process based on the quality of the fitness values; that is, the number of sub-populations is increased or decreased. This feature supports the search for the optimum solution and enhances the diversification of the search process. Furthermore, duplicate solutions are replaced by newly generated solutions to maintain diversity and enhance exploration.
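The grouping idea in (i) and the adaptive change in (ii) can be sketched as follows (a simplified illustration only, not the published SAMP-Rao code; the function names and the exact adaptation policy shown here are assumptions):

```python
import numpy as np

def split_into_groups(population, fitness, n_groups):
    """Sort candidates by fitness (best first, minimization) and deal them
    into n_groups sub-populations so each group spans the quality range."""
    order = np.argsort(fitness)                     # best-to-worst indices
    groups = [order[g::n_groups] for g in range(n_groups)]
    return [population[idx] for idx in groups]

def adapt_group_count(n_groups, improved, n_min=1, n_max=8):
    """One plausible adaptation policy (assumption): add a group when the
    best fitness improved, otherwise merge groups back together."""
    return min(n_groups + 1, n_max) if improved else max(n_groups - 1, n_min)
```

The round-robin dealing keeps each sub-population a mix of good and poor solutions; replacing duplicates with fresh random solutions, as in (ii), would be an extra step after each iteration.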
The MATLAB codes of the Rao algorithms and the Self-Adaptive Multi-Population (SAMP)-Rao algorithms for the unconstrained and constrained benchmark functions can be downloaded from the following links. Users are requested to use these codes as reference codes.
The flowchart of SAMP-Rao algorithm is given below.
PYTHON codes for the Rao-1, Rao-2 and Rao-3 algorithms for the 'Sphere' benchmark function are given below.
'''
Rao-1 Algorithm code for the Sphere function
'''
import random            # random module for r1
import math              # math module required for the floor function
import numpy as np       # NumPy for its ndarrays

run, Runs = 0, 10                          # set the number of runs required
best_val = np.zeros(Runs)                  # to store the best score of each run
while run < Runs:
    maxfes = 10000                         # maximum function evaluations
    dim = 30                               # number of dimensions/design variables
    pop_size = 10                          # population size
    max_iter = math.floor(maxfes/pop_size) # maximum number of iterations
    lb = -100*np.ones(dim)                 # lower bound
    ub = 100*np.ones(dim)                  # upper bound

    def fitness(particle):                 # objective function
        y = 0
        for i in range(dim):
            y = y + particle[i]**2         # Sphere function
        return y

    Positions = np.zeros((pop_size, dim))  # population
    best_pos = np.zeros(dim)               # population's best position
    worst_pos = np.zeros(dim)              # population's worst position
    finval = np.zeros(max_iter)            # best value of each iteration
    f1 = np.zeros(pop_size)                # function values of current population
    f2 = np.zeros(pop_size)                # function values of updated population

    # assign random values to the population within the bounds
    for i in range(dim):
        Positions[:, i] = np.random.uniform(0, 1, pop_size)*(ub[i]-lb[i]) + lb[i]

    for k in range(max_iter):
        best_score = float("inf")
        worst_score = float("-inf")
        for i in range(pop_size):
            # return the solutions that go beyond the bounds to the bounds
            for j in range(dim):
                Positions[i, j] = np.clip(Positions[i, j], lb[j], ub[j])
            f1[i] = fitness(Positions[i, :])  # evaluate the population
            # update the best and worst scores and positions
            if f1[i] < best_score:            # change the sign for maximization
                best_score = f1[i].copy()
                best_pos = Positions[i, :].copy()
            if f1[i] > worst_score:           # change the sign for maximization
                worst_score = f1[i].copy()
                worst_pos = Positions[i, :].copy()
        finval[k] = best_score                # store the best score
        # print the best value every 500th iteration
        if (k+1) % 500 == 0:
            print("For run", run+1, "the best solution is:", best_score,
                  "in iteration number:", k+1)
        Positioncopy = Positions.copy()       # copy the values to compare later
        for i in range(pop_size):
            for j in range(dim):
                # use the Rao-1 update (Eq. (1)) to find new values
                r1 = random.random()
                Positions[i, j] = Positions[i, j] + r1*(best_pos[j]-worst_pos[j])
                Positions[i, j] = np.clip(Positions[i, j], lb[j], ub[j])
            f2[i] = fitness(Positions[i, :])
        # compare new values with old ones and keep the better ones
        for i in range(pop_size):
            if f1[i] < f2[i]:                 # change the sign for maximization
                Positions[i, :] = Positioncopy[i, :]
    best_score = np.amin(finval)              # minimum over all iterations
    print("The best solution for run", run+1, "is:", best_score)
    best_val[run] = best_score                # store the best score for this run
    run += 1                                  # increment run
print("The Best solution is:", np.min(best_val))
print("The Worst solution is:", np.max(best_val))
print("The Mean is:", np.mean(best_val))
print("The Standard Deviation is:", np.std(best_val))
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
'''
Rao-2 Algorithm code for the Sphere function
'''
import random            # random module for r1 and r2
import math              # math module required for the floor function
import numpy as np       # NumPy for its ndarrays

run, Runs = 0, 10                          # set the number of runs required
best_val = np.zeros(Runs)                  # to store the best score of each run
while run < Runs:
    maxfes = 10000                         # maximum function evaluations
    dim = 30                               # number of dimensions/design variables
    pop_size = 10                          # population size
    max_iter = math.floor(maxfes/pop_size) # maximum number of iterations
    lb = -100*np.ones(dim)                 # lower bound
    ub = 100*np.ones(dim)                  # upper bound

    def fitness(particle):                 # objective function
        y = 0
        for i in range(dim):
            y = y + particle[i]**2         # Sphere function
        return y

    Positions = np.zeros((pop_size, dim))  # population
    best_pos = np.zeros(dim)               # population's best position
    worst_pos = np.zeros(dim)              # population's worst position
    finval = np.zeros(max_iter)            # best value of each iteration
    f1 = np.zeros(pop_size)                # function values of current population
    f2 = np.zeros(pop_size)                # function values of updated population

    # assign random values to the population within the bounds
    for i in range(dim):
        Positions[:, i] = np.random.uniform(0, 1, pop_size)*(ub[i]-lb[i]) + lb[i]

    for k in range(max_iter):
        best_score = float("inf")
        worst_score = float("-inf")
        for i in range(pop_size):
            # return the solutions that go beyond the bounds to the bounds
            for j in range(dim):
                Positions[i, j] = np.clip(Positions[i, j], lb[j], ub[j])
            f1[i] = fitness(Positions[i, :])  # evaluate the population
            # update the best and worst scores and positions
            if f1[i] < best_score:            # change the sign for maximization
                best_score = f1[i].copy()
                best_pos = Positions[i, :].copy()
            if f1[i] > worst_score:           # change the sign for maximization
                worst_score = f1[i].copy()
                worst_pos = Positions[i, :].copy()
        finval[k] = best_score                # store the best score
        # print the best value every 500th iteration
        if (k+1) % 500 == 0:
            print("For run", run+1, "the best solution is:", best_score,
                  "in iteration number:", k+1)
        Positioncopy = Positions.copy()       # copy the values to compare later
        for i in range(pop_size):
            r = np.random.randint(pop_size)   # random candidate for comparison
            # ensure the solution is not compared with itself
            while r == i:
                r = np.random.randint(pop_size)
            # use the Rao-2 update (Eq. (2)) to find new values
            if f1[i] < f1[r]:
                for j in range(dim):
                    r1 = random.random()      # random value between 0 and 1
                    r2 = random.random()      # random value between 0 and 1
                    Positions[i, j] = (Positioncopy[i, j]
                                       + r1*(best_pos[j]-worst_pos[j])
                                       + r2*(np.abs(Positioncopy[i, j])
                                             - np.abs(Positioncopy[r, j])))
                    Positions[i, j] = np.clip(Positions[i, j], lb[j], ub[j])
            else:
                for j in range(dim):
                    r1 = random.random()      # random value between 0 and 1
                    r2 = random.random()      # random value between 0 and 1
                    Positions[i, j] = (Positioncopy[i, j]
                                       + r1*(best_pos[j]-worst_pos[j])
                                       + r2*(np.abs(Positioncopy[r, j])
                                             - np.abs(Positioncopy[i, j])))
                    Positions[i, j] = np.clip(Positions[i, j], lb[j], ub[j])
            f2[i] = fitness(Positions[i, :])
        # compare new values with old ones and keep the better ones
        for i in range(pop_size):
            if f1[i] < f2[i]:                 # change the sign for maximization
                Positions[i, :] = Positioncopy[i, :]
    best_score = np.amin(finval)              # minimum over all iterations
    print("The best solution for run", run+1, "is:", best_score)
    best_val[run] = best_score                # store the best score for this run
    run += 1                                  # increment run
print("The Best solution is:", np.min(best_val))
print("The Worst solution is:", np.max(best_val))
print("The Mean is:", np.mean(best_val))
print("The Standard Deviation is:", np.std(best_val))
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
'''
Rao-3 Algorithm code for the Sphere function
'''
import random            # random module for r1 and r2
import math              # math module required for the floor function
import numpy as np       # NumPy for its ndarrays

run, Runs = 0, 10                          # set the number of runs required
best_val = np.zeros(Runs)                  # to store the best score of each run
while run < Runs:
    maxfes = 10000                         # maximum function evaluations
    dim = 30                               # number of dimensions/design variables
    pop_size = 10                          # population size
    max_iter = math.floor(maxfes/pop_size) # maximum number of iterations
    lb = -100*np.ones(dim)                 # lower bound
    ub = 100*np.ones(dim)                  # upper bound

    def fitness(particle):                 # objective function
        y = 0
        for i in range(dim):
            y = y + particle[i]**2         # Sphere function
        return y

    Positions = np.zeros((pop_size, dim))  # population
    best_pos = np.zeros(dim)               # population's best position
    worst_pos = np.zeros(dim)              # population's worst position
    finval = np.zeros(max_iter)            # best value of each iteration
    f1 = np.zeros(pop_size)                # function values of current population
    f2 = np.zeros(pop_size)                # function values of updated population

    # assign random values to the population within the bounds
    for i in range(dim):
        Positions[:, i] = np.random.uniform(0, 1, pop_size)*(ub[i]-lb[i]) + lb[i]

    for k in range(max_iter):
        best_score = float("inf")
        worst_score = float("-inf")
        for i in range(pop_size):
            # return the solutions that go beyond the bounds to the bounds
            for j in range(dim):
                Positions[i, j] = np.clip(Positions[i, j], lb[j], ub[j])
            f1[i] = fitness(Positions[i, :])  # evaluate the population
            # update the best and worst scores and positions
            if f1[i] < best_score:            # change the sign for maximization
                best_score = f1[i].copy()
                best_pos = Positions[i, :].copy()
            if f1[i] > worst_score:           # change the sign for maximization
                worst_score = f1[i].copy()
                worst_pos = Positions[i, :].copy()
        finval[k] = best_score                # store the best score
        # print the best value every 500th iteration
        if (k+1) % 500 == 0:
            print("For run", run+1, "the best solution is:", best_score,
                  "in iteration number:", k+1)
        Positioncopy = Positions.copy()       # copy the values to compare later
        for i in range(pop_size):
            r = np.random.randint(pop_size)   # random candidate for comparison
            # ensure the solution is not compared with itself
            while r == i:
                r = np.random.randint(pop_size)
            # use the Rao-3 update (Eq. (3)) to find new values
            if f1[i] < f1[r]:
                for j in range(dim):
                    r1 = random.random()      # random value between 0 and 1
                    r2 = random.random()      # random value between 0 and 1
                    Positions[i, j] = (Positioncopy[i, j]
                                       + r1*(best_pos[j]-np.abs(worst_pos[j]))
                                       + r2*(np.abs(Positioncopy[i, j])
                                             - Positioncopy[r, j]))
                    Positions[i, j] = np.clip(Positions[i, j], lb[j], ub[j])
            else:
                for j in range(dim):
                    r1 = random.random()      # random value between 0 and 1
                    r2 = random.random()      # random value between 0 and 1
                    Positions[i, j] = (Positioncopy[i, j]
                                       + r1*(best_pos[j]-np.abs(worst_pos[j]))
                                       + r2*(np.abs(Positioncopy[r, j])
                                             - Positioncopy[i, j]))
                    Positions[i, j] = np.clip(Positions[i, j], lb[j], ub[j])
            f2[i] = fitness(Positions[i, :])
        # compare new values with old ones and keep the better ones
        for i in range(pop_size):
            if f1[i] < f2[i]:                 # change the sign for maximization
                Positions[i, :] = Positioncopy[i, :]
    best_score = np.amin(finval)              # minimum over all iterations
    print("The best solution for run", run+1, "is:", best_score)
    best_val[run] = best_score                # store the best score for this run
    run += 1                                  # increment run
print("The Best solution is:", np.min(best_val))
print("The Worst solution is:", np.max(best_val))
print("The Mean is:", np.mean(best_val))
print("The Standard Deviation is:", np.std(best_val))
-------------------------------------------------------------------------------------------------------------------------------------------
Multi-objective optimization using Rao algorithms is explained in a paper published in the Springer journal Engineering with Computers. Please refer to the Articles link or
https://link.springer.com/article/10.1007/s00366-020-01008-9
You may also refer to the following research papers:
R.V. Rao, H.S. Keesari, A self-adaptive population Rao algorithm for optimization of selected bio-energy systems, Journal of Computational Design and Engineering, 8(1), 2021, 69-96.
R.V. Rao, R.B. Pawar, S. Khatir, T.C. Le, Weight optimization of a truss structure using Rao algorithms and their variants, Structural Health Monitoring and Engineering Structures, 3-18.
R.V. Rao, R.B. Pawar, Journal of Computational Design and Engineering, 7(6), 2020, 830-863.
R.V. Rao, R.B. Pawar, Constrained design optimization of selected mechanical system components using Rao algorithms, Applied Soft Computing, 89, 2020, 106141.
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Information about TLBO and Jaya algorithms:
1. For the Teaching-Learning-Based Optimization (TLBO) algorithm developed by Professor Rao, please refer to:
https://sites.google.com/site/tlborao/
The following book on the TLBO algorithm was published in 2016:
R. V. Rao (2016). Teaching-Learning-Based Optimization (TLBO) Algorithm And Its Engineering Applications. Springer International Publishing, Switzerland.
2. For the Jaya algorithm developed by Professor Rao, please refer to:
https://sites.google.com/site/jayaalgorithm/
The following book on the Jaya algorithm was published in 2019:
R. V. Rao (2019). Jaya: An Advanced Optimization Algorithm and its Engineering Applications. Springer International Publishing, Switzerland.
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
R-method
A new multi-attribute decision-making (MADM) method, named the "R-method", was developed and published in 2021.
Please refer to the following two papers (these can be downloaded free):
Ranking of Pareto-optimal solutions and selecting the best solution in multi- and many-objective optimization problems using R-method. Soft Computing Letters, 3 (2021) 100015.
https://linkinghub.elsevier.com/retrieve/pii/S2666222121000058
R-method: A simple ranking method for multi-attribute decision-making in the industrial environment
Journal of Project Management, 6 (4) (2021), 1-8.
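In brief, the R-method assigns rank i a weight proportional to 1/(1 + 1/2 + … + 1/i), normalised so the weights sum to 1; the composite score of an alternative is then the weighted sum of its attribute-rank weights, and alternatives are ranked by descending composite score. A minimal Python sketch of this scheme (illustrative names; it ignores the tied-rank handling implemented in the full MATLAB code):

```python
import numpy as np

def r_method_weights(n):
    """R-method rank weights: w_i proportional to 1/(1 + 1/2 + ... + 1/i),
    normalised to sum to 1 (so rank 1 gets the largest weight)."""
    harmonic = np.cumsum(1.0 / np.arange(1, n + 1))  # partial harmonic sums
    c = 1.0 / harmonic
    return c / c.sum()

def r_method_scores(alt_ranks, obj_ranks):
    """Composite scores from alternative ranks (rows = alternatives,
    columns = objectives) and objective ranks; higher score is better."""
    n_alt, n_obj = alt_ranks.shape
    w_alt = r_method_weights(n_alt)      # weights for alternative ranks
    w_obj = r_method_weights(n_obj)      # weights for objective ranks
    score_matrix = w_alt[alt_ranks - 1]  # replace each rank by its weight
    rw = w_obj[obj_ranks - 1]            # objective-rank weights
    return score_matrix @ rw             # composite score per alternative
```

For example, with two alternatives ranked [[1, 2], [2, 1]] on two objectives ranked [1, 2], the first alternative obtains the higher composite score because it wins on the more important objective.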
The MATLAB code for the R-method is given below:
format long
format loose
clc
clear all
disp('Enter the number of objectives')
n=input('Number of objectives =');
disp('Enter the number of alternatives')
r=input('Number of alternatives=');
N=n;
c=N;
n=1;
k=1;
i=1;
disp(sprintf('Please enter data for rank of %d Alternative',i))
while (i<=r)
j=1;
while (j<=c)
data(i,j)=input(sprintf('Enter the data for rank of %d Alternatives and %d Objectives=',i,j));
j=j+1;
end
i=i+1;
if (i<=r)
disp(sprintf('please enter the data for rank of %d Alternative',i))
end
end
%% generation of weight and assignment to rank given by the user %%
n=1;
k=1;
i=1;
j=1;
ss=max(max(data));
while (n<=max(N,r))
for j=1:n
a(j,k)=1/j;
end
b(1,k)=a(1,k);
i=2;
while(i<=n)
b(i,k)= b((i-1),k)+a(i,k);
i=i+1;
end
for i=1:n
c(i,k) =1/(b(i,k));
end
s(1,k)=0;
for i=1:n
s(1,k)=s(1,k)+(c(i,k));
end
for i=1:n
w(i,k)=c(i,k)/s(1,k);
end
n=n+1;
k=k+1;
end
disp(w(:,N));
%% replace integer ranks of alternatives by the corresponding weights %%
for (jj=1:N)
for (j=1:max(N,r))
aa=0;
for (i=1:r)
if j==data(i,jj)
aa=aa+1;
end
end
if aa==1
p=find(data(1:r,jj)==j);
data(p,jj)=w(j,ss);
end
if aa>1
tt=aa; % total repetition of rank
aa=aa/2;
aa=aa-0.5;
kk=j-aa;
data(1:r,jj);
p=find(data(1:r,jj)==j);
ww=sum(w(kk:j+aa,ss))/tt;
pp=1;
while (kk<=j+aa && kk>=0)
data(p(pp,1),jj)=ww;
pp=pp+1;
kk=kk+1;
end
end
j=j+1;
end
jj=jj+1;
end
%% handle half (tied) ranks of alternatives %%
for (jj=1:N)
for (j=0.5:max(N,r))
aa=0;
for (i=1:r)
if j==data(i,jj)
aa=aa+1;
end
end
if aa==1
p=find(data(1:r,jj)==j);
data(p,jj)=w(j,ss);
end
if aa>1
tt=aa; % total repetition of rank
aa=aa/2;
aa=aa-0.5;
kk=j-aa;
data(1:r,jj);
p=find(data(1:r,jj)==j);
ww=sum(w(kk:j+aa,ss))/tt;
pp=1;
while (kk<=j+aa && kk>=0)
data(p(pp,1),jj)=ww;
pp=pp+1;
kk=kk+1;
end
end
j=j+1;
end
jj=jj+1;
end
rw=rand(1,N); % initialize objective-rank weights (overwritten below)
for (j=1:N)
disp(sprintf('Please enter %d objectives rank',j))
rr(1,j)=input('Rank =');
%rw(1,j)=w(rr,N);
end
%% replace integer ranks of objectives by the corresponding weights %%
for (i=1:N)
aa=0;
for (j=1:N)
if i==rr(1,j)
aa=aa+1;
end
end
if aa==1
p=find(rr==i);
rw(1,p)=w(i,N);
end
if aa>1
tt=aa; % total repetition of rank
aa=aa/2;
aa=aa-0.5;
kk=i-aa;
p=find(rr==i);
ww=sum(w(kk:i+aa,N))/tt;
pp=1;
while (kk<=i+aa && kk>=0)
rw(1,p(1,pp))=ww;
pp=pp+1;
kk=kk+1;
end
end
i=i+1;
end
j=1;
%% handle half (tied) ranks of objectives %%
for (i=0.5:N)
aa=0;
for (j=1:N)
if i==rr(1,j)
aa=aa+1;
end
end
if aa==1
p=find(rr==i);
rw(1,p)=w(i,N);
end
if aa>1
tt=aa; % total repetition of rank
aa=aa/2;
aa=aa-0.5;
kk=i-aa;
p=find(rr==i);
ww=sum(w(kk:i+aa,N))/tt;
pp=1;
while (kk<=i+aa && kk>=0)
rw(1,p(1,pp))=ww;
pp=pp+1;
kk=kk+1;
end
end
i=i+1;
end
c=N;
i=1;
j=1;
%% generation of Composite score %%
Compositescores=data*rw' % composite score of each alternative
X=Compositescores;
[~,ii]=sort(X,'descend');
[~,rank]=sort(ii) % final ranking of the alternatives