To assess the effectiveness of our NSGA-III-based search algorithm, we compared our technique with state-of-the-art techniques, namely NSGA-II, NSGA-III, and IBEA. In the following, we describe each of these techniques.
Non-dominated Sorting Genetic Algorithm II (NSGA-II) is an elitist algorithm based on Genetic Algorithms that relies on elitism and crowding distance during the search process. The NSGA-II algorithm requires as input the population size α and the maximum number of generations. First, a randomly generated initial population P of size α is created, and each solution is assigned a fitness value after calculating the objective functions (Lines 1--2). Then, the binary tournament selection, recombination, and mutation operators are applied to create the offspring population P' of size α. The algorithm then combines P and P' and splits the combined pool into fronts (F_1, F_2, ..., F_k) by non-dominated sorting. Finally, NSGA-II conducts a niching process by assigning a crowding distance to each member.
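To make the front-splitting step concrete, the sketch below shows a straightforward fast non-dominated sorting routine for a minimization problem; the NumPy-based helper names (`dominates`, `non_dominated_sort`) are illustrative choices of ours, not part of any reference implementation.

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return np.all(a <= b) and np.any(a < b)

def non_dominated_sort(F):
    """Split a set of objective vectors F (n x m array) into fronts F_1, F_2, ..., F_k.

    Returns a list of index lists; the first list is the best (non-dominated) front.
    """
    n = len(F)
    dominated_by = [[] for _ in range(n)]      # solutions that i dominates
    domination_count = np.zeros(n, dtype=int)  # number of solutions dominating i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(F[i], F[j]):
                dominated_by[i].append(j)
            elif dominates(F[j], F[i]):
                domination_count[i] += 1
        if domination_count[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in dominated_by[i]:
                domination_count[j] -= 1
                if domination_count[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]  # drop the trailing empty front

# toy example with two objectives
F = np.array([[1.0, 4.0], [2.0, 2.0], [3.0, 1.0], [3.0, 3.0], [4.0, 4.0]])
print(non_dominated_sort(F))  # [[0, 1, 2], [3], [4]]
```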
The crowding distance is a metric that measures how far a solution is from the other solutions on the same front. NSGA-II uses this metric in its selection operator to keep a diverse front, preferring solutions that are far from their neighbours. This procedure preserves the diversity of the population and helps the algorithm explore the fitness landscape.
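A minimal sketch of the crowding distance computation for a single front is given below, assuming minimization and a NumPy matrix of objective values; boundary solutions receive an infinite distance so that they are always preferred, as in the original NSGA-II formulation.

```python
import numpy as np

def crowding_distance(F):
    """Crowding distance of each point in one front F (n x m objective matrix).

    Boundary points on each objective get an infinite distance; interior points
    accumulate the normalized side lengths of the cuboid formed by their nearest
    neighbours along each objective.
    """
    n, m = F.shape
    dist = np.zeros(n)
    for j in range(m):
        order = np.argsort(F[:, j])
        f_min, f_max = F[order[0], j], F[order[-1], j]
        dist[order[0]] = dist[order[-1]] = np.inf
        if f_max == f_min:
            continue  # degenerate objective, contributes nothing
        for k in range(1, n - 1):
            dist[order[k]] += (F[order[k + 1], j] - F[order[k - 1], j]) / (f_max - f_min)
    return dist

# toy front: a larger distance means a less crowded, hence preferred, solution
front = np.array([[0.0, 1.0], [0.2, 0.7], [0.5, 0.4], [1.0, 0.0]])
print(crowding_distance(front))  # [inf, 1.1, 1.5, inf]
```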
The following figure shows the non-dominated sorting process, where P_t is the parent population and Q_t is the offspring population at generation t. F_1, F_2, and F_3 are the fronts obtained by sorting the union of P_t and Q_t, where F_1 contains the best solutions of this combination (parents and offspring), F_2 the second best, and so on. However, it is not possible to include the whole of F_3 in P_{t+1}. Therefore, the crowding distance is used and the best (least crowded) solutions of F_3 are selected and added to P_{t+1}. The process runs until the maximum number of generations is reached. After that, the best non-dominated front is returned.
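Putting the two previous sketches together, the following illustrative routine (reusing `non_dominated_sort` and `crowding_distance` from above) shows how the combined pool P_t ∪ Q_t could be truncated to N survivors, with the partially accepted last front filtered by descending crowding distance.

```python
import numpy as np
# assumes non_dominated_sort and crowding_distance from the sketches above

def nsga2_survival(F_parent, F_offspring, N):
    """Select indices of the N survivors from the combined pool R_t = P_t U Q_t.

    Whole fronts are accepted while they fit; the first front that only
    partially fits is truncated by descending crowding distance.
    """
    R = np.vstack([F_parent, F_offspring])  # combined objective values
    survivors = []
    for front in non_dominated_sort(R):
        if len(survivors) + len(front) <= N:
            survivors.extend(front)          # the whole front fits
        else:
            cd = crowding_distance(R[front])
            keep = np.argsort(-cd)[: N - len(survivors)]
            survivors.extend(np.asarray(front)[keep])
            break
    return survivors                         # indices into the combined pool R
```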
NSGA-III
Non-dominated Sorting Genetic Algorithm III (NSGA-III) is a more recent optimization algorithm similar to NSGA-II, but with significant changes in its selection mechanism aimed at improving the results on many-objective problems. Unlike NSGA-II, NSGA-III maintains the diversity among population members by supplying a number of well-spread reference points. NSGA-III has demonstrated its efficacy in solving 2- to 15-objective optimization problems, is easily extended to constrained optimization problems, and can be used with a small population size (such as a population of size 100 for a 10-objective optimization problem). The algorithm is shown below:
First, as in NSGA-II, the parent population P_{t} is randomly initialized in the specified domain; then the binary tournament selection, crossover, and mutation operators are applied to create an offspring population Q_{t} (Lines 1--2). Thereafter, both populations are combined and sorted according to their domination level, and the best N members are selected for the next generation. Unlike NSGA-II (which uses the crowding distance measure to select the best set of points from the last front that can only be partially accepted), NSGA-III uses the supplied reference points Z_{r} to select these remaining members. The reference points can either be predefined in a structured manner or supplied preferentially by the user. To accomplish this, objective values and reference points are first normalized to an identical range. Thereafter, the orthogonal distance between each member of S_{t} and each of the reference lines (joining the ideal point and a reference point) is calculated.
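As an illustration of the association step, the sketch below computes the perpendicular distance of each (already normalized) member to every reference line and associates it with the closest one. The simple normalization assumed here (objectives translated by the ideal point and scaled to [0, 1]) is a simplification of the adaptive normalization used in the actual NSGA-III, and the function name `associate` is ours.

```python
import numpy as np

def associate(F, Z):
    """Associate each normalized objective vector in F with the closest reference line.

    F : n x m matrix of objective values, already translated by the ideal point
        and scaled to [0, 1] on each objective (simplified normalization).
    Z : r x m matrix of reference points on the unit simplex.
    Returns (pi, d): index of the associated reference point and the
    perpendicular distance of each member to that reference line.
    """
    # unit-length reference directions
    W = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    # scalar projection of every member on every reference line
    proj = F @ W.T                                             # n x r
    # perpendicular (orthogonal) distance to each line through the origin
    d = np.sqrt(np.maximum(0.0, (F ** 2).sum(axis=1, keepdims=True) - proj ** 2))
    pi = d.argmin(axis=1)
    return pi, d[np.arange(len(F)), pi]
```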
Next, each member is associated with the reference point having the smallest orthogonal distance, and the niche count φ of each reference point, defined as the number of members in S_{t} \ F_{l} that are associated with that reference point, is computed for further processing. The reference point with the minimum niche count is identified, and the member of the last front F_{l} that is associated with the identified reference point is included in the final population. The niche count of the identified reference point is increased by one, and the procedure is repeated to fill up population P_{t+1}.
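The niching step could be sketched as follows; the function signature and the random tie-breaking generator are illustrative choices of ours, but the logic follows the description above: pick the reference point with the smallest niche count, take the closest associated member of F_l if the niche is still empty (or a random associated member otherwise), and repeat until P_{t+1} is full.

```python
import numpy as np

def niching(pi, d, niche_count, last_front, n_remaining, rng=None):
    """Fill the remaining slots of P_{t+1} from the last front F_l.

    pi, d       : associated reference-point index and perpendicular distance
                  for each member of F_l (as returned by `associate` above)
    niche_count : current niche count of each reference point, computed on S_t \\ F_l
    last_front  : population indices of the members of F_l
    n_remaining : number of slots left to fill in P_{t+1}
    """
    if rng is None:
        rng = np.random.default_rng(0)
    selected = []
    candidates = list(range(len(last_front)))
    while len(selected) < n_remaining:
        # reference points that still have unselected members associated with them
        usable = {pi[c] for c in candidates}
        # the usable reference point with the smallest niche count
        j = min(usable, key=lambda r: niche_count[r])
        assoc = [c for c in candidates if pi[c] == j]
        if niche_count[j] == 0:
            pick = min(assoc, key=lambda c: d[c])   # closest member to an empty niche
        else:
            pick = assoc[rng.integers(len(assoc))]  # otherwise a random associated member
        selected.append(last_front[pick])
        candidates.remove(pick)
        niche_count[j] += 1
    return selected
```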
IBEA
The Indicator-Based Evolutionary Algorithm (IBEA) introduces an indicator concept to comprehensively evaluate solution quality. The indicators are used to measure the quality of solutions at each generation and to select the solution set with better quality values than those of the previous generation. Because the indicator provides a comprehensive evaluation, IBEA not only allows adaptation to arbitrary preference information and optimization scenarios, but also does not need any additional diversity preservation mechanism. This is the advantage of IBEA over other optimization algorithms and the reason IBEA is widely used in many applications.
Several indicators can be used with IBEA, and the results generally depend on them (an indicator is defined as a ``function that maps N sets (or points) into a real number''). For that reason, an indicator I must be ``dominance preserving''. Typically, two indicators are used with IBEA: i) the hypervolume indicator; and ii) the additive ε-indicator. The hypervolume indicator is a comprehensive quality evaluation method for a solution set: it evaluates coverage, homogeneity, and universality simultaneously, and then obtains comprehensive evaluation results. The additive ε-indicator is a binary quality indicator defined as the minimum distance by which one solution set must be translated in each objective so that it weakly dominates the other. Other binary quality indicators could be used as well. Details of the algorithm are given below.
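For concreteness, a minimal sketch of the binary additive ε-indicator for two sets of objective vectors (minimization) is shown below; the helper name `additive_epsilon` is ours, not IBEA's.

```python
import numpy as np

def additive_epsilon(A, B):
    """Binary additive epsilon indicator I_eps+(A, B) for minimization.

    The smallest value eps such that every point of B is weakly dominated by
    some point of A after shifting A by eps on every objective.
    A, B : arrays of objective vectors (n_a x m and n_b x m).
    """
    # diff[a, b, k] is the shift needed on objective k so that A[a] - eps <= B[b]
    diff = A[:, None, :] - B[None, :, :]
    return diff.max(axis=2).min(axis=0).max()

A = np.array([[1.0, 2.0], [2.0, 1.0]])
B = np.array([[1.5, 1.5]])
print(additive_epsilon(A, B))  # 0.5: shifting A by 0.5 makes (1, 2) weakly dominate (1.5, 1.5)
```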
The algorithm takes as input the population size α and the maximum number of generations m. An initial population P is created, and the fitness value of each individual in P is calculated (Lines 3--4). Next, a binary tournament selection with replacement is performed on P (Line 5), and the recombination operators (crossover and mutation) are applied to create a mating offspring population P' (Line 6). P and P' are then merged into P and the environmental selection is applied: the individuals with the worst fitness values are removed from P as long as |P| > α, and the fitness values of the remaining individuals are updated after each removal (Lines 8--12). After that, the generation counter is incremented. If the stopping criterion is reached, the final approximate Pareto front S is returned from the non-dominated individuals in P (Line 15). Otherwise, the search process is repeated.
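The fitness assignment and environmental selection steps could be sketched as follows, using the pairwise additive ε-indicator and the scaling factor κ of the basic (non-adaptive) IBEA; in the adaptive variant, objective and indicator values are normalized first, which is omitted here for brevity, and the function names are illustrative.

```python
import numpy as np

def ibea_fitness(F, kappa=0.05):
    """Basic IBEA fitness based on the pairwise additive epsilon indicator.

    F : n x m matrix of (ideally normalized) objective values, minimization.
    The fitness of individual i sums the 'loss' caused by every other
    individual j; a smaller (more negative) fitness means a worse individual.
    """
    n = len(F)
    # I[j, i] = I_eps+({x_j}, {x_i}) = max_k (f_k(x_j) - f_k(x_i))
    I = (F[:, None, :] - F[None, :, :]).max(axis=2)
    fit = np.zeros(n)
    for i in range(n):
        fit[i] = -np.sum(np.exp(-I[np.arange(n) != i, i] / kappa))
    return fit

def ibea_environmental_selection(F, alpha, kappa=0.05):
    """Iteratively drop the worst individual while |P| > alpha (the population size)."""
    alive = list(range(len(F)))
    fit = ibea_fitness(F, kappa)
    I = (F[:, None, :] - F[None, :, :]).max(axis=2)
    while len(alive) > alpha:
        worst = min(alive, key=lambda i: fit[i])
        alive.remove(worst)
        # update the fitness of the remaining individuals after removing 'worst'
        for i in alive:
            fit[i] += np.exp(-I[worst, i] / kappa)
    return alive  # indices of the surviving individuals
```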