Welcome to this presentation on evolutionary algorithms, an algorithm class that adapts principles of nature. I am Alexander Dockhorn, and this talk has been designed in conjunction with the workshop at CEC 2023, a Sandbox for Teaching and Learning in CI for Pre-University and Undergraduate Students. To give you a basic introduction to the concept of evolution, I want to start with the history of evolution itself. In 1858, Charles Darwin and Alfred Russel Wallace published a new evolutionary theory which explained in detail how species descend from earlier forms. Here, it was described for the first time that life as we know it emerged from many generations of adaptation. Its most fundamental principle is survival of the fittest, built on the biological concept of fitness, which describes the reproductive success of a species. Evolutionary algorithms are an optimization scheme inspired by this principle of nature. Here, we use a function that assigns a quality to each candidate solution, the fitness function. Good solutions are supposed to reproduce and create new offspring, while bad solutions are discarded. After a while, the best solutions found are returned. There are five basic components of evolutionary algorithms. First of all, we have the representation, which defines how new solution candidates are generated and represented. Then we have the fitness measure and the selection according to that fitness. The fitness measure rates the success of a solution on the given task, and the selection mechanism identifies the good solutions and hands them over to the mutation and crossover operators. In crossover, the third component, we recombine multiple parent candidate solutions into a new child solution. This child solution is furthermore mutated, which means applying a small modification to further improve it.
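To make this cycle concrete, here is a minimal sketch of the five components working together on a toy "one-max" problem (maximize the number of 1s in a bit string). The problem, all function names, and the parameter choices are illustrative assumptions, not taken from the talk.

```python
import random

GENOME_LEN, POP_SIZE, GENERATIONS = 20, 30, 50

def fitness(genome):            # fitness measure: count of 1-bits
    return sum(genome)

def select(population):         # selection: keep the better half
    return sorted(population, key=fitness, reverse=True)[:POP_SIZE // 2]

def crossover(a, b):            # crossover: one-point recombination of two parents
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.05):  # mutation: flip each bit with small probability
    return [1 - g if random.random() < rate else g for g in genome]

def evolve():
    # representation: a genome is a fixed-length list of bits
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):  # termination: fixed number of generations
        parents = select(population)
        population = [mutate(crossover(*random.sample(parents, 2)))
                      for _ in range(POP_SIZE)]
    return max(population, key=fitness)

best = evolve()
```

Swapping the fitness function and the representation is enough to point this same loop at a different problem; the operators rarely need to change.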
Finally, we have the termination criterion, which tells us when to end this repeating cycle over multiple populations and generations. On the right, you see this basic scheme: we start with an initial population and evaluate it to find the best solutions; until termination, we repeat by crossing over good solutions and mutating them, and finally we output the best solution. Let's go into these components in detail. First of all, the representation. The representation depends on the problem to be solved and defines our search space. Each solution has multiple types of representation. First, we have the genotype, which describes the parameter set. Then we use a generator, a method that uses these parameters to actually build a solution candidate and create a phenotype representation. The phenotype is the outcome produced by the generator, for example a car, an image, or a neural network. The car itself may be represented by an array of values such as its size, its color, and its speed, and the generator builds the car according to this genotypic representation. Finally, on the right, we see how such a phenotype car could look: it could vary in size, in color, and in the speed that emerges from these configurations. Next, we have the fitness function, which allows us to evaluate candidate solutions. There are three general types of fitness evaluation: direct evaluation, simulation-based evaluation, and interactive evaluation. The first one is based directly on the phenotype. It is usually fast to compute and easy to implement. In our car example, we may simply want to maximize the car's speed. The next one is simulation-based evaluation. Here, we test the phenotype in a simulated environment. This may take more time, but it can yield a better approximation of the candidate's quality.
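The genotype, generator, and phenotype split from the car example can be sketched as follows. The Car class, its fields, and the fitness rule are assumptions made up for illustration; the talk only names size, color, and speed as example parameters.

```python
from dataclasses import dataclass

@dataclass
class Car:                       # phenotype: the concrete object the generator builds
    size: float
    color: str
    speed: float

def generator(genotype):
    """Turn a parameter list (genotype) into a concrete car (phenotype)."""
    size, color_idx, speed = genotype
    palette = ["red", "blue", "green"]   # assumed color encoding
    return Car(size=size, color=palette[color_idx % len(palette)], speed=speed)

def direct_fitness(car):         # direct evaluation: rate the phenotype itself
    return car.speed             # e.g. simply maximize speed

genotype = [2.0, 1, 130.0]       # parameter set: [size, color index, speed]
car = generator(genotype)
```

The evolutionary operators only ever touch the parameter list; the generator is what makes those parameters mean something in the problem domain.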
For example, we let the car drive on three tracks and measure how well it performs on each of them. Finally, we have interactive evaluation, which is based on the interaction with a human user. This allows us to adapt to the user's needs. For example, the user may report that they like red cars more than blue cars; therefore, we adapt the solutions to produce more red cars. Based on the fitness, we have the selection mechanism. Here, we aim to select a diverse set of good solutions. The simplest algorithm would be elitism, in which we just choose the n best solutions to form the basis of our next generation. The problem here is that keeping only the best solutions quickly leads to convergence and loses variety. Better mechanisms are stochastic sampling algorithms such as tournament selection and roulette wheel selection. In tournament selection, we select a random subset of k solutions and take the best solution of that subset; we repeat this process n times. This way, a tournament may consist of solutions that are not the overall best, so that the best out of those non-optimal solutions will still be selected for the next generation. Alternatively, we can use roulette wheel selection, in which we assign each solution a probability of being sampled based on its fitness value. Both of these methods ensure that weaker solutions have a chance to be selected and therefore increase the diversity of our solutions. Then we have the genetic operators, of which the first one is crossover. Here, we recombine two or more individuals and explore the design space in between them. On the top right, you see multiple flowers: the two flowers on the left are used to reproduce the two flowers on the right by blending their shape and color. The simplest way of applying crossover to genotypic representations is the one-point crossover.
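The two stochastic selection schemes can be sketched as below. Representing the population as (solution, fitness) pairs is an assumed layout for illustration; the sketch also assumes non-negative fitness values for the roulette wheel.

```python
import random

def tournament_select(population, k=3):
    """Pick k random candidates, return the fittest of that subset."""
    contenders = random.sample(population, k)
    return max(contenders, key=lambda pair: pair[1])

def roulette_select(population):
    """Sample one candidate with probability proportional to its fitness."""
    weights = [fit for _, fit in population]
    return random.choices(population, weights=weights, k=1)[0]

pop = [("a", 1.0), ("b", 5.0), ("c", 2.0)]
winner = tournament_select(pop, k=3)   # with k == len(pop) this is always the best
```

Calling either function n times yields the parent pool for the next generation; smaller k (or flatter fitness values) lowers the selection pressure and keeps more diversity.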
Here, we choose a cutoff point in the parameter set and exchange the parameters behind the cutoff between two individuals. This is shown on the right based on two number strings. Then we have the uniform crossover, in which we sample each parameter randomly from the two parents. This can yield more diverse solutions, since we effectively choose multiple cutoff points between the two parents. Next, we have mutation. In mutation, we randomly adapt a single individual. Usually, this is done by slightly changing its parameters; it is mostly used for fine-tuning solutions, but it can also be used to introduce more randomness. On the top right, we once again see a flower, and you see that either its color slightly changes or its petal length is changed. The same can be done on the genotype by using, for example, a bit or value mutation, in which values of the genotype are simply replaced by random new ones, or a swap, shuffle, or invert mutation, in which the respective operator is applied to a subset of the vector encoded in our genotype. Here is an example that combines everything we discussed, based on the car example introduced at the beginning. The task is to create a 2D car that is able to drive on rough terrain. Each car is represented by a set of wheels and a polygon, and we will see the different colors of these cars, where the currently best car is always colored blue. The cars drive on the track, and the speed and the distance they reach is their fitness. We recombine and mutate cars over several generations, and you will see how the simulation is restarted several times. Over time, we will also see how the difficulty increases and the cars improve. So, let's have a look. Here, we see how a single car dominates the race track; in the next generation, multiple cars are based on this car, so that it improves over time and we can see how new designs emerge.
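The genotype-level operators just described can be sketched as follows. The integer encoding, the value range in the bit/value mutation, and the parameter names are illustrative assumptions.

```python
import random

def one_point_crossover(a, b):
    """Exchange all genes behind a random cutoff point between two parents."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:], b[:cut] + a[cut:]

def uniform_crossover(a, b):
    """Pick each gene independently from one of the two parents."""
    return [random.choice(pair) for pair in zip(a, b)]

def value_mutation(genome, rate=0.1):
    """Replace genes by random new values with a small probability."""
    return [random.randint(0, 9) if random.random() < rate else g
            for g in genome]

def swap_mutation(genome):
    """Swap two randomly chosen positions of the genome."""
    g = genome[:]
    i, j = random.sample(range(len(g)), 2)
    g[i], g[j] = g[j], g[i]
    return g

child1, child2 = one_point_crossover([1, 1, 1, 1], [0, 0, 0, 0])
```

Note that one-point crossover and swap mutation only rearrange existing genes, while value mutation is the operator that injects genuinely new parameter values into the population.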
So, the most successful design from the last generation is carried over into the next generations, and therefore we may refine very successful cars over multiple rounds. If you want to dive deeper into evolutionary algorithms, I want to recommend some references to get you started. First of all, there is the book Computational Intelligence, which covers not just evolutionary algorithms but also other topics that have been addressed in this workshop, such as neural networks and fuzzy systems. Furthermore, there is an interactive article in which you can explore the use of representation, mutation, and crossover in genetic algorithms yourself. There is a slightly older version freely accessible online, which I can also highly recommend. Finally, the flower examples were taken from a recent research paper by Sebastian Risi et al.; you can look at it at the provided link. This is the end of my talk. If you have any questions, feel free to reach out to me via email and check out the communication options provided on our webpage. Thank you very much for your attention, and I hope you enjoy the remaining talks as well.