Large Language Models for and with Evolutionary Computation (LLMfwEC)
Workshop at GECCO 2024, July 14-18th, 2024, Melbourne, Australia
Home
Overview and Scope
Large language models (LLMs), along with other foundation models (generative AI methods), have disrupted conventional expectations of Artificial Intelligence and Machine Learning systems. An LLM takes a natural-language text prompt as input and, through pattern matching and sequence completion, responds with output in natural-language text. In contrast, Evolutionary Computation (EC) is inspired by Neo-Darwinian evolution and conducts black-box search and optimization. What brings these two approaches together?
One answer is evolutionary search heuristics whose operators use LLMs to fulfill their function. This hybridization turns the conventional EC paradigm on its head and, in turn, sometimes yields high-performing and novel EC systems.
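To make this concrete, below is a minimal Python sketch of an EA in which the variation operator is an LLM call, in the spirit of operators such as LMX or ELM mentioned under Topics of Interest. The `llm_complete` callable, the prompt wording, and the truncation-selection loop are hypothetical illustrations, not a specific system or API.

```python
import random

def llm_variation(parents, llm_complete):
    """Use an LLM call as the variation operator: prompt with parent
    solutions and treat the completion as the child.

    `llm_complete` is a hypothetical prompt -> text callable, standing in
    for whatever LLM interface is available.
    """
    prompt = (
        "Below are candidate solutions to an optimization problem.\n"
        + "\n".join(f"Solution: {p}" for p in parents)
        + "\nPropose one new solution that combines their strengths.\nSolution:"
    )
    return llm_complete(prompt).strip()

def evolve(init_pop, fitness, llm_complete, generations=10, n_parents=2):
    """Plain truncation-selection loop; only variation is delegated to the LLM."""
    population = list(init_pop)
    for _ in range(generations):
        parents = random.sample(population, n_parents)
        population.append(llm_variation(parents, llm_complete))
        # Keep the population at its original size, best individuals first.
        population = sorted(population, key=fitness, reverse=True)[: len(init_pop)]
    return population[0]
```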
Another answer is using LLMs for EC. The field has experienced significant growth, with numerous nature-inspired algorithms developed to solve complex problems. EC has become the target or source of many hybrid approaches and analyses: combinations of the advantages of multiple algorithms, adaptive techniques that improve performance, and special-purpose tools. LLMs may help researchers select feasible candidates from a pool of algorithms based on user-specified goals and provide a basic description of the methods, or propose novel hybrid methods. Further, these models can help identify and describe distinct components suitable for adaptive enhancement or hybridization, and finally provide pseudo-code, an implementation, and reasoning for the proposed methodology.
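As a rough illustration of this "LLM for EC" workflow, the sketch below prompts a model to pick (or hybridize) an algorithm from a pool for a user-specified goal and to return a description, reasoning, and pseudo-code. The `llm_complete` callable and the prompt wording are again hypothetical placeholders rather than a specific interface.

```python
# Hypothetical pool of classic EC / swarm algorithms to choose from.
ALGORITHM_POOL = ["GA", "GP", "ES", "DE", "PSO", "ACO"]

def recommend_algorithm(goal, llm_complete, pool=ALGORITHM_POOL):
    """Ask an LLM to select a candidate from the pool (or propose a hybrid),
    justify the choice, and sketch pseudo-code for the proposed method."""
    prompt = (
        f"User goal: {goal}\n"
        f"Available algorithms: {', '.join(pool)}\n"
        "Select the most suitable algorithm (or propose a hybrid of two), "
        "briefly describe it, explain your reasoning, and give pseudo-code."
    )
    return llm_complete(prompt)
```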
This workshop calls for papers at the intersection of EC and LLMs, areas we call "EC with LLM" and "LLM for EC". We invite original research papers discussing the connection between LLMs and EC. The workshop focuses on algorithms developed on a solid foundation of theory, analysis, evidence, and a well-defined balance between exploration and exploitation, such as Genetic Algorithms (GA), Genetic Programming (GP), Evolution Strategies (ES), Differential Evolution (DE), Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), and more.
Topics of Interest
Topics include (but are not restricted to) the following:
How can an EA using an LLM evolve different units of evolution, e.g. code, strings, images, or multi-modal candidates?
How can an EA using an LLM address prompt composition or other challenges in LLM development and use?
How can an EA using an LLM integrate design explorations related to cooperation, modularity, reuse, or competition?
How can an EA using an LLM model biology?
How can an EA using an LLM intrinsically, or with guidance, support open-ended evolution?
What new variants hybridizing EC and/or another search heuristic are possible and in what respects are they advantageous?
What are new ways of using LLMs for evolutionary operators, e.g. new ways of generating variation through LLMs (as with LMX or ELM), or new ways of using LLMs for selection (as with Quality-Diversity through AI Feedback)?
How well does an EA using an LLM scale with population size and problem complexity?
How can the computational complexity of an EA using an LLM be most accurately characterized?
What makes a good EA plus LLM benchmark?
Better understanding, fine-tuning, and adaptation of Large Language Models for EC. How large do LLMs need to be? Are there benefits to using larger or smaller ones? Ones trained on different datasets or in different ways?
Generating methodology for population dynamics analysis: population diversity measures, control, analysis, and visualization.
Generating rules for EC (boundary and constraints handling strategies).
Performance improvement, testing, and efficiency of the improved algorithms.
Reasoning for component-wise analysis of algorithms.
Understanding and generation of relations between complex systems, randomness, chaos, and fractals in EC.
Connections between LLMs and other ML techniques for EC (reinforcement learning, AutoML).
Generation of and reasoning about parallel approaches for EC and swarm algorithms.
Applications of LLM and EC (not limited to):
constrained optimization
multi-objective optimization
expensive and surrogate assisted optimization
dynamic and uncertain optimization
large-scale optimization
combinatorial/discrete optimization
Submissions
We invite submissions of the following types of papers:
research papers (up to 8 pages)
position papers (up to 4 pages)
Accepted submissions will be presented during the workshop and will appear in the GECCO Companion ACM proceedings. Papers should follow the GECCO 2024 formatting instructions.
Submissions of early and in-progress work are encouraged. Authors of accepted papers describing novel software or technical developments will be encouraged to give a demonstration or a short introductory tutorial during the workshop.
Instructions
Submitted papers are required to be in compliance with the GECCO 2024 Papers Submission Instructions.
Workshop papers must be submitted using the GECCO submission site.
Important Dates
Submission opening: February 12, 2024
Submission deadline: April 8, 2024 (extended to April 12, 2024)
Notification to authors: May 3, 2024
Camera-ready papers: May 10, 2024
Workshop date: TBC depending on GECCO program schedule (July 14 or 15, 2024)
Organizers
Erik Hemberg (Co-chair) MIT CSAIL, USA
Roman Senkerik (Co-chair) Tomas Bata University in Zlin, Faculty of Applied Informatics, Czech Republic
Joel Lehman (Co-chair) IT University of Copenhagen
Una-May O’Reilly MIT CSAIL, USA
Pier Luca Lanzi Politecnico di Milano
Michal Pluhacek Tomas Bata University in Zlin, A.I.Lab, Czech Republic
Tome Eftimov Jožef Stefan Institute, Slovenia
Schedule
Preliminary:
○ [5 min.] Welcome & Opening by the workshop organizers
○ [15 min.] LLM Fault Localisation within Evolutionary Computation Based Automated Program Repair; Bin Murtaza, McCoy, Ren, Murphy, Banzhaf
○ [15 min.] Comparing Large Language Models and Grammatical Evolution for Code Generation; Custode, Migliore Rambaldi, Roveri, Iacca
○ [15 min.] L-AutoDA: Large Language Models for Automatically Evolving Decision-based Adversarial Attacks; Guo, Liu, Lin, Zhao, Zhang
○ [15 min.] An investigation on the use of Large Language Models for hyperparameter tuning in Evolutionary Algorithms; Custode, Caraffini, Yaman, Iacca
○ [15 min.] A Critical Examination of Large Language Model Capabilities in Iteratively Refining Differential Evolution Algorithm; Pluhacek, Kovac, Janku, Kadavy, Senkerik, Viktorin
○ [25 min.] Panel discussion. Panelists: Una-May O’Reilly and TBC
○ [5 min.] Goodbye & Closing by the workshop organizers
Contact
Send an email to hembergerik@csail.mit.edu including "LLMfwEC-2024" in the subject line.