Workshop W5-BECOMLLM: BEComLLM: Bridging Evolutionary Computing and Large Language Models

Proposers

Niki van Stein, Anna V. Kononova, Thomas Bäck, Roman Senkerik and Michal Pluhacek

Workshop Code

Please use the following code when submitting your paper to this Workshop: W5-BECOMLLM

Scope and Aims

The primary goal of this workshop is to explore the potential of combining Large Language Models (LLMs) and Evolutionary Computing (EC) to advance research in both fields and their interconnection. This synergy aims to expand the boundaries of optimisation, artificial intelligence, and machine learning by exploring new methodologies and applications. By fostering a collaborative platform for researchers and practitioners, the session aims to:

  • Encourage innovative approaches that leverage the strengths of LLMs and EC techniques.
  • Enable the creation of more adaptive, efficient, and scalable algorithms by integrating evolutionary mechanisms with advanced LLM capabilities.
  • Inspire novel research directions that could reshape AI, specifically LLMs, and optimisation fields through this hybridisation.
  • Achieve a better understanding and explanation of how these two seemingly disparate fields are related and how knowledge of their functions and operations can be leveraged.

Content and Objectives

The session will focus on a range of topics at the intersection of LLMs and Evolutionary Computing, with the following key objectives:

  1. Evolutionary Prompt Engineering: Develop effective prompt optimisation strategies using evolutionary algorithms to maximise the utility of LLMs in tasks such as text generation, summarisation, and question answering.
  2. LLM-Guided Evolutionary Algorithms: Investigate how LLMs can be integrated into evolutionary algorithms to guide search processes, provide domain expertise, and generate candidate solutions, and explore new ways of using LLMs as evolutionary operators (variation, selection, etc.).
  3. Evolutionary Learning: Hybridise LLMs, EC, and ELO to iteratively evolve and refine solutions.
  4. Co-evolution of LLMs and EC Techniques: Examine the co-evolutionary process where both LLMs and EC techniques evolve together to solve complex, multimodal, and multi-objective problems.
  5. Benchmarking and Comparative Studies: Conduct studies to compare the effectiveness of LLM-enhanced evolutionary approaches with traditional EC methods across a variety of optimisation challenges.
  6. LLMs for Automated Code Generation in EC: Explore how LLMs can be used to automatically generate or refine code for evolutionary algorithms, potentially reducing development time and improving adaptability.
  7. Optimisation of LLM Architectures: Use evolutionary algorithms to fine-tune hyperparameters, architectures, and training processes of LLMs to boost their performance on specific tasks.
  8. Applications in Real-World Problems: Demonstrate practical applications of LLM-EC hybrid models in areas such as engineering, healthcare, finance, and creative industries, showcasing their real-world impact and utility.
  9. Understanding and Adapting LLMs for EC: Develop a better understanding of fine-tuning, model size, model selection, and adaptation of Large Language Models for the needs of EC.
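
To give a flavour of the first two topics, the sketch below shows a minimal (1+1)-style evolutionary loop for prompt optimisation. It is purely illustrative: `mutate_prompt` is a hypothetical stand-in for an LLM call that would rephrase or extend the prompt, and `fitness` is a toy surrogate for downstream task performance; both would be replaced in a real LLM-EC hybrid.

```python
import random

def mutate_prompt(prompt: str, rng: random.Random) -> str:
    # Stand-in for an LLM-based variation operator: a real system would
    # ask the model to rephrase or extend the prompt. Here we simply
    # append a random instruction modifier.
    modifiers = ["Be concise.", "Think step by step.", "Answer precisely."]
    return prompt + " " + rng.choice(modifiers)

def fitness(prompt: str) -> float:
    # Toy surrogate for task performance: reward prompts that elicit
    # stepwise reasoning and mildly penalise length.
    score = 1.0 if "step by step" in prompt else 0.0
    return score - 0.001 * len(prompt)

def evolve_prompt(seed_prompt: str, generations: int = 20, seed: int = 0) -> str:
    # (1+1)-EA loop: mutate the parent prompt and keep the child
    # whenever it is at least as fit.
    rng = random.Random(seed)
    parent = seed_prompt
    for _ in range(generations):
        child = mutate_prompt(parent, rng)
        if fitness(child) >= fitness(parent):
            parent = child
    return parent

best = evolve_prompt("Summarise the document.")
```

Because mutation only ever appends text and selection is elitist, the evolved prompt always retains the seed as a prefix and never loses fitness relative to it.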

These content areas aim to push the boundaries of current research, driving both theoretical advancements and practical applications.