Yulan He
Short bio:
Yulan He is a Professor in Natural Language Processing at the Department of Informatics, King's College London. She currently holds a prestigious five-year UKRI Turing AI Fellowship. Her recent research focuses on addressing the limitations of Large Language Models (LLMs), aiming to enhance their reasoning capabilities, robustness, and explainability. She has published nearly 300 papers on topics such as self-evolution and reasoning of LLMs, mechanistic interpretability, and LLMs for educational assessment. She has received several prizes and awards for her research, including an SWSA Ten-Year Award and a CIKM Test-of-Time Award, and was recognised as an inaugural Highly Ranked Scholar by ScholarGPS. She served as the General Chair for AACL-IJCNLP 2022 and as a Program Co-Chair for conferences including ECIR 2024, CCL 2024, and EMNLP 2020. Her research has received support from the EPSRC, the Royal Academy of Engineering, EU-H2020, Innovate UK, the British Council, and industrial funding.
Title: Self-Evolution of Large Language Models
Abstract: This talk explores the emerging concept of self-evolution in large language models (LLMs), where models self-evaluate, refine, and improve their reasoning capabilities over time with minimal human intervention. I will focus on the techniques behind self-improvement, including bootstrapped reasoning, synthesising reasoning and acting, verbalised reinforcement learning, and LLM learning via self-play or self-planning. I will also discuss key challenges in LLM self-evolution and conclude with an outlook on future research.