Link: http://arxiv.org/abs/2504.04717v1
PDF Link: http://arxiv.org/pdf/2504.04717v1
Summary: Recent advancements in large language models (LLMs) have revolutionized their ability to handle single-turn tasks, yet real-world applications demand sophisticated multi-turn interactions.
This survey provides a comprehensive review of recent advancements in evaluating and enhancing multi-turn interactions in LLMs.
Focusing on task-specific scenarios, from instruction following in diverse domains such as math and coding to complex conversational engagements in roleplay, healthcare, education, and even adversarial jailbreak settings, we systematically examine the challenges of maintaining context, coherence, fairness, and responsiveness over prolonged dialogues.
The paper organizes current benchmarks and datasets into coherent categories that reflect the evolving landscape of multi-turn dialogue evaluation.
In addition, we review a range of enhancement methodologies under multi-turn settings, including model-centric strategies (contextual learning, supervised fine-tuning, reinforcement learning, and new architectures), external integration approaches (memory-augmented methods, retrieval-based methods, and knowledge graphs), and agent-based techniques for collaborative interactions.
Finally, we discuss open challenges and propose future directions for research to further advance the robustness and effectiveness of multi-turn interactions in LLMs.
Related resources and papers are available at https://github.com/yubol-cmu/Awesome-Multi-Turn-LLMs.
Published on arXiv on: 2025-04-07T04:00:08Z