“A comprehensive survey reveals how memory mechanisms have become central to LLM-based agents, bridging operating system engineering and cognitive science perspectives. This unified framework is essential for developing more capable AI systems that can learn, retain, and leverage information effectively over time.”
Key Takeaways
- Memory mechanisms are the architectural cornerstone of modern LLM-based agents that integrate external tools.
- Current research remains fragmented between engineering and cognitive science approaches to agent memory.
- A unified theoretical framework for memory evolution is needed to advance LLM agent capabilities.
Memory mechanisms emerge as the critical foundation for next-generation LLM agents.
Why It Matters
Memory mechanisms directly impact how well LLM agents can perform complex, multi-step tasks and learn from interactions. As these agents become more prevalent in real-world applications, understanding and optimizing their memory architectures is crucial for improving reliability, efficiency, and practical utility. This survey bridges a critical gap in AI research by synthesizing disparate approaches into a coherent evolutionary perspective.
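To make the idea of an agent memory architecture concrete, here is a minimal, hypothetical sketch of a two-tier memory: a bounded short-term buffer whose overflow spills into a long-term store, with naive keyword-overlap retrieval standing in for the embedding-based similarity search real systems use. The class and method names are illustrative assumptions, not drawn from the survey.

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class AgentMemory:
    """Hypothetical two-tier agent memory: a bounded short-term
    buffer plus a long-term store with naive keyword retrieval."""
    short_term_capacity: int = 4
    short_term: deque = field(default_factory=deque)
    long_term: list = field(default_factory=list)

    def remember(self, entry: str) -> None:
        # New entries enter short-term memory; when the buffer is
        # full, the oldest entry is consolidated into long-term.
        self.short_term.append(entry)
        if len(self.short_term) > self.short_term_capacity:
            self.long_term.append(self.short_term.popleft())

    def recall(self, query: str, k: int = 3) -> list:
        # Rank long-term entries by word overlap with the query
        # (a toy stand-in for embedding similarity in real agents).
        q = set(query.lower().split())
        scored = sorted(
            self.long_term,
            key=lambda e: len(q & set(e.lower().split())),
            reverse=True,
        )
        return scored[:k]

mem = AgentMemory(short_term_capacity=2)
for note in [
    "user prefers metric units",
    "api key stored in vault",
    "user timezone is UTC",
    "task: summarize report",
]:
    mem.remember(note)
```

After these four `remember` calls, the two oldest notes have been moved to long-term storage, and `mem.recall("what units does the user prefer", k=1)` surfaces the relevant one; a production agent would replace the overlap scoring with vector retrieval and add consolidation or forgetting policies.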