MemoryArena: Benchmarking Agent Memory in Interdependent Multi-Session Agentic Tasks

1Stanford University 2University of California, San Diego 3University of Illinois Urbana-Champaign 4Princeton University 5University of Pittsburgh 62077AI

*Equal contribution   Corresponding author: zexueh@stanford.edu

MemoryArena Teaser Figure

MemoryArena: A Benchmark for Agent Memory in Multi-Session Agentic Tasks

Abstract

Existing evaluations of agents with memory typically assess **memorization** and **action** in isolation. One class of benchmarks evaluates memorization by testing recall of past conversations or text but fails to capture how memory is used to guide future decisions. Another class focuses on agents acting in single-session tasks without the need for long-term memory. However, in realistic settings, memorization and action are tightly coupled: agents acquire memory while interacting with the environment, and subsequently rely on that memory to solve future tasks.

To capture this setting, we introduce MemoryArena, a unified evaluation gym for benchmarking agent memory in multi-session Memory-Agent-Environment loops. The benchmark consists of human-crafted agentic tasks with explicitly interdependent subtasks, where agents must learn from earlier actions and feedback by distilling experiences into memory, and subsequently use that memory to guide later actions to solve the overall task. MemoryArena supports evaluation across web navigation, preference-constrained planning, progressive information search, and sequential formal reasoning. It reveals that agents with near-saturated performance on existing long-context memory benchmarks like LoCoMo perform poorly in our agentic setting, exposing a gap in current evaluations for agents with memory.
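The Memory-Agent-Environment loop described above can be sketched in a few lines. This is a minimal illustration, not the MemoryArena implementation: all names here (`SimpleMemory`, `run_session`, the string-based "actions") are hypothetical stand-ins for whatever memory store, agent, and environment a real evaluation would plug in.

```python
# Hypothetical sketch of a multi-session Memory-Agent-Environment loop:
# each session acts conditioned on memory, then distills its outcome
# back into memory for later, interdependent sessions.

class SimpleMemory:
    """Append-only store of distilled experiences from past sessions."""
    def __init__(self):
        self.entries = []

    def write(self, experience):
        self.entries.append(experience)

    def read(self):
        return list(self.entries)


def run_session(task, memory):
    """One session: act on the task using memory, then write back."""
    context = memory.read()                       # recall earlier sessions
    action = f"solve {task} using {len(context)} memories"
    memory.write(f"finished {task}")              # distill this session
    return action


memory = SimpleMemory()
# Interdependent subtasks: later sessions depend on earlier experiences.
for subtask in ["subtask-1", "subtask-2", "subtask-3"]:
    print(run_session(subtask, memory))
```

The point of the loop is that memorization and action are coupled: the quality of each session's action depends on what earlier sessions wrote into memory, which is exactly the setting the benchmark evaluates.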

More content coming soon...

Data, results, and additional evaluation resources will be released soon.