ICML 2026 Workshop

Scalable Learning and Optimization for Efficient Multimodal AI Agents (SCALE)

10th July 2026
Co-located with ICML 2026 in Seoul, South Korea

01 About

Agentic systems are increasingly central to high-stakes computing platforms such as AI PCs, robotics, autonomous web interaction, and software maintenance, with their performance largely determined by how effectively they manage memory and context. Enabled by multimodal foundation models, these agents can coordinate human-like reasoning through structured agentic workflows, unlocking powerful capabilities across software development, web and mobile operations, and embodied manipulation. This progress points to the broad potential of multi-agent, multimodal systems to tackle complex real-world challenges. However, realizing this potential at scale remains difficult due to fundamental limits in algorithmic reasoning, memory-driven context understanding, the need for effective test-time training and scaling, and the challenge of deploying agents efficiently across heterogeneous AI hardware where different components must run on distinct compute fabrics.

Agentic memory

Understanding and improving multimodal agentic memory to strengthen reasoning capabilities.

Efficient agentic AI systems

Developing scalable and verifiable agentic AI systems across heterogeneous compute platforms under limited compute and memory budgets.

Scaling of multi-modal agents

Understanding and improving the test-time scaling and reasoning capabilities of multimodal agentic systems, including mixture-of-agents approaches for task scaling.

Multi-modal agents for planning

Pushing the boundaries of real-world physical reasoning and planning for agentic AI.

Evaluation and benchmarking

Principled metrics and benchmarks for reasoning, memory, robustness, and efficiency in multimodal agents.

02 Call for Papers

We invite submissions exploring all aspects of multimodal reasoning in AI systems. We welcome novel algorithms, empirical studies, theoretical analyses, position papers, and work-in-progress research that advances our understanding of how AI systems reason across modalities.

Track 1

Main Track

8 pages

Maximum of 8 pages, excluding references and appendices. Ideal for complete and thorough work.

Important Dates

All deadlines are Anywhere on Earth (AoE) time.

24th April 2026
Paper Submission Deadline
15th May 2026
Author Notification

Topics of Interest

Memory of Agents

  • Short-, long-term, and hierarchical multimodal memory designs
  • Memory consolidation, retrieval, and forgetting for long-horizon tasks
  • Memory-grounded reasoning and cross-task knowledge reuse
  • Temporal, causal, and compositional memory abstractions
  • Robustness to noisy or missing multimodal inputs

Efficient Agentic AI Systems

  • Compute- and memory-efficient agent architectures
  • Hardware-aware co-design for heterogeneous platforms
  • Safety, verification, and controllability under constraints
  • Compression, distillation, caching, and adaptive execution
  • Modular pipelines with dynamic compute allocation

Scaling of Multimodal Agents

  • Test-time scaling laws for reasoning and planning
  • Mixture-of-agents and expert routing strategies
  • Multi-agent coordination and parallel reasoning
  • Generalization and robustness at scale
  • Trade-offs among model size, agent count, and compute

Multimodal Agents for Planning

  • Perception-memory-action integration
  • Long-horizon embodied planning and control
  • World models and simulation-based planning
  • Neuro-symbolic and interpretable planning methods
  • Planning under uncertainty and partial observability

Evaluation and Benchmarking

  • Long-context reasoning and memory benchmarks
  • Multi-step reasoning and tool-use evaluation
  • Robustness to noise and distribution shift
  • Efficiency metrics: latency, energy, memory
  • Real-world, deployment-focused evaluation protocols

Submission Guidelines

  • Format: All submissions must be in PDF format using the ICML 2026 LaTeX style file.
  • Double-Blind: Reviewing is double-blind; submissions must be anonymized.
  • Dual Submission: We welcome ongoing and unpublished work; papers under review elsewhere are allowed.
  • Non-Archival: The workshop is non-archival, so submissions may later be submitted to other venues.
Submit on OpenReview

03 Schedule

Schedule is tentative and subject to change.
08:00 - 08:15
Opening Remarks
Welcome and Introduction
08:15 - 08:45
Invited Talk 1
Mike Shou
National University of Singapore
08:45 - 09:15
Invited Talk 2
Mengdi Wang
Princeton University
09:15 - 09:45
Invited Talk 3
Mohit Bansal
University of North Carolina (UNC), Chapel Hill
09:45 - 10:15
Invited Talk 4
Le Song
GenBio AI, MBZUAI
10:15 - 10:30
Spotlight Session I
10:30 - 11:30
Morning Poster Session
11:30 - 12:30
🍽️ Lunch Break
12:30 - 13:00
Invited Talk 5
Jiao Sun
Google DeepMind
13:00 - 13:30
Invited Talk 6
James Zou
Stanford University
13:30 - 14:00
Panel Discussion
With Keynote Participants
13:30 - 14:00
Spotlight Session II
Contributed Talks
14:00 - 15:15
☕ Coffee Break
15:15 - 15:45
Invited Talk 7
Chelsea Finn
Stanford University
15:45 - 16:15
Invited Talk 8
Minhyuk Sung
KAIST
16:15 - 16:20
Closing Remarks
Organizers
16:20 - 17:00
Poster Session
Afternoon Poster Session

04 Invited Speakers

James Zou

Stanford University

Chelsea Finn

Stanford University

Mengdi Wang

Princeton University

Mike Shou

National University of Singapore

Mohit Bansal

University of North Carolina (UNC), Chapel Hill

Minhyuk Sung

KAIST

Jiao Sun

Google DeepMind

Le Song

GenBio AI, MBZUAI

05 Organizers

Souvik Kundu

Intel Labs

Digbalay Bose

Adobe Research

Sayan Nag

Adobe Research

Manling Li

Northwestern University

Jaehong Yoon

Nanyang Technological University

Lanqing Guo

University of Texas, Austin

Hongyi Wang

Rutgers University, GenBio AI

Sanjoy Chowdhury

University of Maryland, College Park

06 Sponsors

We are grateful to our sponsors for supporting this workshop.

Interested in sponsoring? Contact us

Get in Touch

For any questions, reach out to us at:

icml-scale-workshop-2026@googlegroups.com