
ReLAMP 2025: Efficient Microarchitectures for Resilient Large Model Processing Workshop


Link: https://freddygabbay.github.io/ReLAMP-MICRO/
 
When: Oct 18, 2025
Where: Seoul, South Korea
Submission Deadline: Aug 31, 2025
Notification Due: Sep 15, 2025
Final Version Due: Sep 30, 2025
Categories: microarchitecture, generative AI, large language models, approximate computing
 

Call For Papers

About the Workshop

The rapid evolution of Large Language Models (LLMs) and the emergence of Large Multimodal Models (LMMs) are revolutionizing various domains. Simultaneously, the pursuit of very long-context LLMs (e.g., 1M context length) is pushing the boundaries of what these models can achieve. However, the immense computational, memory, and power requirements of these advanced models present formidable challenges to current hardware and system designs.

Fortunately, large models, including LLMs, LMMs, and those handling extended contexts, often exhibit inherent resiliency to noise and approximation. This workshop aims to harness this property by exploring microarchitectural innovations and system-level techniques that exploit such resiliency to significantly improve performance, power efficiency, and memory utilization. Our focus will extend beyond traditional LLMs to encompass the unique challenges and opportunities presented by multimodal data and extremely long contexts.

Topics will include, but are not limited to, approximate computing, dynamic quantization, and adaptive methods that apply different levels of approximation or quantization across the layers of these complex models. The workshop will also address critical memory-efficiency concerns through novel data compression techniques that leverage model resiliency to reduce memory footprint, which is especially crucial for LMMs and long-context LLMs, while maintaining or even improving model performance.
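For illustration only (this sketch is not part of the official call): the snippet below shows one minimal way layer-wise dynamic quantization can assign different bit-widths to different layers. The sensitivity scores, the bit-width policy in pick_bits, and the toy layer names are hypothetical assumptions chosen for the example, not a method prescribed by the workshop.

```python
# Minimal sketch of layer-wise dynamic quantization (illustrative, not a
# reference implementation): each layer's weights are quantized to a
# bit-width chosen from a hypothetical per-layer sensitivity score.
import numpy as np

def quantize_uniform(weights: np.ndarray, bits: int) -> np.ndarray:
    """Symmetric uniform quantization of a weight tensor to `bits` bits."""
    levels = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(weights)) / levels
    if scale == 0:
        return weights
    return np.round(weights / scale) * scale

def pick_bits(sensitivity: float) -> int:
    """Toy policy (an assumption, not a published rule): more sensitive
    layers keep more precision."""
    if sensitivity >= 0.7:
        return 8
    if sensitivity >= 0.3:
        return 6
    return 4

# Hypothetical model: layer name -> (weights, sensitivity score).
rng = np.random.default_rng(0)
model = {
    "attention.qkv": (rng.standard_normal((64, 64)), 0.8),
    "mlp.up":        (rng.standard_normal((64, 256)), 0.4),
    "mlp.down":      (rng.standard_normal((256, 64)), 0.1),
}

for name, (w, s) in model.items():
    bits = pick_bits(s)
    w_q = quantize_uniform(w, bits)
    mse = np.mean((w - w_q) ** 2)
    print(f"{name}: {bits}-bit, MSE={mse:.2e}")
```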

By bringing together researchers, practitioners, and industry experts, the ReLAMP workshop seeks to foster discussions and drive advancements in efficient microarchitectures and systems for the next generation of large model processing. This will pave the way for more sustainable, scalable, and capable AI solutions.

Important Dates

Paper Submission Deadline: 31 August 2025
Notification of Acceptance: 15 September 2025
Camera-Ready Submission: 30 September 2025
Workshop Date: 18 October 2025

Topics of Interest

We invite submissions on a wide range of topics related to efficient processing and memory optimization for Large Language Models (LLMs), Large Multimodal Models (LMMs), and very long-context LLMs, including but not limited to:

Microarchitectures for power-efficient processing of LLMs, LMMs, and long-context LLMs.
Approximate computing techniques for large models, including structured and unstructured sparsity.
Dynamic quantization and mixed-precision approaches tailored for diverse layers in LLMs, LMMs, and models with extended contexts.
Adaptive approximation methods leveraging the unique characteristics of LLM and LMM layers, and long-context attention mechanisms.
Techniques for leveraging model resiliency to enhance memory efficiency in LLMs, LMMs, and long-context LLMs.
Novel data compression methods for reducing the memory footprint of large models, especially considering multimodal data and massive context windows.
Hardware-software co-design for optimized processing of LLMs, LMMs, and very long-context LLMs.
Trade-offs between accuracy, power, performance, and context length in large model optimization.
Case studies and benchmarks for efficient processing of LLMs, LMMs, and extreme long-context models.
Emerging technologies and memory solutions for sustainable deployment of next-generation large models.

Submission Guidelines

ReLAMP welcomes submissions of short papers, up to 3 pages excluding references, in a double-column format. You can use this LaTeX template.

Submissions should clearly state the research problem, motivation, and technical contribution. All submissions must be in English.
Please submit your paper as a single PDF file.
Papers can present work in progress, exploratory/preliminary research, or already published work.
Submissions will be assessed based on their novelty, technical quality, potential impact, interest, clarity, relevance, and reproducibility.
Reviews will not be blind, so please do not anonymize the author list in the submitted PDF.
There will be no formal proceedings, allowing authors the flexibility to extend and publish their work in other conferences and journals.
For each accepted paper, at least one author must attend the workshop and present the paper.
