IPDPS: International Parallel and Distributed Processing Symposium


 


Event | When | Where | Deadline (abstract)
IPDPS 2026, 40th IEEE International Parallel & Distributed Processing Symposium | May 25-29, 2026 | New Orleans, USA | Oct 9, 2025 (Oct 2, 2025)
IPDPS 2024, International Parallel and Distributed Processing Symposium | May 27-31, 2024 | San Francisco | Oct 5, 2023 (Sep 28, 2023)
IPDPS 2023, International Parallel and Distributed Processing Symposium | May 15-19, 2023 | St. Petersburg, Florida | Oct 6, 2022 (Sep 29, 2022)
IPDPS 2022, International Parallel and Distributed Processing Symposium | May 30 - Jun 3, 2022 | Lyon, France | Oct 8, 2021 (Oct 1, 2021)
IPDPS 2021, 35th IEEE International Parallel & Distributed Processing Symposium | May 17-21, 2021 | Portland, Oregon | Oct 12, 2020 (Oct 5, 2020)
IPDPS 2019, 33rd IEEE International Parallel & Distributed Processing Symposium | May 20-24, 2019 | Rio de Janeiro, Brazil | Oct 15, 2018 (Oct 8, 2018)
IPDPS 2017, 31st IEEE International Parallel & Distributed Processing Symposium | May 29 - Jun 2, 2017 | Orlando, Florida, USA | Oct 23, 2016 (Oct 18, 2016)
IPDPS 2016, IEEE International Parallel and Distributed Processing Symposium | May 23-27, 2016 | Chicago, IL, USA | Oct 16, 2015 (Oct 9, 2015)
IPDPS 2013, 27th IEEE International Parallel & Distributed Processing Symposium | May 20-24, 2013 | Boston, Massachusetts, USA | Oct 1, 2012 (Sep 24, 2012)
IPDPS 2011, IEEE International Symposium on Parallel & Distributed Processing | May 16-22, 2011 | Anchorage, USA | Oct 1, 2010
IPDPS 2010, IEEE International Parallel & Distributed Processing Symposium | Apr 19-23, 2010 | Atlanta, Georgia | Sep 28, 2009 (Sep 21, 2009)
IPDPS 2009, International Parallel and Distributed Processing Symposium | May 25-29, 2009 | Rome | Oct 3, 2008
IPDPS 2008, 22nd IEEE International Parallel and Distributed Processing Symposium | Apr 14-18, 2008 | Miami, FL, USA | Oct 8, 2007
 
 

Present CFP: 2026

Authors are invited to submit manuscripts that present novel and impactful research in high performance computing (HPC) and parallel and distributed processing. Submissions focusing on emerging technologies, interdisciplinary work spanning multiple IPDPS focus areas, and novel open-source artifacts are welcome. Topics of interest include, but are not limited to, the following areas:

Algorithms:
This track focuses on algorithms for computational and data science in parallel and distributed computing environments (including cloud, edge, fog, distributed memory, and accelerator-based computing). Examples include structured and unstructured mesh and meshless methods, dense and sparse linear algebra computations, spectral methods, n-body computations, clustering, data mining, compression, and combinatorial algorithms such as graph and string algorithms. Also included in this track are algorithms that apply to tightly or loosely coupled systems, such as those supporting communication, synchronization, power management, distributed resource management, distributed data and transactions, and mobility. Novel algorithm designs and implementations tailored to emerging architectures (such as ML/AI accelerators or quantum computing systems) are also included.

Applications:
This track focuses on real-world applications (combinatorial, scientific, engineering, data analysis, and visualization) that use parallel and distributed computing concepts. Papers submitted to this track are expected to incorporate innovations that originate in specific target application areas, and contribute novel methods and approaches that address core challenges in their scalable implementation. Contributions include the design, implementation, and evaluation of parallel and distributed applications, including implementations targeting emerging architectures (such as ML/AI accelerators) and application domain advances enabled by ML/AI.

Architecture:
This track focuses on existing and emerging architectures for high performance computing, including architectures for instruction-level and thread-level parallelism; manycore, multicore, accelerator, domain-specific and special-purpose architectures (including ML/AI accelerators); reconfigurable architectures; memory technologies and hierarchies; volatile and non-volatile emerging memory technologies; co-design paradigms for processing-in-memory architectures; solid-state devices; exascale system designs; data center and warehouse-scale architectures; novel big data architectures; network and interconnect architectures; emerging technologies for interconnects; parallel I/O and storage systems; power-efficient and green computing systems; resilience, security, and dependable architectures; and emerging architectural principles for machine learning, approximate computing, quantum computing, neuromorphic, analog, and bio-inspired computing.

Machine Learning and Artificial Intelligence (ML/AI):
This track focuses on all areas of ML/AI that are relevant to parallel and distributed computing, including ML/AI training on resource-limited platforms; computational optimization methods for AI such as pruning, quantization, and knowledge distillation; parallel and distributed learning algorithms; energy-efficient methods for ML/AI; federated learning; design and implementation of ML/AI algorithms on parallel architectures (including distributed memory, GPUs, tensor cores, and emerging ML/AI accelerators); new ML/AI methods benefiting HPC applications or HPC system management; and design and development of ML/AI software pipelines (e.g., frameworks for distributed training, integration of compression into ML/AI pipelines, compiler techniques, and DSLs). Papers submitted to the ML/AI track should emphasize new ML/AI technology that is best reviewed by ML/AI experts. Papers that emphasize core parallel computing topics applied to ML/AI workloads, or applications benefiting from the use of existing ML/AI tools, should be submitted to the topic domain tracks rather than this ML/AI track.

Measurements, Modeling, and Experiments:
This track focuses on experiments and performance-oriented studies in the practice of parallel and distributed computing. "Performance" may be construed broadly to include metrics related to time, energy, power, accuracy, and resilience, for instance. Topics include methods, experiments, and tools for measuring, evaluating, and/or analyzing performance for large-scale applications and systems; design and experimental evaluation of applications of parallel and distributed computing in simulation and analysis; experiments on the use of novel commercial or research accelerators and architectures, including quantum, neuromorphic, and other non-von Neumann systems; innovations made in support of large-scale infrastructures and facilities; and experiences and methods for allocating and managing system and facility resources.

Programming Models, Compilers, and Runtime Systems:
This track covers topics ranging from the design of parallel programming models and paradigms, to languages and compilers supporting these models and paradigms, to runtime and middleware solutions. Software that is close to the application (as opposed to the bare hardware) but not specific to an application is included. Examples include frameworks targeting cloud and distributed systems; application frameworks for fault tolerance and resilience; software supporting data management, scalable data analytics, and similar workloads; and runtime systems for future novel computing platforms, including quantum, neuromorphic, and bio-inspired computing. Novel compiler techniques and frameworks leveraging machine learning methods are also included in this track.

System Software:
This track focuses on software that is close to the bare high performance computing (HPC) hardware. Topics include storage and I/O systems; system software for resource management, job scheduling, and energy-efficiency; system software support for accelerators and heterogeneous HPC computing systems; interactions between the operating system, hardware, and other software layers; system software solutions for ML/AI workloads (e.g., energy-efficient software methods for ML/AI); system software support for fault tolerance and resilience; containers and virtual machines; specialized operating systems and related support for high-performance computing; system software for future novel computing platforms including quantum, neuromorphic, and bio-inspired computing; and system software advances enabled by ML/AI.

 

Related Resources

OpenSuCo @ ISC HPC 2017   2017 International Workshop on Open Source Supercomputing
PDP 2026   Parallel, Distributed, and Network-Based Processing
IEEE AIxVR 2026   8th International Conference on Artificial Intelligence & extended and Virtual Reality
ITE 2025   6th International Conference on Integrating Technology in Education (ITE 2025)
Ei/Scopus-MLBDM 2025   2025 5th International Conference on Machine Learning and Big Data Management (MLBDM 2025)
PCDS 2025   The 2nd International Symposium on Parallel Computing and Distributed Systems
AGRIJ 2025   Agricultural Science: An International journal
PDCTA 2025   14th International Conference on Parallel, Distributed Computing Technologies and Applications
ICCCAS 2026   2026 IEEE the 15th International Conference on Communications, Circuits, and Systems (ICCCAS 2026)
UBIC 2025   16th International Conference on Ubiquitous Computing