posted by organizer: michelataufer1

PACT 2026 : ACM International Conference on Parallel Architectures and Compilation Techniques (PACT)



Conference Series: International Conference on Parallel Architectures and Compilation Techniques
 
Link: https://pact2026.github.io/
 
When Oct 19, 2026 - Oct 22, 2026
Where Chicago, Illinois, USA
Abstract Registration Due Apr 17, 2026
Submission Deadline Apr 24, 2026
Notification Due Aug 5, 2026
Final Version Due Oct 2, 2026
Categories    computer systems   compilers   computer architecture   high performance computing
 

Call For Papers


PACT 2026 will be held in Chicago, IL, USA, from October 19–22, 2026.

Abstracts Due: April 17, 2026
Papers Due: April 24, 2026

Submission Site: https://pact26.hotcrp.com

Scope

The International Conference on Parallel Architectures and Compilation Techniques (PACT) is a unique technical conference at the intersection of hardware and software, with a special emphasis on parallelism. PACT brings together researchers in computer architecture, compilers, execution environments, programming languages, and applications to present and discuss their latest research results, tools, and practical experiences. This year, PACT is specifically committed to pioneering AI-centric computing, seeking research that redefines the performance, scalability, and efficiency of large-scale AI workloads across diverse parallel and heterogeneous platforms.

PACT 2026 will be held as an in-person event in Chicago, IL, USA. We encourage all authors of accepted papers to participate, and at least one author must attend the conference.

PACT seeks submissions in two categories:

Research Papers
Tools and Practical Experience (TPE) Papers

Topics of Interest

PACT welcomes submissions on topics including, but not limited to:

Parallel architectures, including accelerators for AI and other domains
  Conventional parallel architectures (e.g., multicore, multithreaded, superscalar, and VLIW architectures) and heterogeneous architectures
  AI accelerators: design of specialized hardware for LLM inference and training (e.g., TPUs, NPUs, and custom silicon)
  In-memory and near-data processing: architectures that mitigate the “memory wall” posed by massive AI model parameters
  Heterogeneous systems: integration of CPUs, GPUs, and FPGAs for distributed AI workloads
  Scalable AI infrastructure: architectural support for multi-node, multi-GPU clusters and high-speed interconnects for LLM scaling
Compilers and tools for parallel architectures
  Conventional compilers and tools for parallel and heterogeneous architectures
  Dynamic translation and optimization
  ML compilers: automated optimization, kernel fusion, and code generation for ML frameworks
  LLMs for compilation: using AI to automate parallelization, loop transformations, and autotuning
  Dynamic optimization: runtime systems for adaptive AI model execution and sparse computation
  Quantization and compression: compiler-assisted techniques for model pruning and low-precision arithmetic
Middleware and runtime system support for parallel computing
  Resource management and scheduling
  Communication and synchronization
  Energy-aware middleware
  Quantum-HPC interfacing
  Serverless parallel computing (e.g., AWS Lambda)
  AI- and LLM-specific runtime support, including distributed inference and training, KV cache management, and computation-communication overlap
I/O issues in parallel computing and their application impact
  Data loading and preprocessing pipelines
  Metadata scalability
  Memory-storage convergence
  Large-scale data processing for AI models and applications
Hardware and software resilience and fault tolerance
  Checkpointing and restart
  Silent data corruption detection
  Self-healing runtimes
Applications and experimental studies of parallel processing, especially using AI models
Parallel programming languages, algorithms, and applications
Computational models for concurrent execution
Compiler and hardware support for parallel applications
Support for correctness in hardware and software
Reconfigurable parallel computing

Research Papers

Research papers will be evaluated by the PACT Program Committee based on:

Relevance: The paper should align with PACT’s topics of interest.
Novelty/Originality: The work should present new ideas or offer fresh perspectives.
Significance: The research should address an important problem and have the potential to influence future work.
Results: The claims should be well-supported by clear and validated results.
Comparison to Prior Work: The paper should properly discuss existing literature, highlighting similarities, differences, and improvements.

Tools and Practical Experience (TPE) Papers

TPE papers focus on practical applications, industry challenges, and experience reports. A TPE paper must clearly explain its functionality, summarize practical experience with realistic case studies, and describe any supporting artifacts. The title of a TPE paper must include the prefix “TPE:”. TPE papers follow the same submission guidelines and are reviewed by the same Program Committee as research papers.

TPE papers will be evaluated based on:

Originality: The paper should present PACT-related technologies applied to real-world problems.
Usability: The tool or software should have broad applicability and aid PACT-related research.
Documentation: The tool/software should be well-documented on a public website.
Benchmark Repository: A benchmark suite should be provided for testing.
Availability: Preference is given to tools/software that are freely available, though industry/commercial tools may be considered with justification.
Foundations: The paper should relate to PACT’s principles, though extensive theoretical discussion is not required.

Submission Guidelines

Submissions are due April 24, 2026, via the conference submission site: https://pact26.hotcrp.com. Ensure that your submission meets the following requirements:

Format: Papers are limited to 10 pages (excluding references) in ACM 8.5” x 11” format, double-column, 9pt font (e.g., using the sigconf LaTeX template). The text box must not exceed 7.15” x 9” (18.2cm x 22.9cm). Templates are available on the ACM Author Gateway.
Abstract: Papers must include an abstract of under 300 words.
Originality: Submissions must contain original material not previously published or under review elsewhere. Material presented at workshops without copyrighted proceedings may be submitted.
TPE Papers: Must be prefixed with “TPE:” in the title.
Double-Blind Review: The review process is double-blind to prevent bias. Submissions must not include author names, affiliations, or self-references that reveal authorship. Prior work by the authors must be cited in the third person.
Legibility: Figures and graphs must be readable without magnification.
Submission Format: Papers must be submitted in PDF format.
Supplementary Material: A single anonymized PDF may be uploaded with additional proofs, results, or datasets. Reviewers are not required to consult supplementary material.
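As a practical aid, the format requirements above can typically be met with the ACM `acmart` LaTeX class. The following is a minimal, hedged sketch of a preamble for an anonymized submission; the class options shown are the standard `acmart` ones, but authors should verify them against the current template on the ACM Author Gateway:

```latex
% Minimal sketch of a double-blind submission preamble using the
% standard ACM acmart class. The sigconf format is double-column
% with 9pt body text, matching the requirements above.
% 'review' adds line numbers for reviewers; 'anonymous' suppresses
% author names and affiliations for double-blind review.
\documentclass[sigconf,review,anonymous]{acmart}

\begin{document}

\title{TPE: Example Tool Name} % "TPE:" prefix only for TPE papers

% In acmart, the abstract is placed before \maketitle.
\begin{abstract}
An abstract of under 300 words, per the submission guidelines.
\end{abstract}

\maketitle

% Paper body: at most 10 pages, excluding references.

\end{document}
```

Note that author names and affiliations should still be omitted from the source entirely; the `anonymous` option is a safeguard, not a substitute for anonymization.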

Posters:

Poster submissions must follow the same formatting guidelines but are limited to 2 pages.
Papers not accepted for full presentation will automatically be considered for posters unless authors opt out in their abstract submission.
Two-page poster summaries will be included in the conference proceedings.

Conflicts of Interest

Authors must declare all conflicts of interest with PC members and external reviewers at submission time. Papers with undeclared or false conflicts may be rejected. Conflicts follow ACM’s Conflict of Interest Policy.

Artifact Evaluation

Authors of accepted papers are encouraged to submit their artifacts for evaluation. The Artifact Evaluation Committee assesses availability, functionality, and reproducibility. Successful artifacts will receive a seal of approval in the published paper. Authors can include a 2-page Artifact Appendix in the final paper.

We encourage authors to use open-source frameworks such as Docker, OCCAM, ReproZip, Code Ocean, and Collective Knowledge to improve artifact portability and reproducibility.

Camera-Ready Instructions

Page Limit: The final version must not exceed 11 pages, with an optional 2-page Artifact Appendix.
Extra Pages: Up to 2 additional pages may be purchased for $200 per page.

Important Dates

Abstract Submission Deadline: April 17, 2026
Paper Submission Deadline: April 24, 2026
Rebuttal Period: July 12-16, 2026
Author Notification: August 5, 2026
Artifact Submission: August 10, 2026
Camera-Ready Deadline: October 2, 2026

All deadlines are firm at midnight, Anywhere on Earth (AoE).

Code of Conduct

All participants must adhere to:

ACM Code of Ethics and Professional Conduct
IEEE Code of Ethics and IEEE Code of Conduct
ACM Policy Against Harassment

Publication Policies

PACT is supported by ACM and IEEE. Accepted papers will be published in both the ACM Digital Library and IEEE Xplore. By submitting a paper, authors agree to comply with all ACM and IEEE publication policies.

All authors must obtain an ORCID iD to complete the publication process. ORCID improves author discoverability, proper attribution, and name disambiguation.

We look forward to your submissions!
