
DFM 2022 : International Workshop on Data-Flow Models for extreme-scale computing


Link: https://ieeecompsac.computer.org/2022/dfm-2022/
 
When Jun 27, 2022 - Jul 1, 2022
Where Torino, Italy, and virtually
Submission Deadline Apr 7, 2022
Notification Due Apr 30, 2022
Final Version Due May 15, 2022
Categories    dataflow   HPC   computer architecture   parallel computing
 

Call For Papers


***********************************************************************
* 12th IEEE International Workshop on Data Flow Models *
* and Extreme-Scale Computing (DFM 2022) *
***********************************************************************
* Hosted as part of COMPSAC 2022, June 27 - July 1, 2022 *
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
* Note: depending on the public health situation, the event may be *
* held in a hybrid fashion. *
***********************************************************************

************************************************************************
* The DFM workshop aims to highlight advancements to event-driven and *
* data-driven models of computation for extreme scale computing, *
* parallel and distributed computing for high-performance computing, *
* and high-end embedded and cyber-physical systems. It also aims to *
* foster exchanges among dataflow practitioners, at both the *
* theoretical and practical levels. *
************************************************************************

***********************************************************************
* Important Dates *
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
* Workshop papers due: 7 April, 2022 *
* Workshop paper notifications: 15 May, 2022 *
***********************************************************************

***********************************************************************
* With the advent of true many-core systems, it has become *
* unreasonable to solely rely on control-based parallel models of *
* computation to achieve high scalability. Dataflow-inspired models *
* of computation, once discarded by the sequential programming crowd, *
* are once again considered serious contenders to help increase *
* programmability, performance, and scalability in highly parallel *
* and extreme scale systems, but also power and energy efficiency, as *
* they (at least partially) relieve the parallel application *
* programmer from performing tedious and perilous synchronization *
* bookkeeping, but also provide clear scheduling points for the *
* system software and hardware. They are also an invaluable tool for *
* high-end embedded computing to deal with real-time constraints. *
* However, to reach such high scalability levels, extreme scale *
* systems rely on heterogeneity, hierarchical memory subsystems, etc. *
* Meanwhile, legacy programming and execution models, such as MPI and *
* OpenMP, add asynchronous and data-driven constructs to their models *
* (a brief illustrative sketch follows the topic list below), all the *
* while trying to take into account the very complex hardware *
* targeted by parallel applications. Consequently, *
* programming and execution models, trying to combine both legacy *
* control flow-based and dataflow-based aspects of computing, have *
* also become increasingly complex to handle. Developing new models *
* and their implementation, from the application programmer level, to *
* the system level, down to the hardware level is key to provide *
* better data- and event-driven systems which can efficiently *
* exploit the wealth of diversity that composes current *
* high-performance systems, for extreme scale parallel computing. *
* To this end, the whole stack, from the application programming *
* interface down to the hardware must be investigated for *
* programmability, performance, scalability, energy and power *
* efficiency, as well as resiliency and fault-tolerance. *
* *
* Researchers and practitioners all over the world, from both *
* academia and industry, working in the areas of language, system *
* software, and hardware design, parallel computing, execution *
* models, and resiliency modeling are invited to discuss state of *
* the art solutions, novel issues, recent developments, applications, *
* methodologies, techniques, experience reports, and tools for the *
* development and use of dataflow models of computation. Topics of *
* interest include, but are not limited to, the following: *
* *
* - Programming languages and compilers for existing and new *
* languages, in particular single-assignment and functional languages *
* *
* - System software: Operating systems, runtime systems *
* *
* - Hardware design: ASICs and reconfigurable computing (FPGAs) *
* *
* - Resiliency and fault-­tolerance for parallel and distributed *
* systems *
* *
* - New dataflow-inspired execution models, in particular strict and *
* non-strict models *
* *
* - Hybrid system design for control-flow and data-flow based systems *
* *
* - Dataflow-based AI architectures and accelerators *
* *
* - Dataflow-inspired optimizations to ML frameworks, graphs, etc. *
* *
* - Position papers on the future of dataflow in the era of many-core *
* systems and beyond *
***********************************************************************
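
As an illustration of the asynchronous, data-driven constructs mentioned in the call above, the following minimal C/OpenMP sketch (an editorial example, not part of the workshop materials) uses task dependences so that the runtime, rather than the programmer, performs the synchronization bookkeeping. The variable names and build command are assumptions for illustration only.

/* Sketch: dataflow-style task dependences in OpenMP (version 4.0 or later).
 * Build, for example, with: gcc -fopenmp dfm_sketch.c -o dfm_sketch */
#include <stdio.h>

int main(void) {
    int a = 0, b = 0, c = 0;

    #pragma omp parallel
    #pragma omp single
    {
        /* Producer tasks: each one writes a single value. */
        #pragma omp task depend(out: a)
        a = 1;

        #pragma omp task depend(out: b)
        b = 2;

        /* Consumer task: the runtime starts it only once both producers
         * have completed, because of the declared dependences on a and b. */
        #pragma omp task depend(in: a, b) depend(out: c)
        c = a + b;

        #pragma omp taskwait   /* wait for the child tasks to finish */
        printf("c = %d\n", c); /* prints: c = 3 */
    }
    return 0;
}

Fully dataflow-inspired models generalize this producer/consumer pattern: every task fires when, and only when, its input data are available.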




***********************************************************************
* Authors are invited to submit original, unpublished research work, *
* as well as industrial practice reports. Simultaneous submission to *
* other publication venues is not permitted. In accordance with IEEE *
* policy, submitted manuscripts will be checked for plagiarism. *
* Instances of alleged misconduct will be handled according to the *
* IEEE Publication Services and Product Board Operations Manual. *
* *
* Please note that in order to ensure the fairness of the review *
* process, COMPSAC follows the double-blind review procedure. *
* Therefore we kindly ask authors to remove their names, affiliations *
* and contacts from the header of their papers in the review version. *
* Please also redact all references to authors’ names, affiliations *
* or prior works from the paper when submitting papers for review. *
* Once accepted, authors can then include their names, affiliations *
* and contacts in the camera-ready revision of the paper, and put the *
* references to their prior works back. *
***********************************************************************

***********************************************************************
* Formatting *
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
* *
* Workshop papers are limited to 6 pages. Page limits are inclusive *
* of tables, figures, appendices, and references. Workshop papers may *
* include up to 2 additional pages, subject to a page charge of *
* USD 250 per page. *
***********************************************************************
