DFM 2021: International Workshop on Data Flow Models for Extreme-Scale Computing
Link: https://ieeecompsac.computer.org/2021/dfm/

Call For Papers
9th IEEE International Workshop on Data Flow Models and Extreme-Scale Computing (DFM 2021)
Hosted as part of COMPSAC 2021, July 12-16, 2021, All-Virtual

This workshop is organized as part of the activities of the IEEE Computer Society Dataflow STC. The ninth installment of the international workshop on Data Flow Models (DFM) for extreme-scale computing is held this year in conjunction with the COMPSAC conference. The purpose of DFM continues to be to bring together researchers interested in novel computational models based on dataflow principles of execution.

The switch to multi-core systems, at both the high-performance and embedded levels, has made concurrency a major issue: the core count per chip keeps increasing, while energy and resiliency have come to the fore as problems to tackle. Computer systems, for both high-performance and embedded computing, have now fully embraced parallelism at the hardware and software levels. From the HPC systems viewpoint, new challenges have arisen that are familiar issues in the embedded world: power and energy efficiency are now major obstacles to building efficient supercomputers. Conversely, harnessing true parallelism is now necessary to efficiently exploit embedded systems equipped with multiple cores. Moreover, fault-tolerance and resiliency must also be taken into consideration, at both the hardware and software levels. Finally, many such systems (both embedded and HPC) are networked together, forming extremely large distributed and parallel systems.

Dataflow-inspired models of computation, once discarded by the sequential programming crowd, are again considered serious contenders to help increase programmability, performance, and scalability in highly parallel and extreme-scale systems. By their very nature, dataflow- and event-driven-inspired models tend to naturally solve (if only partially) some of the newer problems related to power and energy efficiency, or provide fertile ground for implementing efficient fault-tolerance and resiliency mechanisms, as many of the required properties are enmeshed in the models themselves.

Yet, to achieve high scalability and performance, modern computing systems, both HPC and embedded, rely on heterogeneous means to carry out computations: GPUs, FPGAs, etc. Meanwhile, legacy programming and execution models, such as MPI and OpenMP, add asynchronous and data-driven constructs to their models, all while trying to take into account the very complex hardware targeted by parallel applications. Consequently, programming and execution models that try to combine legacy control-flow-based and dataflow-based aspects of computing have also become increasingly complex to handle.

Developing new models and their implementations, from the application programmer level, to the system level, down to the hardware level, is key to providing better data- and event-driven systems that can efficiently exploit the wealth of diversity found in current high-performance systems for extreme-scale parallel computing. To this end, the whole stack, from the application programming interface down to the hardware, must be investigated for programmability, performance, scalability, energy and power efficiency, as well as resiliency and fault-tolerance. All these aspects may have a different impact on high-performance computing and embedded systems.
Researchers and practitioners all over the world, from both academia and industry, working in the areas of language, system software, and hardware design, parallel computing, execution models, and resiliency modeling are invited to discuss state-of-the-art solutions, novel issues, recent developments, applications, methodologies, techniques, experience reports, and tools for the development and use of data flow models of computation.

DFM 2021 solicits novel papers on topics that include, but are not limited to, the following:
• Programming languages and compilers for existing and new languages, in particular single-assignment and functional languages
• System software: operating systems, runtime systems
• Hardware design: ASICs and reconfigurable computing (FPGAs)
• Resiliency and fault-tolerance for parallel and distributed systems
• New dataflow-inspired execution models, in particular strict and non-strict models
• Hybrid system design for control-flow- and data-flow-based systems
• Applications and modeling for IoT and Edge Computing systems
• Position papers on the future of data flow in the era of parallel and distributed many-core systems, and beyond, including heterogeneous systems

SUBMISSION INFORMATION
DFM 2021 will accept both full papers (6 pages) and short papers (4 pages). Full papers may go up to 8 pages for a fee. Papers should be prepared using the IEEE Proceedings format; short papers may be submitted in the form of extended abstracts. All accepted papers will appear in the Computer Society Digital Library.
Submission site: https://easychair.org/my/conference.cgi?welcome=1;conf=compsac2021

IMPORTANT DATES
Workshop papers due: 21 April 2021
Workshop paper notifications: 15 May 2021
Camera-ready and registration due: 31 May 2021

PROGRAM COMMITTEE
Stéphane Zuckerman, CY Cergy Paris Université
Erik Altman, IBM
Albert Cohen, Google
John Feo, Pacific Northwest National Laboratory
Guang R. Gao, University of Delaware
Jean-Luc Gaudiot, University of California, Irvine
Roberto Giorgi, University of Siena
Sven-Bodo Scholz, Heriot-Watt University
Arthur Stoutchinin, STMicroelectronics