IASDS 2016 : 8th Workshop on Interfaces and Architectures for Scientific Data Storage
Link: https://press3.mcs.anl.gov/iasds2016/
Call For Papers | |||||||||||||||
[Apologies if you receive multiple copies of this message]
8th Workshop on Interfaces and Architectures for Scientific Data Storage (IASDS 2016)
August 16-19, 2016, held in conjunction with ICPP 2016 in Philadelphia, USA
More info at: https://press3.mcs.anl.gov/iasds2016/

High-performance computing simulations and large scientific experiments generate tens of terabytes of data, and these data sizes grow each year. Existing systems for storing, managing, and analyzing data are being pushed to their limits by these applications, and new techniques are necessary to enable efficient data processing for future simulations and experiments.

The needs of scientific applications have driven a continuous increase in the scale of parallel systems, as demonstrated by the evolution of the Top500 list. However, this growth in computation scale has not been matched by a comparable increase in I/O scale. For example, earlier supercomputers were designed with 1 GB/s of parallel I/O bandwidth for every TFLOPS of compute, whereas current systems provide only 1 GB/s for every 10 TFLOPS. This widening bottleneck makes it all the more pressing to use the I/O subsystem as efficiently as possible. Scalable I/O has already been identified as critical for PFLOPS systems. The exascale systems forecast for 2018-2020 will presumably have O(1B) cores and will be hierarchical in both platform and algorithms. This hierarchy implies a longer pipeline for moving data from cores to storage and back, further exposing the I/O latency perceived by current MPP architectures.

This workshop will provide a forum for engineers and scientists to present and discuss their most recent work on the storage, management, and analysis of data for scientific workloads. Emphasis will be placed on forward-looking approaches that tackle the challenges of storage at extreme scale or provide better abstractions for use in scientific workloads.

Topics of interest include, but are not limited to:
• parallel file systems
• scientific databases
• active storage
• scientific I/O middleware
• extreme-scale storage
• analysis of large data sets
• NoSQL storage solutions and techniques
• energy-aware file systems

All papers will be reviewed by at least three independent reviewers from the international program committee. Papers will be selected based on their originality, their interest to the research community, the quality of the use-case description, the description of the technical solution, the impact of the application and/or technical contribution, and the status of the work. All papers presented at the main conference and workshops will be submitted to IEEE Xplore for publication and EI indexing. Acceptable submissions include short papers and work-in-progress reports as well as full papers.

Important Dates
Paper Submission Deadline: May 13, 2016
Author Notification: May 27, 2016
Final Manuscript Due: June 3, 2016

Organization

Workshop Chairs
Javier Garcia Blas, Computer Science and Engineering Department, Carlos III University, fjblas@inf.uc3m.es
Jonathan Jenkins, Mathematics and Computer Science Division, Argonne National Laboratory, jenkins@mcs.anl.gov

Program Committee
Stergios V. Anastasiadis, University of Ioannina
Jorge Barbosa, University of Porto
Lars Ailo Bongo, University of Tromsø
Jesus Carretero, Carlos III University
Salvatore Distefano, Politecnico di Milano
Pablo Llopis, Carlos III University
Florin Isaila, Argonne National Laboratory
John Jenkins, Argonne National Laboratory
Helen Karatza, Aristotle University of Thessaloniki
Peter Kropf, Université de Neuchâtel
Julian Kunkel, DKRZ
David Hart, University Corporation for Atmospheric Research
Jay Lofstead, Sandia National Laboratories
Ricardo Morla, University of Porto
Ron Oldfield, Sandia National Laboratories
Juan Antonio Rico Gallego, University of Extremadura
Philip Rhodes, University of Mississippi
Duan Rubing, Institute of High Performance Computing (A*STAR)
Ali Shoker, HASLab/INESC-TEC