BigDataCloud 2015 : 4th Workshop on Big Data Management in Clouds
Link: http://www.irisa.fr/kerdata/bigdatacloud/2015/

Call For Papers
The fourth edition of the Workshop on Big Data Management in Clouds will be held in Vienna, Austria. BigDataCloud 2015 follows the successful previous editions held in conjunction with Euro-Par. Its goal is to bring together the data management and Cloud / Grid / P2P communities in order to complement work on Big Data handling with a comprehensive system and infrastructure perspective.
As data volumes increase exponentially in more and more fields of science, the challenges posed by handling Big Data gain increasing importance. Large scientific experiments, such as climate modelling, genome mapping, and high-energy physics simulations, generate data volumes reaching petabytes per year, which are then used for real-time or offline processing. Initially designed for powerful and expensive supercomputers, such applications have seen increasing adoption on clouds, exploiting their elasticity and economic model.

However, running such applications efficiently on clouds is challenging. One open challenge is how to handle this "data deluge". Sharing, disseminating, and analyzing large data sets has become a critical issue despite the deployment of petascale computing systems and optical networking speeds reaching up to 100 Gbps. While MapReduce covers a large fraction of the development space, many applications are still better served by other models and systems. In this context, we need to embrace new programming models, scheduling schemes, and hybrid infrastructures, and to scale out of single data centers to geographically distributed deployments, in order to cope with these challenges effectively.

The BigDataCloud workshop provides a platform for the dissemination of recent research efforts that explicitly address these challenges. It welcomes advanced solutions for the efficient management of Big Data in the context of Cloud computing, as well as new development and deployment efforts for running data-intensive computing workloads. In particular, we are interested in how Cloud-based technologies can meet the data-intensive scientific challenges of HPC applications that are not well served by current supercomputers or grids and are being ported to Cloud platforms.
The goal of the workshop is to assess the current state of the field, introduce future directions, and present architectures and services for future Clouds supporting data-intensive computing. The BigDataCloud workshop calls for contributions that address fundamental research and system issues in Cloud data management, including but not limited to the following:

- Cloud storage architectures for Big Data
- Reliability of data-intensive applications and services running on the Cloud
- Query processing and indexing in Cloud computing systems
- Data privacy and security in Clouds
- Data-intensive computing on hybrid infrastructures (Grids/Clouds/P2P)
- Cloud storage resource management
- Data-intensive Cloud-based applications
- Content delivery networks using storage Clouds
- Data-intensive scalable computing on Clouds
- Data management within and across multiple geographically distributed data centers
- Data handling in MapReduce-based computations
- Data management in HPC Clouds
- Advanced programming models for IaaS, PaaS and SaaS
- Elasticity for Cloud data management systems
- Self-* and adaptive mechanisms
- Many-Task Computing in the Cloud
- Performance evaluation of Cloud environments and technologies
- Event streaming and real-time processing on Clouds
- Energy efficiency for Big Data in Clouds