
MMAutomotive 2018 : Multimodal Interaction in Automotive Applications


Link: https://sites.google.com/view/multimodalautomotive
 
When N/A
Where N/A
Submission Deadline Feb 5, 2018
Notification Due Mar 15, 2018
Final Version Due Apr 28, 2018
Categories    automotive   multimodality   interaction
 

Call For Papers

Multimodal Interaction in Automotive Applications
=================================================

With the smartphone becoming ubiquitous, pervasive distributed computing is becoming a reality, and aspects of the Internet of Things increasingly find their way into our daily lives. Users interact multimodally with their smartphones, and expectations regarding natural interaction have risen dramatically in recent years. Moreover, users have started to project these expectations onto all kinds of interfaces they encounter in their daily lives. Car manufacturers do not yet fully meet these expectations, since automotive development cycles are still much longer than those in the software industry. The clear trend, however, is that manufacturers add technology to cars to deliver on their vision and promise of a safer drive. Multiple modalities are already available in today's dashboards, including haptic controllers, touch screens, 3D gestures, voice, secondary displays, and gaze.
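To make the notion of multimodal input concrete, the following minimal Python sketch shows one way such heterogeneous dashboard events could be represented uniformly. Every name here is hypothetical and not drawn from any existing automotive framework:

    # Hypothetical sketch: a uniform event type for in-vehicle input modalities.
    from dataclasses import dataclass
    from enum import Enum, auto

    class Modality(Enum):
        HAPTIC_CONTROLLER = auto()
        TOUCH_SCREEN = auto()
        GESTURE_3D = auto()
        VOICE = auto()
        GAZE = auto()

    @dataclass
    class InputEvent:
        modality: Modality
        timestamp_ms: int   # capture time, used to correlate near-simultaneous events
        payload: dict       # modality-specific data, e.g. a recognized utterance

    # A dashboard stack could route events from all sensors through one queue
    # and correlate them by time, e.g. a spoken command plus a gaze target:
    events = [
        InputEvent(Modality.VOICE, 1000, {"utterance": "navigate there"}),
        InputEvent(Modality.GAZE, 1010, {"target": "parking_garage_poi"}),
    ]
    for event in events:
        print(event.modality.name, event.payload)
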
In fact, car manufacturers are aiming for a personal assistant with a deep understanding of the car and the ability to meet driving-related demands as well as non-driving-related needs. For instance, such an assistant can naturally answer any question about the car and help schedule service when needed. It can find the preferred gas station along the route, or, even better, plan a stop while ensuring the driver still arrives in time for a meeting. It understands that a perfect business meal involves more than finding a sponsored restaurant: it includes unbiased reviews, availability, budget, and trouble-free parking, and it notifies all invitees of the meeting time and location. Moreover, multimodality can serve as a source for fatigue detection. The main goal of multimodal interaction and driver assistance systems is to ensure that the driver can focus on the primary task of driving safely.
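The scheduling side of such assistance reduces to simple time-budget arithmetic. A toy sketch follows; all figures and the function name are invented for illustration:

    # Toy sketch: does a fuel stop still fit before the driver's next meeting?
    def stop_fits_schedule(driving_min: float, detour_min: float,
                           stop_min: float, meeting_in_min: float,
                           buffer_min: float = 5.0) -> bool:
        """True if remaining drive + detour + refueling still leaves a
        safety buffer before the meeting starts."""
        return driving_min + detour_min + stop_min + buffer_min <= meeting_in_min

    # Meeting in 60 minutes, 40 minutes of driving left, 4-minute detour,
    # 8 minutes at the pump: 52 minutes total, so the stop fits.
    print(stop_fits_schedule(40, 4, 8, 60))   # True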

This is why the biggest innovations in today's cars have happened in the way we interact with integrated devices such as the infotainment system. For instance, voice-based interaction has been shown to be less distracting than interaction with a visual-haptic interface, but it is only one piece of how we interact multimodally in today's cars, which are shifting away from the GUI as the only means of interaction. This shift also demands additional effort to establish a mental model for the user: with a plethora of available modalities each requiring its own mental map, learnability decreases considerably. Multimodality may also help here to decrease distraction. In this special issue we present the challenges and opportunities of multimodal interaction for reducing cognitive load and increasing learnability, as well as current research that has the potential to be employed in tomorrow's cars.
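One simple way multimodality can reduce distraction is to pick the least demanding output channel for the current driving context. The sketch below illustrates the idea; the demand estimate and threshold are hypothetical placeholders, not a validated distraction model:

    # Hypothetical sketch: prefer speech output when visual demand is high.
    def choose_output_modality(speed_kmh: float, maneuver_pending: bool) -> str:
        visual_demand = speed_kmh / 130.0 + (0.5 if maneuver_pending else 0.0)
        if visual_demand > 0.6:
            return "voice"    # keep the driver's eyes on the road
        return "screen"       # low demand: a glanceable display is acceptable

    print(choose_output_modality(120, True))    # voice
    print(choose_output_modality(30, False))    # screen
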
In this special issue, we especially invite researchers, scientists, and developers to submit contributions that are original, unpublished, and not under submission to any other journal, magazine, or conference; extended versions of prior work must contain at least 30% novel content. We are soliciting original research related to multimodal smart and interactive media technologies in areas including - but not limited to - the following:
* In-vehicle multimodal interaction concepts
* Multimodal Head-Up Displays (HUDs) and Augmented Reality (AR) concepts
* Reducing driver distraction and cognitive load and demand with multimodal interaction
* (Pro-active) in-car personal assistant systems
* Driver assistance systems
* Information access (search, browsing, etc.) in the car
* Interfaces for navigation
* Text input and output while driving
* Biometrics and physiological sensors as a user interface component
* Multimodal affective intelligent interfaces
* Multimodal automotive user-interface frameworks and toolkits
* Naturalistic/field studies of multimodal automotive user interfaces
* Multimodal automotive user-interface standards
* Detecting and estimating user intentions employing multiple modalities

Guest Editors
=============
Dirk Schnelle-Walka, Harman International, Connected Car Division, Germany
Phil Cohen, Voicebox, USA
Bastian Pfleging, Ludwig-Maximilians-Universität München, Germany

Submission Instructions
=======================

1-page abstract submission: 05.02.2018
Invitation for full submission: 15.03.2018
Full Submission: 28.04.2018
Notification about acceptance: 15.06.2018
Final article submission: 15.07.2018
Tentative Publication: ~ 09/2018

Companion website: https://sites.google.com/view/multimodalautomotive/

Authors are requested to follow instructions for manuscript submission to the Journal of Multimodal User Interfaces (http://www.springer.com/computer/hci/journal/12193) and to submit manuscripts at the following link: https://easychair.org/conferences/?conf=mmautomotive2018.
