This article offers a brief overview of multimodal (speech, touch, gaze, etc.) input theory as it pertains to common in-vehicle tasks and devices. After a brief introduction, we walk through a sample multimodal interaction, detailing the steps involved and how the information necessary to the interaction can be obtained by combining input modes in various ways. We also discuss how contemporary in-vehicle systems take advantage of multimodality (or fail to do so), and how the capabilities of such systems might be broadened in the future via clever multimodal input mechanisms.
Situation-Aware, User-Centric Multimodality for Automotive
AmE 2011 - Automotive meets Electronics - Proceedings of the 2nd GMM Conference; 2011; Dortmund, Germany
2011-01-01
4 pages
Conference paper
Electronic Resource
English