We introduce HammerDrive, a novel architecture for task-aware visual attention prediction in driving. The proposed architecture is learnable from data and can reliably infer the driver's current focus of attention in real time, while requiring only limited and easy-to-access telemetry data from the vehicle. We build the proposed architecture on two core concepts: 1) driving can be modeled as a collection of sub-tasks (maneuvers), and 2) each sub-task affects the way a driver allocates visual attention resources, i.e., their eye-gaze fixations. HammerDrive comprises two networks: a hierarchical monitoring network of forward-inverse model pairs for sub-task recognition and an ensemble network of task-dependent convolutional neural network modules for visual attention modeling. We assess the ability of HammerDrive to infer driver visual attention on data we collected from 20 experienced drivers in a virtual reality-based driving simulator experiment. We evaluate the accuracy of our monitoring network for sub-task recognition and show that it is an effective and lightweight network for reliable real-time tracking of driving maneuvers, with above 90% accuracy. Our results show that HammerDrive outperforms a comparable state-of-the-art deep learning model for visual attention prediction on numerous metrics, with ~13% improvement in both Kullback-Leibler divergence and similarity, and demonstrate that task-awareness is beneficial for driver visual attention prediction.
HammerDrive: A Task-Aware Driving Visual Attention Model
IEEE Transactions on Intelligent Transportation Systems; 23, 6; 5573-5585
2022-06-01
1903167 bytes
Journal article
Electronic resource
English
Visual Attention While Performing Driving and Driving-Related Tasks
British Library Conference Proceedings | 1997
Online learning of task-driven object-based visual attention control
British Library Online Contents | 2010
Uncertainty-Aware Attention Guided Sensor Fusion For Monocular Visual Inertial Odometry
Deutsches Zentrum für Luft- und Raumfahrt (DLR) | 2020
Optical Flow Fields and Visual Attention in Car Driving
British Library Conference Proceedings | 2003