Understanding and predicting human driving behavior plays an important role in the development of intelligent vehicle systems, particularly in Advanced Driver Assistance Systems (ADAS), which must anticipate dangerous maneuvers and take appropriate action. However, accurately predicting lane-changing (LC) behavior remains challenging because of the complexity and uncertainty of traffic conditions and the need for labeled data. To address this problem, we propose a novel framework, denoted LCNet, that predicts lane-changing behavior via joint learning of front-view video images and driver physiological signals. First, a temporal consistency module allows both labeled and unlabeled video frames to be used during training, while no extra computation is required during inference. Second, a new penalty term, sensitive to local continuity, is introduced for learning sequential physiological signals. Third, a new loss function is designed so that LCNet efficiently learns co-occurrence features from the video scene-optical flow branch and the physiology branch. Experiments conducted on a real-world driving data set demonstrate that LCNet learns the underlying features of upcoming lane-changing behavior and significantly outperforms other advanced models.
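The loss structure described in the abstract can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the function names, the squared-difference form of the two regularizers, and the weighted combination are hypothetical stand-ins, not the paper's actual LCNet formulation.

```python
import numpy as np

def temporal_consistency_loss(probs):
    """Illustrative temporal consistency term: penalize changes in per-frame
    class probabilities between consecutive frames, usable on unlabeled frames.

    probs: (T, C) array of per-frame class probabilities.
    """
    return float(np.mean(np.sum((probs[1:] - probs[:-1]) ** 2, axis=1)))

def local_continuity_penalty(signal):
    """Illustrative penalty sensitive to local continuity: squared first
    differences of a 1-D physiological signal (e.g. heart rate over time)."""
    return float(np.mean(np.diff(signal) ** 2))

def joint_loss(ce_video, ce_physio, probs, signal, alpha=0.1, beta=0.1):
    """Hypothetical joint objective: supervised losses from the two branches
    plus the two regularizers, with assumed weights alpha and beta."""
    return (ce_video + ce_physio
            + alpha * temporal_consistency_loss(probs)
            + beta * local_continuity_penalty(signal))
```

With constant probabilities and a flat signal, both regularizers are zero and the joint loss reduces to the sum of the two supervised terms.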
Joint learning of video images and physiological signals for lane-changing behavior prediction
Transportmetrica A: Transport Science; 18, 3; 1234-1253
2022-12-02
20 pages
Article (Journal)
Electronic Resource
Robust Prediction of Lane Departure Based on Driver Physiological Signals
SAE Technical Papers | 2016
Robust Prediction of Lane Departure Based on Driver Physiological Signals
British Library Conference Proceedings | 2016
Characterizing lane changing behavior and identifying extreme lane changing traits
Taylor & Francis Verlag | 2023