Highlights

- A bottom-up convolutional multi-task network for pedestrian intention prediction.
- A runtime that is nearly independent of the number of pedestrians.
- Can be easily extended to perform a wide variety of vision-related tasks.

    Abstract The ability to predict pedestrian behaviour is crucial for road safety, traffic management systems, Advanced Driver Assistance Systems (ADAS) and, more broadly, autonomous vehicles. We present a vision-based system that simultaneously locates pedestrians in the scene, estimates their body pose, and predicts their intention to cross the road. Given a single image, our proposed neural network follows a bottom-up design and thus runs in nearly constant time, without relying on a pedestrian detector. Our method jointly detects human body poses and predicts pedestrian intention in a multi-task framework. Experimental results show that the proposed model outperforms the state-of-the-art precision scores for the task of intention prediction by approximately 20% while running in real time (5 fps). The source code is publicly available so that it can be easily integrated into an ADAS or into any traffic light management system.
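    As a rough illustration of the bottom-up multi-task idea described in the abstract, the sketch below wires a single shared backbone to two convolutional heads, one producing body-pose keypoint heatmaps and one producing a per-location crossing-intention map, so the per-image cost does not grow with the number of pedestrians. This is not the authors' implementation: the ResNet backbone, layer names, channel sizes, and loss weighting are all assumptions made for the example.

    # Illustrative sketch only: a shared backbone with two convolutional
    # heads. Architecture details are assumptions, not the paper's design.
    import torch
    import torch.nn as nn
    import torchvision.models as models

    class BottomUpMultiTaskNet(nn.Module):
        def __init__(self, num_keypoints=17):
            super().__init__()
            # Shared feature extractor: run once per image, so the cost
            # does not depend on the number of pedestrians (bottom-up).
            resnet = models.resnet50(weights=None)
            self.backbone = nn.Sequential(*list(resnet.children())[:-2])
            # Pose head: one heatmap channel per body joint.
            self.pose_head = nn.Conv2d(2048, num_keypoints, kernel_size=1)
            # Intention head: per-location probability of crossing.
            self.intent_head = nn.Conv2d(2048, 1, kernel_size=1)

        def forward(self, images):
            feats = self.backbone(images)        # (B, 2048, H/32, W/32)
            pose_maps = self.pose_head(feats)    # joint heatmaps
            intent_map = torch.sigmoid(self.intent_head(feats))
            return pose_maps, intent_map

    # Multi-task training objective: a weighted sum of the pose and
    # intention losses (the weight w is a placeholder assumption).
    def multitask_loss(pose_maps, intent_map, pose_gt, intent_gt, w=0.5):
        pose_loss = nn.functional.mse_loss(pose_maps, pose_gt)
        intent_loss = nn.functional.binary_cross_entropy(intent_map, intent_gt)
        return pose_loss + w * intent_loss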


    Title: Pedestrian intention prediction: A convolutional bottom-up multi-task approach

    Contributors:

    Publication date: 2021-06-04

    Type of media: Article (Journal)

    Type of material: Electronic Resource

    Language: English




    Similar titles:

    Scene Spatio-Temporal Graph Convolutional Network for Pedestrian Intention Estimation

    Naik, Abhilash Y. / Bighashdel, Ariyan / Jancura, Pavol et al. | IEEE | 2022




    Visual Exposes You: Pedestrian Trajectory Prediction Meets Visual Intention

    Zhong, Xian / Yan, Xu / Yang, Zhengwei et al. | IEEE | 2023


    PIT: Progressive Interaction Transformer for Pedestrian Crossing Intention Prediction

    Zhou, Yuchen / Tan, Guang / Zhong, Rui et al. | IEEE | 2023