Visual Place Recognition (VPR) provides a strong location prior for autonomous vehicles to initialize map-based visual SLAM systems, especially when the environment has changed over the long term. Condition changes and viewpoint changes, which affect the features extracted from images, are two of the major challenges in recognizing a previously visited place. Existing VPR methods focus on improving the robustness of global features to address them, but ignore the auxiliary benefits that local features can offer. We therefore introduce a novel hierarchical place recognition method in which both global and local features are derived from a homologous VLAD representation to improve VPR performance. Our model is weakly supervised by GPS labels, and we design a fine-tuning strategy with a coupled triplet loss to make the model better suited to extracting local features. In the proposed hierarchical architecture, we first rank the database with global features to obtain the top candidates, and then apply a modified DTW algorithm to re-rank those candidates using local features. Moreover, greater weights are assigned to features in regions of interest, and the results show that this makes these local features more influential in re-ranking. Experiments on the Pittsburgh30k and Tokyo247 benchmarks show that our approach outperforms several existing VLAD-based VPR methods.
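The two-stage pipeline described in the abstract can be illustrated with a minimal sketch: global VLAD descriptors rank the database, and the top candidates are then re-ranked by aligning local feature sequences with DTW. The function names, the use of plain (unmodified) DTW, and the L2 distances below are assumptions for illustration only; the paper's coupled triplet loss, region-of-interest weighting, and specific DTW modification are not reproduced here.

```python
import numpy as np

def rank_by_global(query_vlad, db_vlads, top_k=20):
    """Rank database images by L2 distance between global VLAD descriptors."""
    dists = np.linalg.norm(db_vlads - query_vlad, axis=1)
    return np.argsort(dists)[:top_k]

def dtw_cost(seq_a, seq_b):
    """Standard DTW alignment cost between two sequences of local descriptors
    (stand-in for the paper's modified DTW)."""
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def rerank(query_locals, db_locals, candidates):
    """Re-rank the top candidates by local-feature alignment cost."""
    scores = [dtw_cost(query_locals, db_locals[idx]) for idx in candidates]
    return [candidates[i] for i in np.argsort(scores)]

# Hypothetical usage: retrieve top-20 by global VLAD, then re-rank with local features.
# candidates = rank_by_global(q_global, db_globals)
# best_match = rerank(q_locals, db_locals, candidates)[0]
```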





    Title:

    Weak Supervised Hierarchical Place Recognition with VLAD-Based Descriptor


    Additional title information:

    SAE Technical Papers


    Contributors:
    Fang, Kai (Author) / Wang, Yafei (Author) / Li, Zexing (Author)

    Conference:

    SAE 2022 Intelligent and Connected Vehicles Symposium ; 2022



    Publication date:

    2022-12-22




    Media type:

    Article (Conference)


    Format:

    Print


    Language:

    English




    Weak Supervised Hierarchical Place Recognition with VLAD-Based Descriptor

    Fang, Kai / Li, Zexing / Wang, Yafei | British Library Conference Proceedings | 2022


    Towards optimal VLAD for human action recognition from still images

    Zhang, Lei / Li, Changxi / Peng, Peipei et al. | British Library Online Contents | 2016



    Pedestrian motion recognition via Conv‐VLAD integrated spatial‐temporal‐relational network

    Shiyu Peng / Tingli Su / Xuebo Jin et al. | DOAJ | 2020

    Free access
