An autonomous navigation scheme based on sequential images is presented for planetary landing in unknown environments. A lander is assumed to be equipped with only an inertial measurement unit and a monocular camera. The emphasis of the paper is on the ability of the proposed navigation method to estimate the lander’s states without any a priori knowledge of the environment or extra sensors. Assuming that the landing surface is in the local level plane, an implicit measurement model is derived from observations of features with unknown three-dimensional positions tracked in sequential images. The derived measurement model is fused with measurements from the inertial measurement unit using an extended Kalman filter. Finally, an observability analysis of the proposed navigation system is performed and yields the closed-form expression of the unobservable directions. Simulation results verify the observability analysis and show that all lander states can be estimated except horizontal position and global rotation about the gravity direction.
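The fusion step the abstract describes — combining inertial propagation with camera-derived measurements in an extended Kalman filter — follows the standard predict/update structure. The sketch below is a minimal, generic illustration of that structure on a 1-D position/velocity state with noisy position measurements; the dynamics, noise levels, and measurement model here are illustrative assumptions, not the paper's implicit measurement model.

```python
import numpy as np

def ekf_predict(x, P, F, Q):
    """Propagate state estimate x and covariance P through dynamics F."""
    return F @ x, F @ P @ F.T + Q

def ekf_update(x, P, z, H, R):
    """Fuse measurement z with the predicted state via the Kalman gain."""
    S = H @ P @ H.T + R             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x + K @ (z - H @ x)         # corrected state
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Illustrative 1-D constant-velocity model (assumed, not from the paper)
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition
Q = 0.01 * np.eye(2)                   # process noise (assumed)
H = np.array([[1.0, 0.0]])             # observe position only
R = np.array([[0.25]])                 # measurement noise (assumed)

x = np.array([0.0, 0.0])               # initial state: position, velocity
P = np.eye(2)                          # initial covariance
rng = np.random.default_rng(0)
true_pos, true_vel = 0.0, 1.0
for _ in range(50):
    true_pos += true_vel * dt
    x, P = ekf_predict(x, P, F, Q)
    z = np.array([true_pos + rng.normal(0.0, 0.5)])
    x, P = ekf_update(x, P, z, H, R)

print(x, np.diag(P))  # velocity estimate converges toward 1.0
```

Even though only position is measured, velocity becomes observable through the dynamics coupling — the same mechanism by which the paper's filter recovers states not directly measured, subject to the unobservable directions identified in its analysis.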


Title: Autonomous Navigation Based on Sequential Images for Planetary Landing in Unknown Environments

Contributors: Xu, Chao (author) / Wang, Dayi (author) / Huang, Xiangyu (author)

Published in:

Publication date: 2017-07-18

Size: 16 pages

Type of media: Article (Journal)

Type of material: Electronic Resource

Language: English