Most visual odometry (VO) and visual simultaneous localization and mapping (VSLAM) systems rely heavily on robust keypoint detection and matching. In images taken in the underwater environment, phenomena such as shallow-water caustics and dynamic objects such as fish can lead to the detection and matching of unreliable (unsuitable) keypoints within the visual motion estimation pipeline. We propose a plug-and-play keypoint rejection system that discards keypoints unsuitable for tracking in order to obtain a robust visual ego-motion estimation. A convolutional neural network is trained in a supervised manner, taking as input image patches with a detected keypoint at their center and producing as output the probability that such a keypoint is suitable for tracking and mapping. We provide experimental evidence that the system prevents the tracking of unsuitable keypoints in a state-of-the-art VSLAM system. In addition, we evaluate several strategies aimed at increasing the inference speed of the network for real-time operation.
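The abstract describes a patch classifier: a network scores each image patch centered on a detected keypoint with a probability of being suitable for tracking, and low-scoring keypoints are rejected before they enter the VSLAM pipeline. The following is a minimal NumPy sketch of that idea only; the layer sizes, weights, function names, and the single-filter architecture are illustrative placeholders, not the paper's actual network:

```python
import numpy as np

def conv2d(x, w):
    # Valid 2-D convolution of a single-channel patch x with kernel w.
    kh, kw = w.shape
    H, W = x.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def suitability(patch, kernel, fc_w, fc_b):
    """Toy forward pass (conv -> ReLU -> global average pool -> logistic)
    returning the probability that the keypoint at the patch center is
    suitable for tracking. Weights are assumed to come from supervised
    training, which is not shown here."""
    feat = relu(conv2d(patch, kernel)).mean()
    return sigmoid(fc_w * feat + fc_b)

def filter_keypoints(patches, kernel, fc_w, fc_b, threshold=0.5):
    # Plug-and-play rejection step: keep only patches whose predicted
    # suitability clears the threshold.
    return [p for p in patches
            if suitability(p, kernel, fc_w, fc_b) >= threshold]
```

In a real pipeline the rejection step would sit between the keypoint detector and the tracker, so caustic- or fish-affected keypoints never reach the motion estimator.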
Deep learning based keypoint rejection system for underwater visual ego-motion estimation
2020-01-01
cristin:1831152
IFAC-PapersOnLine
Journal article
Electronic resource
English
DDC: 629