Ling, F. orcid.org/0009-0000-1712-2184, Huang, Z. orcid.org/0000-0002-5273-3658 and Prescott, T.J. orcid.org/0000-0003-4927-5390 (2025) Improving the robustness of visual teach‐and‐repeat navigation using drift error correction and event‐based vision for low‐light environments. Advanced Robotics Research. e202500105. ISSN: 2943-9973
Abstract
We present a framework for visual teach-and-repeat (VTR) navigation designed to operate robustly in environments with variable or low light levels. First, we show that navigation accuracy for VTR can be improved by integrating a topological map with a decision-making strategy designed to reduce latency and trajectory error. Specifically, a local scene descriptor, acquired through deep learning, is coupled with stereo camera imaging and a proportional-integral controller to compensate for inaccuracies in visual matching. This approach enables accurate teach-and-repeat navigation, correcting odometry drift in both orientation and along-route position while using only monocular images during route following. Next, we adapt this general approach to operate with an off-the-shelf event-based camera and an event-based local descriptor model. Experiments in a night-time urban environment demonstrate that this event-based system provides improved and robust navigation accuracy in low-light conditions compared with a conventional camera paired with a state-of-the-art RGB-based descriptor model. Overall, high trajectory accuracy is demonstrated for VTR navigation in both indoor and outdoor environments using deep-learned descriptors, while the event-based extension broadens the applicability of VTR navigation to a wider range of challenging environments.
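To illustrate the kind of drift correction the abstract describes, the sketch below shows a minimal proportional-integral correction of heading error derived from visual matching between the live image and the taught route. This is not the authors' implementation: the class and function names, gain values, and the pixel-offset-to-angle conversion are all assumptions introduced here for illustration.

```python
import math
from dataclasses import dataclass


@dataclass
class PIController:
    """Minimal PI controller for heading drift correction (illustrative only)."""
    kp: float = 0.8        # proportional gain (assumed value)
    ki: float = 0.05       # integral gain (assumed value)
    limit: float = 1.0     # clamp on angular velocity command (rad/s)
    integral: float = 0.0  # accumulated error state

    def step(self, heading_error: float, dt: float) -> float:
        """Return an angular-velocity command steering back toward the taught route."""
        self.integral += heading_error * dt
        omega = self.kp * heading_error + self.ki * self.integral
        return max(-self.limit, min(self.limit, omega))


def heading_error_from_match(pixel_offset: float, focal_px: float) -> float:
    """Approximate heading error (radians) from the horizontal pixel offset
    between matched descriptors in the live and taught images."""
    return math.atan2(pixel_offset, focal_px)


# Hypothetical repeat-phase usage: convert the visual-matching offset to a
# heading error, then apply the PI correction at each control step.
controller = PIController()
omega = controller.step(heading_error_from_match(pixel_offset=12.0, focal_px=600.0), dt=0.05)
```

The clamp on the output reflects the general practice of bounding steering corrections so that a single noisy visual match cannot produce an abrupt turn; the specific limit used here is an assumption, not a value reported in the paper.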
Metadata
| Item Type: | Article |
|---|---|
| Authors/Creators: | Ling, F.; Huang, Z.; Prescott, T.J. |
| Copyright, Publisher and Additional Information: | © 2025 The Author(s). This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited. http://creativecommons.org/licenses/by/4.0/ |
| Keywords: | deep learning for image processing; drift correction; event-based camera; visual teach and repeat navigation |
| Dates: | Published online: 2025 |
| Institution: | The University of Sheffield |
| Academic Units: | The University of Sheffield > Faculty of Engineering (Sheffield) > Department of Computer Science (Sheffield) |
| Date Deposited: | 02 Dec 2025 16:15 |
| Last Modified: | 02 Dec 2025 16:15 |
| Status: | Published online |
| Publisher: | Wiley |
| Refereed: | Yes |
| Identification Number: | 10.1002/adrr.202500105 |
| Open Archives Initiative ID (OAI ID): | oai:eprints.whiterose.ac.uk:235072 |
