Martinez-Hernandez, U., Rubio-Solis, A. and Prescott, T.J. orcid.org/0000-0003-4927-5390 (2020) Learning from sensory predictions for autonomous and adaptive exploration of object shape with a tactile robot. Neurocomputing, 382. pp. 127-139. ISSN 0925-2312
Abstract
Humans use information from sensory predictions, together with current observations, for the optimal exploration and recognition of their surrounding environment. In this work, two novel adaptive perception strategies are proposed for accurate and fast exploration of object shape with a robotic tactile sensor. These strategies, called (1) adaptive weighted prior and (2) adaptive weighted posterior, combine tactile sensory predictions and current sensor observations to autonomously adapt the accuracy and speed of active Bayesian perception in object exploration tasks. Sensory predictions, obtained from a forward model, use a novel Predicted Information Gain method. These predictions are used by the tactile robot to analyse ‘what would have happened’ if certain decisions ‘had been made’ at previous decision times. The accuracy of the predictions is evaluated and controlled by a confidence parameter, which ensures that the adaptive perception strategies rely more on predictions when they are accurate, and more on current sensory observations otherwise. This work is systematically validated through the recognition of angle and position data extracted from the exploration of object shape, using a biomimetic tactile sensor and a robotic platform. The exploration task implements the contour-following procedure used by humans to extract object shape with the sense of touch. The validation process compares the adaptive weighted strategies against active perception alone. The adaptive approach achieved higher angle accuracy (2.8 deg) than active perception (5 deg). The position accuracy was similar for all perception methods (0.18 mm). The reaction time, or number of tactile contacts needed by the tactile robot to make a decision, was also improved by adaptive perception (1 tap) over active perception (5 taps).
The results show that the adaptive perception strategies can enable future robots to adapt their performance, while improving the trade-off between accuracy and reaction time, for tactile exploration, interaction and recognition tasks.
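The core idea described in the abstract, a Bayesian update whose prior blends the previous posterior with a forward-model prediction under a confidence weight, can be sketched as follows. This is a minimal illustrative sketch of the general scheme, not the authors' implementation: the function names, the mixing rule, and the belief threshold are all assumptions for illustration.

```python
import numpy as np

def adaptive_weighted_prior(posterior, prediction, gamma):
    """Blend the previous posterior with a forward-model prediction.

    gamma is a confidence parameter in [0, 1]: higher values trust the
    prediction more, lower values trust the accumulated evidence more.
    (Hypothetical mixing rule, for illustration only.)
    """
    prior = gamma * prediction + (1.0 - gamma) * posterior
    return prior / prior.sum()

def bayes_update(prior, likelihood):
    """Standard Bayesian update of the belief after a new tactile contact."""
    post = likelihood * prior
    return post / post.sum()

# Toy example: belief over 4 candidate angle classes.
posterior = np.array([0.25, 0.25, 0.25, 0.25])   # uniform initial belief
prediction = np.array([0.10, 0.60, 0.20, 0.10])  # forward-model prediction
likelihood = np.array([0.20, 0.50, 0.20, 0.10])  # observation model for a new tap

prior = adaptive_weighted_prior(posterior, prediction, gamma=0.7)
posterior = bayes_update(prior, likelihood)

# A decision is made once the maximum posterior exceeds a belief threshold;
# otherwise the robot taps again and repeats the update.
decided = posterior.max() > 0.9
```

Under this scheme, an accurate prediction concentrates the prior on the correct class before the tap is observed, so fewer contacts are needed to cross the decision threshold, which matches the accuracy/reaction-time trade-off reported in the abstract.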
Metadata
| Field | Value |
|---|---|
| Item Type | Article |
| Authors/Creators | Martinez-Hernandez, U.; Rubio-Solis, A.; Prescott, T.J. |
| Copyright, Publisher and Additional Information | © 2019 Elsevier B.V. This is an author-produced version of a paper subsequently published in Neurocomputing. Uploaded in accordance with the publisher's self-archiving policy. Article available under the terms of the CC-BY-NC-ND licence (https://creativecommons.org/licenses/by-nc-nd/4.0/). |
| Keywords | Active and adaptive perception; Sensorimotor control; Autonomous tactile exploration; Bayesian inference |
| Institution | The University of Sheffield |
| Academic Units | The University of Sheffield > Faculty of Engineering (Sheffield) > Department of Computer Science (Sheffield) |
| Funding Information | EUROPEAN COMMISSION - FP6/FP7, grant EFAA - 270490; EUROPEAN COMMISSION - HORIZON 2020, grant 785907 |
| Depositing User | Symplectic Sheffield |
| Date Deposited | 08 Jan 2020 13:26 |
| Last Modified | 19 Oct 2021 12:49 |
| Status | Published |
| Publisher | Elsevier BV |
| Refereed | Yes |
| Identification Number | 10.1016/j.neucom.2019.10.114 |
| Open Archives Initiative ID (OAI ID) | oai:eprints.whiterose.ac.uk:155144 |