Prescott, T.J. and Mayhew, J.E.W. (1991) Obstacle Avoidance through Reinforcement Learning. In: Moody, J.E., Hanson, S.J. and Lippmann, R. (eds.) NIPS 1991, December 2-5, 1991, Denver, CO. Morgan Kaufmann, pp. 523-530. ISBN 1-55860-222-4
Abstract
A method is described for generating plan-like, reflexive obstacle avoidance behaviour in a mobile robot. The experiments reported here use a simulated vehicle with a primitive range sensor. Avoidance behaviour is encoded as a set of continuous functions of the perceptual input space. These functions are stored using CMACs and trained by a variant of Barto and Sutton's adaptive critic algorithm. As the vehicle explores its surroundings it adapts its responses to sensory stimuli so as to minimise the negative reinforcement arising from collisions. Strategies for local navigation are therefore acquired in an explicitly goal-driven fashion. The resulting trajectories form elegant collision-free paths through the environment.
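As a rough illustration of the two components the abstract names, the sketch below (not the authors' implementation) combines a CMAC, i.e. tile-coding, function approximator over a continuous sensory input space with an adaptive-critic style update driven by a temporal-difference error, where collisions would supply negative reinforcement. All class names, dimensions, learning rates, and the `step` helper are illustrative assumptions.

```python
# Minimal sketch, assuming a 2-D normalised sensory input and a scalar output.
import numpy as np

class CMAC:
    """Tile coding: several offset grids over the input space; the output is
    the sum of the weights of the tiles the input activates."""
    def __init__(self, n_tilings=8, bins=10, low=0.0, high=1.0, n_inputs=2, seed=0):
        self.n_tilings, self.bins, self.n_inputs = n_tilings, bins, n_inputs
        self.low, self.high = low, high
        rng = np.random.default_rng(seed)
        # each tiling is shifted by a fixed random fraction of one bin width
        self.offsets = rng.uniform(0.0, 1.0 / bins, size=(n_tilings, n_inputs))
        self.weights = np.zeros((n_tilings,) + (bins,) * n_inputs)

    def _tiles(self, x):
        x = (np.asarray(x, dtype=float) - self.low) / (self.high - self.low)
        for t in range(self.n_tilings):
            idx = np.floor((x + self.offsets[t]) * self.bins).astype(int)
            idx = np.clip(idx, 0, self.bins - 1)
            yield (t,) + tuple(idx)

    def value(self, x):
        return sum(self.weights[i] for i in self._tiles(x))

    def update(self, x, delta, lr=0.1):
        # distribute the correction evenly across the active tiles
        for i in self._tiles(x):
            self.weights[i] += (lr / self.n_tilings) * delta


# Adaptive-critic flavour of the update (illustrative, not the paper's exact
# algorithm): the critic learns a value estimate of the sensory state, and the
# actor's output (e.g. a steering command) is nudged in the direction of the
# exploration it just took, scaled by the TD error.
critic, actor = CMAC(), CMAC()
gamma = 0.95  # assumed discount factor

def step(state, next_state, reward, exploration_noise, actor_lr=0.05, critic_lr=0.1):
    td_error = reward + gamma * critic.value(next_state) - critic.value(state)
    critic.update(state, td_error, lr=critic_lr)
    # reinforce the explored action direction in proportion to the TD error;
    # a collision would arrive here as a negative reward
    actor.update(state, td_error * exploration_noise, lr=actor_lr)
    return td_error
```

Tile coding keeps each update local: only the handful of tiles covering the current sensory reading change, which is why a CMAC can store smooth control functions over a continuous input space with cheap table lookups.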
Metadata
Item Type: | Proceedings Paper |
---|---|
Authors/Creators: | Prescott, T.J. and Mayhew, J.E.W. |
Editors: | Moody, J.E., Hanson, S.J. and Lippmann, R. |
Dates: | December 2-5, 1991 |
Institution: | The University of Sheffield |
Academic Units: | The University of Sheffield > Faculty of Science (Sheffield) > Department of Psychology (Sheffield) |
Depositing User: | Symplectic Sheffield |
Date Deposited: | 15 Nov 2016 13:27 |
Last Modified: | 15 Nov 2016 13:28 |
Published Version: | http://papers.nips.cc/paper/452-obstacle-avoidance... |
Status: | Published |
Publisher: | Morgan Kaufmann |
Refereed: | Yes |
Open Archives Initiative ID (OAI ID): | oai:eprints.whiterose.ac.uk:107025 |