Top-down bottom-up visual saliency for mobile robots using deep neural networks and task-independent feature maps

Jaramillo-Avila, U., Hartwell, A., Gurney, K. orcid.org/0000-0003-4771-728X et al. (1 more author) (2018) Top-down bottom-up visual saliency for mobile robots using deep neural networks and task-independent feature maps. In: Giuliani, M., Assaf, T. and Giannaccini, M., (eds.) Towards Autonomous Robotic Systems. TAROS 2018, 25-27 Jul 2018, Bristol, UK. Lecture Notes in Computer Science, 10965. Springer Verlag, pp. 489-490. ISBN 9783319967271

Metadata

Authors/Creators: Jaramillo-Avila, U.; Hartwell, A.; Gurney, K.; et al. (1 more author)
Copyright, Publisher and Additional Information: © Springer Nature Switzerland AG 2018. This is an author-produced version of a paper subsequently published in Towards Autonomous Robotic Systems (LNCS 10965). Uploaded in accordance with the publisher's self-archiving policy.
Dates:
  • Published (online): 21 July 2018
  • Published: 21 July 2018
Institution: The University of Sheffield
Academic Units:
  • The University of Sheffield > Faculty of Engineering (Sheffield) > Department of Automatic Control and Systems Engineering (Sheffield)
  • The University of Sheffield > Faculty of Science (Sheffield) > Department of Psychology (Sheffield)
Depositing User: Symplectic Sheffield
Date Deposited: 25 Oct 2018 13:07
Last Modified: 25 Oct 2018 13:07
Published Version: https://doi.org/10.1007/978-3-319-96728-8
Status: Published
Publisher: Springer Verlag
Series Name: Lecture Notes in Computer Science
Refereed: Yes
Identification Number: https://doi.org/10.1007/978-3-319-96728-8
