Scandola, M. orcid.org/0000-0003-0853-8975, Cross, E.S. orcid.org/0000-0002-1671-5698, Caruana, N. orcid.org/0000-0002-9676-814X et al. (1 more author) (2023) Body Form Modulates the Prediction of Human and Artificial Behaviour from Gaze Observation. International Journal of Social Robotics, 15 (8). pp. 1365-1385. ISSN 1875-4791
Abstract
The future of human–robot collaboration relies on people's ability to understand and predict robots' actions. The machine-like appearance of robots, as well as contextual information, may influence people's ability to anticipate their behaviour. We conducted six separate experiments to investigate how spatial cues and task instructions modulate people's ability to understand what a robot is doing. Participants observed goal-directed and non-goal-directed gaze shifts made by human and robot agents, as well as directional cues displayed by a triangle. We report that biasing an observer's attention by showing just one object an agent can interact with can improve people's ability to understand what humanoid robots will do. Crucially, this cue had no impact on people's ability to predict the upcoming behaviour of the triangle. Moreover, task instructions that focus on the visual and motor consequences of the observed gaze were found to influence mentalising abilities. We suggest that the human-like shape of an agent and its physical capabilities facilitate the prediction of an upcoming action. The reported findings expand current models of gaze perception and may have important implications for human–human and human–robot collaboration.
Metadata
| Item Type: | Article |
|---|---|
| Authors/Creators: | Scandola, M.; Cross, E.S.; Caruana, N.; et al. (1 more author) |
| Copyright, Publisher and Additional Information: | © The Author(s) 2023. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. |
| Keywords: | Gaze perception; Body perception; Action prediction; Human–robot interaction; Mentalising |
| Institution: | The University of Leeds |
| Academic Units: | The University of Leeds > Faculty of Medicine and Health (Leeds) > School of Psychology (Leeds) |
| Depositing User: | Symplectic Publications |
| Date Deposited: | 07 Nov 2023 15:10 |
| Last Modified: | 07 Nov 2023 15:10 |
| Status: | Published |
| Publisher: | Springer |
| Identification Number (DOI): | 10.1007/s12369-022-00962-2 |
| Open Archives Initiative ID (OAI ID): | oai:eprints.whiterose.ac.uk:205003 |