Schulz, E., Konstantinidis, E. (orcid.org/0000-0002-4782-0749) and Speekenbrink, M. (2018) Putting bandits into context: How function learning supports decision making. Journal of Experimental Psychology: Learning, Memory, and Cognition, 44 (6). pp. 927-943. ISSN 0278-7393
Abstract
The authors introduce the contextual multi-armed bandit task as a framework to investigate learning and decision making in uncertain environments. In this novel paradigm, participants repeatedly choose between multiple options in order to maximize their rewards. The options are described by a number of contextual features which are predictive of the rewards through initially unknown functions. From their experience with choosing options and observing the consequences of their decisions, participants can learn about the functional relation between contexts and rewards, and improve their decision strategy over time. In three experiments, the authors explore participants’ behavior in such learning environments. They predict participants’ behavior by context-blind (mean-tracking, Kalman filter) and contextual (Gaussian process and linear regression) learning approaches combined with different choice strategies. Participants are mostly able to learn about the context-reward functions, and their behavior is best described by a Gaussian process learning strategy which generalizes previous experience to similar instances. In a relatively simple task with binary features, they seem to combine this learning with a probability of improvement decision strategy which focuses on alternatives that are expected to lead to an improvement upon a current favorite option. In a task with continuous features that are linearly related to the rewards, participants seem to more explicitly balance exploration and exploitation. Finally, in a difficult learning environment where the relation between features and rewards is nonlinear, some participants are again well described by a Gaussian process learning strategy, whereas others revert to context-blind strategies.
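The combination of Gaussian process learning with a probability of improvement choice rule that the abstract describes can be illustrated with a minimal sketch. This is not the authors' implementation: the kernel, length scale, noise level, and the demo contexts below are all illustrative assumptions, and the model here is a bare-bones GP regression with an RBF kernel written in plain NumPy.

```python
import numpy as np
from math import erf, sqrt

def rbf_kernel(A, B, length_scale=1.0):
    """Squared-exponential kernel between the rows of A (n, d) and B (m, d)."""
    sq = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq / (2.0 * length_scale ** 2))

def gp_posterior(X, y, X_new, noise_var=0.1):
    """Posterior mean and standard deviation of a GP (RBF kernel) at X_new."""
    K = rbf_kernel(X, X) + noise_var * np.eye(len(X))
    K_s = rbf_kernel(X, X_new)
    K_ss = rbf_kernel(X_new, X_new)
    K_inv = np.linalg.inv(K)
    mu = K_s.T @ K_inv @ y
    var = np.diag(K_ss - K_s.T @ K_inv @ K_s)
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def probability_of_improvement(mu, sd, best_so_far):
    """P(reward > current best) for each option, under a Gaussian posterior."""
    z = (mu - best_so_far) / sd
    return np.array([0.5 * (1.0 + erf(zi / sqrt(2.0))) for zi in z])

# Hypothetical demo: contexts already experienced, with a nonlinear
# context-reward function (the learner does not know it is a sine).
rng = np.random.default_rng(0)
X_seen = rng.uniform(0.0, 2.0, size=(8, 1))
y_seen = np.sin(X_seen[:, 0]) + 0.05 * rng.standard_normal(8)

# Contexts of the options available on the current trial.
X_options = np.array([[0.5], [1.5]])
mu, sd = gp_posterior(X_seen, y_seen, X_options)
pi = probability_of_improvement(mu, sd, best_so_far=y_seen.max())
choice = int(np.argmax(pi))  # pick the option most likely to beat the best reward so far
```

Because the GP generalizes from observed contexts to similar new ones, the probability of improvement concentrates choice on options whose contexts resemble previously rewarding ones, while its dependence on the posterior standard deviation still leaves room for exploring uncertain options.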
Metadata
| Item Type | Article |
| --- | --- |
| Authors/Creators | Schulz, E.; Konstantinidis, E.; Speekenbrink, M. |
| Copyright, Publisher and Additional Information | © 2017, American Psychological Association. This paper is not the copy of record and may not exactly replicate the authoritative document published in the APA journal. Please do not copy or cite without the authors' permission. The final article is available, upon publication, at: https://doi.org/10.1037/xlm0000463. Uploaded in accordance with the publisher's self-archiving policy. |
| Keywords | Function Learning; Decision Making; Gaussian Process; Multi-Armed Bandits; Reinforcement Learning |
| Dates | |
| Institution | The University of Leeds |
| Academic Units | The University of Leeds > Faculty of Business (Leeds) > Management Division (LUBS) (Leeds) > Management Division Decision Research (LUBS) |
| Depositing User | Symplectic Publications |
| Date Deposited | 15 Jan 2018 13:49 |
| Last Modified | 16 Jul 2018 14:35 |
| Status | Published |
| Publisher | American Psychological Association |
| Identification Number (DOI) | 10.1037/xlm0000463 |
| Open Archives Initiative ID (OAI ID) | oai:eprints.whiterose.ac.uk:126187 |