Wei, R., Garcia, A., McDonald, A. et al. (3 more authors) (2025) Learning an Active Inference Model of Driver Perception and Control: Application to Vehicle Car-Following. IEEE Transactions on Intelligent Transportation Systems, 26 (7). pp. 9475-9490. ISSN 1524-9050
Abstract
In this paper, we introduce a general estimation methodology for learning a model of human perception and control in a sensorimotor control task from a finite set of demonstrations. The model's structure consists of (i) the agent's internal representation of how the environment and associated observations evolve as a result of control actions and (ii) the agent's preferences over observable outcomes. We consider a model structure consistent with active inference, a theory of human perception and behavior from cognitive science. According to active inference, the agent acts upon the world so as to minimize surprise, defined as a measure of the extent to which the agent's current sensory observations differ from its preferred sensory observations. We propose a bi-level optimization approach to estimation that relies on a structural assumption on the prior distributions parameterizing the statistical accuracy of the human agent's model of the environment. To illustrate the proposed methodology, we estimate a model of car-following behavior from a naturalistic dataset. Overall, the results indicate that learning active inference models of human perception and control from data is a promising alternative to closed-box models of driving.
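To make the surprise-minimization idea concrete, below is a minimal illustrative sketch, not the authors' estimator and not the paper's POMDP or bi-level formulation: a follower agent with an assumed one-step kinematic internal model picks the acceleration whose predicted observation (headway gap and relative speed) deviates least, in precision-weighted squared-error terms, from an assumed preferred observation. All constants, function names, and the grid of candidate accelerations are assumptions introduced only for this example.

```python
# Illustrative active-inference-style car-following sketch (assumptions only,
# not the method estimated in the paper). The agent selects the acceleration
# that minimizes predicted "surprise": the precision-weighted squared
# deviation of the predicted observation from a preferred observation.

import numpy as np

DT = 0.1                            # simulation step [s] (assumed)
PREFERRED_GAP = 20.0                # preferred headway [m] (assumed)
PREFERRED_REL_SPEED = 0.0           # preferred relative speed [m/s] (assumed)
OBS_PRECISION = np.array([1.0, 0.5])  # assumed observation std. devs. [gap, rel. speed]

def predict_observation(gap, rel_speed, accel, lead_accel=0.0):
    """Agent's assumed internal (kinematic) model of how observations evolve.

    rel_speed is leader speed minus follower speed, so the gap grows when
    rel_speed is positive.
    """
    next_rel_speed = rel_speed + (lead_accel - accel) * DT
    next_gap = gap + next_rel_speed * DT
    return np.array([next_gap, next_rel_speed])

def surprise(obs):
    """Precision-weighted squared deviation from the preferred observation."""
    preferred = np.array([PREFERRED_GAP, PREFERRED_REL_SPEED])
    return np.sum(((obs - preferred) / OBS_PRECISION) ** 2)

def select_action(gap, rel_speed, candidate_accels=np.linspace(-3.0, 2.0, 51)):
    """Greedy one-step rule: pick the acceleration with least predicted surprise."""
    costs = [surprise(predict_observation(gap, rel_speed, a))
             for a in candidate_accels]
    return float(candidate_accels[int(np.argmin(costs))])

if __name__ == "__main__":
    # Follower is 15 m behind the leader and closing at 2 m/s;
    # the greedy rule responds by braking.
    print(select_action(gap=15.0, rel_speed=-2.0))
```

In the paper's full approach, the agent's internal dynamics, preference distribution, and their parameters are learned from demonstrations rather than fixed by hand; this greedy one-step rule only illustrates the direction of the control objective described in the abstract.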
Metadata
| Item Type: | Article |
|---|---|
| Authors/Creators: | Wei, R., Garcia, A., McDonald, A. et al. (3 more authors) |
| Copyright, Publisher and Additional Information: | This is an author produced version of an article published in IEEE Transactions on Intelligent Transportation Systems, made available under the terms of the Creative Commons Attribution License (CC-BY), which permits unrestricted use, distribution and reproduction in any medium, provided the original work is properly cited. |
| Keywords: | Human perception and action, partially observable Markov decision process, active inference, inverse reinforcement learning. |
| Dates: | Published: 2025 |
| Institution: | The University of Leeds |
| Academic Units: | The University of Leeds > Faculty of Environment (Leeds) > Institute for Transport Studies (Leeds) |
| Depositing User: | Symplectic Publications |
| Date Deposited: | 04 Jul 2025 09:42 |
| Last Modified: | 04 Jul 2025 09:42 |
| Status: | Published |
| Publisher: | Institute of Electrical and Electronics Engineers (IEEE) |
| Identification Number: | 10.1109/tits.2025.3574552 |
| Related URLs: | |
| Open Archives Initiative ID (OAI ID): | oai:eprints.whiterose.ac.uk:228571 |
