Show simple item record

dc.contributor.author     Gardner, M.P.H.
dc.contributor.author     Schoenbaum, G.
dc.contributor.author     Gershman, S.J.
dc.date.accessioned       2019-04-05T13:55:14Z
dc.date.available         2019-04-05T13:55:14Z
dc.date.issued            2018
dc.identifier.uri         https://www.scopus.com/inward/record.uri?eid=2-s2.0-85056957403&doi=10.1098%2frspb.2018.1645&partnerID=40&md5=790ab4b6e8a27a7e9332e8e2fe3c7748
dc.identifier.uri         http://hdl.handle.net/10713/8838
dc.description.abstract   Midbrain dopamine neurons are commonly thought to report a reward prediction error (RPE), as hypothesized by reinforcement learning (RL) theory. While this theory has been highly successful, several lines of evidence suggest that dopamine activity also encodes sensory prediction errors unrelated to reward. Here, we develop a new theory of dopamine function that embraces a broader conceptualization of prediction errors. By signalling errors in both sensory and reward predictions, dopamine supports a form of RL that lies between model-based and model-free algorithms. This account remains consistent with current canon regarding the correspondence between dopamine transients and RPEs, while also accounting for new data suggesting a role for these signals in phenomena such as sensory preconditioning and identity unblocking, which ostensibly draw upon knowledge beyond reward predictions. © 2018 The Author(s) Published by the Royal Society. All rights reserved.   en_US
dc.description.uri        https://dx.doi.org/10.1098/rspb.2018.1645   en_US
dc.language.iso           English   en_US
dc.publisher              Royal Society Publishing   en_US
dc.relation.ispartof      Proceedings of the Royal Society B: Biological Sciences
dc.subject                Reinforcement learning   en_US
dc.subject                Successor representation   en_US
dc.subject                Temporal difference learning   en_US
dc.title                  Rethinking dopamine as generalized prediction error   en_US
dc.type                   Article   en_US
dc.identifier.doi         10.1098/rspb.2018.1645
dc.identifier.pmid        30464063
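Note: the abstract describes a form of RL between model-based and model-free algorithms, driven by vector-valued prediction errors over sensory features as well as reward; together with the subject keywords, this points to temporal-difference learning of a successor representation (SR). Below is a minimal, illustrative Python sketch of such an SR-TD update, assuming a small tabular environment with one-hot state features. The state count, learning rates, and function names are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

# Illustrative sketch of successor-representation TD learning.
# The prediction error is a vector over state features ("sensory
# prediction errors"); projecting it onto learned reward weights
# approximates the classic scalar reward prediction error (RPE).

n_states, gamma, alpha = 5, 0.9, 0.1   # assumed toy parameters

M = np.eye(n_states)     # SR matrix: M[s] ~ expected discounted future state occupancy from s
w = np.zeros(n_states)   # reward weights, so value V(s) = M[s] @ w

def sr_td_step(s, r, s_next):
    """Apply one observed transition (s, r, s_next)."""
    phi = np.eye(n_states)[s]                  # one-hot feature vector for state s
    delta = phi + gamma * M[s_next] - M[s]     # vector-valued (generalized) prediction error
    M[s] += alpha * delta                      # SR-TD update
    w[s_next] += alpha * (r - w[s_next])       # delta-rule reward learning
    return delta, float(delta @ w)             # sensory error vector and implied scalar RPE

# Example: one transition from state 0 to state 1 with reward 1.0.
delta_vec, rpe = sr_td_step(s=0, r=1.0, s_next=1)
```

Because the SR caches long-run state predictions but still learns them by TD, this scheme sits between model-free value learning and full model-based planning, which is the intermediate status the abstract claims for dopamine.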

