Confidence resets reveal hierarchical adaptive learning in humans

Abstract: Hierarchical processing is pervasive in the brain, but its computational significance for learning under uncertainty is disputed. On the one hand, hierarchical models provide an optimal framework and are increasingly popular for studying cognition. On the other hand, non-hierarchical (flat) models remain influential and can learn efficiently, even in uncertain and changing environments. Here, we show that previously proposed hallmarks of hierarchical learning, which relied on reports of learned quantities or choices in simple experiments, are insufficient to categorically distinguish hierarchical from flat models. Instead, we present a novel test that leverages a more complex task, whose hierarchical structure allows generalization between different statistics tracked in parallel. We use reports of confidence to arbitrate quantitatively and qualitatively between the two accounts of learning. Our results support the hierarchical learning framework and demonstrate how confidence can be a useful metric in learning theory.
Document type: Journal article
Cited literature: 64 references

https://www.hal.inserm.fr/inserm-02145648
Contributor: Myriam Bodescot
Submitted on: Monday, June 3, 2019 - 11:03:57 AM
Last modification on: Wednesday, June 5, 2019 - 1:17:42 AM

File: journal.pcbi.1006972.pdf (publication funded by an institution)

Citation

Micha Heilbron, Florent Meyniel. Confidence resets reveal hierarchical adaptive learning in humans. PLoS Computational Biology, Public Library of Science, 2019, 15 (4), pp.e1006972. ⟨10.1371/journal.pcbi.1006972⟩. ⟨inserm-02145648⟩
