A Computational Model of Integration between Reinforcement Learning and Task Monitoring in the Prefrontal Cortex - Inserm - Institut national de la santé et de la recherche médicale
Conference paper, Year: 2010

A Computational Model of Integration between Reinforcement Learning and Task Monitoring in the Prefrontal Cortex

Abstract

Taking inspiration from neural principles of decision-making is of particular interest to help improve the adaptivity of artificial systems. Research at the crossroads of neuroscience and artificial intelligence in the last decade has helped us understand how the brain organizes reinforcement learning (RL) processes (the adaptation of decisions based on feedback from the environment). The current challenge is to understand how the brain flexibly regulates RL parameters, such as the exploration rate, based on the task structure, which is called meta-learning ([1]: Doya, 2002). Here, we propose a computational mechanism of exploration regulation based on real neurophysiological and behavioral data recorded in monkey prefrontal cortex during a visuo-motor task involving a clear distinction between exploratory and exploitative actions. We first fit trial-by-trial choices made by the monkeys with an analytical reinforcement learning model. We find that the model with the highest likelihood of predicting the monkeys' choices reveals different exploration rates at different task phases. In addition, the optimized model has a very high learning rate, and a reset of the action values associated with a cue used in the task to signal condition changes. Beyond classical RL mechanisms, these results suggest that the monkey brain extracts task regularities to tune learning parameters in a task-appropriate way. We finally use these principles to develop a neural network model extending a previous cortico-striatal loop model. In our prefrontal cortex component, prediction error signals are extracted to produce feedback categorization signals. The latter are used to boost exploration after errors, and to attenuate it during exploitation, ensuring a lock on the currently rewarded choice. This model performs the task like the monkeys do, and provides a set of experimental predictions to be tested by future neurophysiological recordings.
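The mechanisms the abstract describes — softmax action selection whose exploration rate is modulated by feedback categorization (boosted after errors, attenuated after rewards), a high learning rate, and a reset of action values when a cue signals a condition change — can be sketched in code. The following is a minimal illustrative sketch in Python; the class name, structure, and all parameter values are assumptions for illustration, not the fitted values or the actual implementation from the paper.

```python
import math
import random


def softmax_choice(q_values, beta):
    """Sample an action from a softmax over Q-values; beta is the
    inverse temperature (low beta = more exploration)."""
    exps = [math.exp(beta * q) for q in q_values]
    total = sum(exps)
    r = random.random()
    cum = 0.0
    for action, e in enumerate(exps):
        cum += e / total
        if r < cum:
            return action
    return len(exps) - 1


class MetaRLAgent:
    """Illustrative agent: classical Q-learning plus meta-learning of the
    exploration rate via feedback categorization.  Parameter values here
    are hypothetical, not those fitted to the monkey data."""

    def __init__(self, n_actions, alpha=0.9, beta_explore=1.0, beta_exploit=10.0):
        self.q = [0.0] * n_actions
        self.alpha = alpha              # high learning rate, per the model fit
        self.beta_explore = beta_explore
        self.beta_exploit = beta_exploit
        self.beta = beta_explore        # start in an exploratory regime

    def choose(self):
        return softmax_choice(self.q, self.beta)

    def update(self, action, reward):
        # Classical RL update driven by the reward prediction error.
        rpe = reward - self.q[action]
        self.q[action] += self.alpha * rpe
        # Feedback categorization regulates exploration: errors boost it,
        # rewards attenuate it, locking onto the currently rewarded choice.
        self.beta = self.beta_exploit if reward > 0 else self.beta_explore

    def reset_values(self):
        """Reset action values when the task cue signals a condition change."""
        self.q = [0.0] * len(self.q)
```

For example, after a rewarded trial the agent switches to the high-beta (exploitative) regime, and after an unrewarded trial it returns to the low-beta (exploratory) regime; calling `reset_values()` on the condition-change cue discards the previous condition's action values.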
Main file: khamassi2010proc.pdf (674.31 KB). Origin: files produced by the author(s).

Dates and versions

inserm-00548868 , version 1 (20-12-2010)

Identifiers

  • HAL Id : inserm-00548868 , version 1

Cite

Mehdi Khamassi, René Quilodran, Pierre Enel, Emmanuel Procyk, Peter Ford Dominey. A Computational Model of Integration between Reinforcement Learning and Task Monitoring in the Prefrontal Cortex: Reinforcement Learning, Task Monitoring and the Prefrontal Cortex. 11th international conference on Simulation of Adaptive Behaviour 2010, Aug 2010, Paris, France. pp.424-434. ⟨inserm-00548868⟩
305 views
394 downloads
