Learning to represent signals spike by spike

Wieland Brendel (1, 2, 3), Ralph Bourdoukan (2), Pietro Vertechi (1, 2), Christian K. Machens (1, *), Sophie Denève (2, *)
* Corresponding author
2. Group for Neural Theory [Paris], LNC2 - Laboratoire de Neurosciences Cognitives & Computationnelles, IEC - Labex Institut d'étude de la cognition
Abstract: Networks based on coordinated spike coding can encode information with high efficiency in the spike trains of individual neurons. These networks exhibit single-neuron variability and tuning curves as typically observed in cortex, but paradoxically coincide with a precise, non-redundant spike-based population code. However, it has remained unclear whether the specific synaptic connectivities required in these networks can be learnt with local learning rules. Here, we show how to learn the required architecture. Using coding efficiency as an objective, we derive spike-timing-dependent learning rules for a recurrent neural network, and we provide exact solutions for the networks' convergence to an optimal state. As a result, we deduce an entire network from its input distribution and a firing cost. After learning, basic biophysical quantities such as voltages, firing thresholds, excitation, inhibition, or spikes acquire precise functional interpretations.
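The spike-by-spike representation described in the abstract can be illustrated with a minimal sketch. The following is a simplified, discrete-time, greedy version of a spike coding network; all variable names, dimensions, and parameter values are our own illustrative choices, not taken from the paper, and learning is omitted (the decoder is fixed at a hand-picked value rather than acquired through the paper's local rules). It shows the functional interpretations the abstract mentions: each neuron's voltage is the projection of the coding error onto its decoding weight, its threshold is half the squared norm of that weight, and recurrent connectivity of the form -D^T D appears implicitly in the voltage update. A neuron spikes exactly when doing so reduces the squared reconstruction error.

```python
import numpy as np

M, N = 2, 10                       # signal dimension, number of neurons
angles = 2 * np.pi * np.arange(N) / N
D = 0.1 * np.vstack([np.cos(angles), np.sin(angles)])  # decoder, M x N
T = 0.5 * np.sum(D**2, axis=0)     # thresholds: T_i = ||d_i||^2 / 2

x = np.array([0.5, -0.3])          # constant target signal
r = np.zeros(N)                    # spike counts (instantaneous readout)

for _ in range(5000):
    e = x - D @ r                  # current reconstruction error
    V = D.T @ e                    # voltages: D^T x - (D^T D) r, i.e.
                                   # feedforward drive plus recurrent
                                   # inhibition Omega = -D^T D
    i = np.argmax(V - T)
    if V[i] <= T[i]:               # no neuron above threshold: converged
        break
    r[i] += 1                      # neuron i spikes, updating the readout

print(np.linalg.norm(x - D @ r))   # residual error, below max_i ||d_i||
```

At convergence, every voltage sits below threshold, so the residual error's projection onto each decoding direction is at most ||d_i||^2 / 2; with decoding vectors evenly covering the signal space, the reconstruction error is therefore bounded on the order of half a decoding weight, which is the sense in which each spike is individually precise and non-redundant.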
Document type :
Journal articles
Cited literature: 63 references
Contributor: Myriam Bodescot
Submitted on: Friday, May 15, 2020 - 12:34:38 PM
Last modification on: Thursday, March 17, 2022 - 10:08:41 AM


Publication funded by an institution




Wieland Brendel, Ralph Bourdoukan, Pietro Vertechi, Christian K Machens, Sophie Denève. Learning to represent signals spike by spike. PLoS Computational Biology, Public Library of Science, 2020, 16 (3), pp.e1007692. ⟨10.1371/journal.pcbi.1007692⟩. ⟨inserm-02588202⟩


