
State Representation in reward-based learning - from spiking neuron models to psychophysics

People

Schmidhuber J. (Responsible)

External participants

Gerstner Wulfram (Third-party responsible)

Abstract

Reward-based learning encompasses a broad class of machine-learning algorithms that optimize the behavior of an agent (e.g. a real or simulated robot) so as to maximize the total expected reward. These algorithms describe learning in machines that is reminiscent of learning in animals or humans, as studied in animal behavior (e.g. conditioning) or human psychophysics. Learning in humans and animals is, in turn, thought to be related to changes in the synaptic connections between neurons in the brain. Hence the question arises whether models of synaptic plasticity at the level of spiking neurons can be connected to formal 'reinforcement' learning models in machine learning, and to human psychophysics and animal behavior. This project combines the expertise of two computational neuroscience laboratories (EPFL-LCN/Wulfram Gerstner and Univ. Berne/Walter Senn), both of which have previously worked on spike-based models of synaptic plasticity, with the machine-learning expertise of the Schmidhuber group at IDSIA (Lugano), which has a long-standing track record in formal models of reinforcement learning, the psychophysics laboratory of Michael Herzog (EPFL-LPSY), which has a long tradition in human vision and perceptual learning, and the rodent-behavior expertise of Carmen Sandi (EPFL-BMI).
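As a purely illustrative sketch, not part of the project's models, the generic objective stated above (adjusting an agent's behavior so that the total expected reward increases) can be shown with a minimal two-action bandit agent whose action values are updated by a reward-modulated rule. All names, reward probabilities, and parameters below are hypothetical choices for the example.

```python
# Minimal sketch of reward-based learning: an agent repeatedly picks one of two
# actions, receives a stochastic reward, and nudges the chosen action's value
# toward the observed reward, so that expected reward increases over trials.
import random

random.seed(0)

true_reward_prob = [0.2, 0.8]   # hypothetical reward probability per action
values = [0.0, 0.0]             # learned action values
learning_rate = 0.1
epsilon = 0.1                   # exploration rate

total_reward = 0.0
for trial in range(1000):
    # epsilon-greedy action selection: mostly exploit, sometimes explore
    if random.random() < epsilon:
        action = random.randrange(2)
    else:
        action = max(range(2), key=lambda a: values[a])

    # stochastic reward from the (hypothetical) environment
    reward = 1.0 if random.random() < true_reward_prob[action] else 0.0
    total_reward += reward

    # reward-modulated update: move the chosen action's value toward the reward
    values[action] += learning_rate * (reward - values[action])

print("learned values:", values, "total reward:", total_reward)
```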

Additional information

Start date
01.01.2009
End date
31.08.2012
Duration
44 Months
Funding sources
SNSF
External partners
Main beneficiary: Prof. Gerstner, EPFL
Status
Ended
Category
Swiss National Science Foundation / Sinergia