
Supervised recurrent networks (Recurrent Neural Networks)



Schmidhuber J.


González F. S.


External participants

Wierstra Daan (third-party beneficiary)


Supervised artificial recurrent neural networks (RNNs) have adaptive feedback connections that allow them to learn mappings from input sequences to output sequences. In principle, they can implement almost arbitrary sequential, algorithmic behavior. They are biologically more plausible and computationally more powerful than other adaptive models such as Hidden Markov Models and Conditional Markov Random Fields (which lack continuous internal states), feedforward neural networks, and traditional Support Vector Machines (SVMs; no internal states at all). The goal of this project is to further improve and analyze our state-of-the-art algorithms for supervised RNNs, especially bi-directional RNNs, and to apply them to challenging sequence learning tasks such as speech processing, sunspot prediction, and protein prediction. One focus is on novel types of recurrent SVMs; another is on RNNs with information-theoretic objectives and on combinations with statistical approaches.
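As a concrete illustration (not the project's actual models), a minimal vanilla RNN forward pass in NumPy shows the key property described above: a recurrent hidden state acts as the feedback connection, so each output can depend on the entire input history, unlike a stateless feedforward network or SVM. All names and dimensions here are hypothetical.

```python
import numpy as np

def rnn_forward(xs, Wxh, Whh, Why, bh, by):
    """Vanilla RNN: map an input sequence to an output sequence.
    The hidden state h carries information across time steps."""
    h = np.zeros(Whh.shape[0])
    ys = []
    for x in xs:
        # hidden state depends on the current input AND the previous state
        h = np.tanh(Wxh @ x + Whh @ h + bh)
        ys.append(Why @ h + by)
    return np.array(ys)

# toy dimensions (hypothetical): 3 inputs, 5 hidden units, 2 outputs
rng = np.random.default_rng(0)
Wxh = 0.1 * rng.normal(size=(5, 3))
Whh = 0.1 * rng.normal(size=(5, 5))
Why = 0.1 * rng.normal(size=(2, 5))
bh, by = np.zeros(5), np.zeros(2)

seq = rng.normal(size=(4, 3))               # sequence of 4 input vectors
out = rnn_forward(seq, Wxh, Whh, Why, bh, by)
print(out.shape)                            # one output vector per time step
```

Because the hidden state summarizes past inputs, feeding the same sequence in reverse order generally yields a different final output, which is exactly the internal-state capacity that HMM-style and feedforward models in the comparison above lack.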

Additional information

Duration: 36 months
Funding sources
Swiss National Science Foundation / Project Funding / Mathematics, Natural and Engineering Sciences (Division II)