Supervised Recurrent Neural Networks (RNNs)
People
(Responsible)
(Collaborator)
External participants
Wierstra Daan (Third-party beneficiary)
Abstract
Supervised artificial recurrent neural networks (RNNs) have adaptive feedback connections that allow them to learn mappings from input sequences to output sequences. In principle, they can implement almost arbitrary sequential, algorithmic behavior. They are biologically more plausible and computationally more powerful than other adaptive models such as Hidden Markov Models and Conditional Markov Random Fields (which lack continuous internal states), feedforward neural networks, and traditional Support Vector Machines (SVMs, which have no internal states at all). The goal of this project is to further improve and analyze our state-of-the-art algorithms for supervised RNNs, especially bi-directional RNNs, and to apply them to challenging sequence learning tasks such as speech processing, sunspot prediction, and protein prediction. One focus is on novel types of recurrent SVMs; another is on RNNs with information-theoretic objectives and on combinations with statistical approaches.
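To illustrate the kind of sequence-to-sequence mapping described above, the following is a minimal NumPy sketch of a bi-directional RNN forward pass. It is not the project's actual architecture or training algorithm; the tanh units, layer sizes, random weights, and helper function rnn_pass are assumptions chosen only to show how forward and backward hidden states combine into one output per input step.

import numpy as np

def rnn_pass(xs, W_in, W_rec, reverse=False):
    """Run a simple tanh RNN over the sequence xs, optionally in reverse order."""
    hidden = np.zeros(W_rec.shape[0])
    states = []
    seq = reversed(xs) if reverse else xs
    for x in seq:
        # Adaptive feedback: the new hidden state depends on the previous one.
        hidden = np.tanh(W_in @ x + W_rec @ hidden)
        states.append(hidden)
    # Re-align backward states with the original time order.
    return states[::-1] if reverse else states

rng = np.random.default_rng(0)
n_in, n_hid, n_out, T = 4, 8, 3, 10

# Random parameters for illustration; in practice these would be learned.
W_in_f, W_rec_f = rng.normal(size=(n_hid, n_in)), 0.1 * rng.normal(size=(n_hid, n_hid))
W_in_b, W_rec_b = rng.normal(size=(n_hid, n_in)), 0.1 * rng.normal(size=(n_hid, n_hid))
W_out = rng.normal(size=(n_out, 2 * n_hid))

xs = [rng.normal(size=n_in) for _ in range(T)]           # input sequence
forward  = rnn_pass(xs, W_in_f, W_rec_f)                  # context from the past
backward = rnn_pass(xs, W_in_b, W_rec_b, reverse=True)    # context from the future

# Each output combines past and future context, the defining property of a bi-directional RNN.
ys = [W_out @ np.concatenate([f, b]) for f, b in zip(forward, backward)]
print(len(ys), ys[0].shape)                               # one output vector per input step

In a supervised setting, the weights above would be trained by gradient descent on a loss between ys and target sequences; the sketch only shows the forward computation.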