Supervised recurrent networks 2
Abstract
The human brain is a recurrent neural network (RNN): a network of neurons with feedback connections. It can learn many behaviors, sequence processing tasks, algorithms, and programs that are not learnable by traditional machine learning methods. This explains the rapidly growing interest in artificial RNNs for technical applications: general computers that can learn algorithms to map input sequences to output sequences, with or without a teacher. They are computationally more powerful and biologically more plausible than other adaptive approaches such as Hidden Markov Models (no continuous internal states), feedforward networks, and Support Vector Machines (no internal states at all). Our recent applications include adaptive robotics and control, handwriting recognition, speech recognition, keyword spotting, music composition, attentive vision, protein analysis, stock market prediction, and many other sequence problems. In this project we intend to further improve supervised RNNs, advance their theoretical underpinnings (e.g., memory capacity), combine gradient-based and non-gradient-based learning methods, and apply the resulting methods to challenging real-world tasks such as video processing and protein secondary structure prediction.
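To make the notion of supervised sequence-to-sequence learning with internal states concrete, the following minimal sketch trains a vanilla RNN by backpropagation through time (BPTT) on a toy memory task: after reading a bit sequence, output its first bit, which a model without internal state cannot do. This is an illustrative assumption-laden example, not the project's actual code or methods; all sizes, initializations, and the task itself are arbitrary choices.

```python
# Minimal illustrative sketch (assumptions, not the project's method):
# a vanilla RNN trained by full BPTT to recall the first bit of a sequence.
import numpy as np

rng = np.random.default_rng(0)
T, H, lr = 8, 16, 0.1              # sequence length, hidden units, learning rate

# Parameters: input->hidden, hidden->hidden (the feedback connections),
# hidden->output, and biases.
Wxh = rng.normal(0, 0.2, (H, 1))
Whh = rng.normal(0, 0.2, (H, H))
Why = rng.normal(0, 0.2, (1, H))
bh, by = np.zeros((H, 1)), np.zeros((1, 1))

def forward(x):
    """Run the RNN over sequence x; return all hidden states and the final output."""
    hs = [np.zeros((H, 1))]
    for t in range(T):
        h = np.tanh(Wxh * x[t] + Whh @ hs[-1] + bh)   # recurrent state update
        hs.append(h)
    y = Why @ hs[-1] + by                             # read out at the last step only
    return hs, y

for step in range(2000):
    x = rng.integers(0, 2, T).astype(float)
    target = x[0]                      # recalling it requires an internal state
    hs, y = forward(x)
    err = y - target                   # gradient of 0.5 * squared error w.r.t. y

    # BPTT: push the error back through the recurrent connections.
    dWhy = err @ hs[-1].T
    dby = err.copy()
    dWxh, dWhh, dbh = np.zeros_like(Wxh), np.zeros_like(Whh), np.zeros_like(bh)
    dh = Why.T @ err
    for t in reversed(range(T)):
        dz = (1 - hs[t + 1] ** 2) * dh                # tanh derivative
        dWxh += dz * x[t]
        dWhh += dz @ hs[t].T
        dbh += dz
        dh = Whh.T @ dz
    for p, g in [(Wxh, dWxh), (Whh, dWhh), (Why, dWhy), (bh, dbh), (by, dby)]:
        p -= lr * g                                   # plain gradient descent

    if step % 500 == 0:
        print(f"step {step:4d}  squared error {(err ** 2).item():.4f}")
```

Gradient-based training of this kind is exactly where long time lags become problematic (vanishing or exploding gradients over many steps), which is one motivation for the improved supervised RNN architectures and the combination with non-gradient-based methods pursued in this project.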