Theory and Practice of Reinforcement Learning
Reinforcement Learning (RL) is closely related to how animals and humans act and learn. Without a teacher, solely from occasional real-valued pain and pleasure signals, RL agents must discover how to interact with a dynamic environment so as to maximize their expected future reward. The traditional approach to RL makes strong assumptions about the environment, such as the Markov assumption: the agent's current input tells it everything it needs to know about the environment. This is often unrealistic. If we want to narrow the gap between the learning abilities of humans and machines, we will have to study how to learn general algorithms instead of reactive mappings. This area seems destined to become central to machine learning (e.g., robotics) and to artificial intelligence in general.

In previous work we have already devised general algorithms that are applicable in non-Markovian settings. In the current project we will further extend and apply our recent state-of-the-art results on practical and theoretical aspects of general reinforcement learners and search / optimization algorithms. We will apply our methods to the most difficult RL benchmark problems, and augment the existing benchmark lists with new hard problems for partially observable environments in which existing RL methods have great difficulties.
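The limitation of reactive mappings under partial observability can be illustrated with a minimal sketch. The toy task below (our own illustrative construction, loosely in the spirit of T-maze benchmarks; all names are hypothetical) shows a cue at the start of an episode, indistinguishable corridor observations, and a junction where the agent must turn toward the cued side. A reactive policy that sees only the current observation cannot beat chance at the junction, whereas an agent that remembers the cue, i.e., learns a function of the history rather than of the current input, solves the task perfectly.

```python
import random

def run_tmaze(policy, corridor_len=3, episodes=200):
    """Average reward of `policy` on a tiny T-maze-style task.

    The first observation is a cue ('L' or 'R'); every corridor step
    yields the identical observation 'C'; at the junction the agent
    sees 'J' and must turn toward the cued side for reward 1, else 0.
    """
    total = 0.0
    for _ in range(episodes):
        cue = random.choice("LR")
        history = [cue] + ["C"] * corridor_len + ["J"]
        action = policy(history)          # policy acts at the junction
        total += 1.0 if action == cue else 0.0
    return total / episodes

def reactive_policy(history):
    # Sees only the current observation 'J', which carries no cue
    # information: any rule based on it alone averages 50% reward.
    return "L"

def memory_policy(history):
    # Remembers the initial cue, so it always turns correctly.
    return history[0]
```

Running both policies makes the gap concrete: `run_tmaze(memory_policy)` returns 1.0, while `run_tmaze(reactive_policy)` hovers around 0.5, no matter how the reactive rule is chosen. This is the sense in which non-Markovian problems demand learning algorithms over histories rather than reactive input-to-action mappings.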