
Advanced Cooperative Neuro-evolution for Unsupervised Learning and Autonomous Control



Gomez F.



There are many real-world problems where a system must be controlled by repeatedly measuring its state and selecting the best action from a set of possible choices. Because the system can be very complex (i.e. non-linear, high-dimensional, noisy, unstable, etc.), programming a solution directly is often infeasible, since it is difficult to know the effect of each action in advance. The field of Reinforcement Learning has contributed many methods that, in principle, can solve these types of problems. However, in practice, they have not scaled well to large state spaces or partially observable environments. This is a serious problem because the real world is continuous, and artificial agents, like natural organisms, are necessarily constrained in their ability to fully perceive their environment. More recently, methods for evolving artificial neural networks, or neuroevolution (NE), have shown promising results on continuous, partially observable tasks. While these results are encouraging, NE has to date only been applied successfully to control tasks with a limited number of inputs (typically fewer than 100). However, to solve sophisticated tasks in real-world domains such as autonomous robotics, much richer sources of sensory information, such as vision, are required. To scale to these conditions, this project intends to achieve three interacting goals:

1. Improve our existing algorithms to scale them to much larger input spaces. The tasks we tackle in this proposal use artificial vision, which will require algorithms capable of efficiently evolving much larger networks than those currently used.

2. Analyze these algorithms empirically to better understand the mechanisms responsible for the increased search efficiency observed in cooperative coevolution. This component of our work will play an important role in the development of our new algorithms.

3. Apply our existing and new algorithms to: (a) unsupervised learning: neuroevolution is almost always applied to reinforcement learning tasks. Here we will instead apply it to the very general task of redundancy reduction using the principle of predictability minimization. The idea is to evolve statistically independent codes for complex input spaces that can be used directly (e.g. for compression) or as a powerful preprocessor for control tasks. (b) control: we propose two robotics tasks (in simulation) that will use vision and the redundancy-reduction preprocessing: a dual-arm control task, where two robotic arms must cooperate to solve a task, and an autonomous mobile robot task, where a robot equipped with an arm must locate, move toward, and grasp a target object.
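To make the cooperative-coevolution idea behind goal 2 concrete, the following is a minimal toy sketch (not the project's actual algorithm): each network weight is given its own subpopulation, complete networks are assembled by combining one candidate value from each subpopulation, and each candidate is credited with the fitness of the network it participated in. All function names, parameters, and the toy fitness task below are illustrative assumptions, not part of the proposal.

```python
import random

def cooperative_coevolution(num_weights, pop_size, generations, fitness_fn, seed=0):
    """Toy cooperative-coevolution sketch: one subpopulation per weight."""
    rng = random.Random(seed)
    # One subpopulation of candidate values per network weight.
    subpops = [[rng.uniform(-1, 1) for _ in range(pop_size)]
               for _ in range(num_weights)]
    for _ in range(generations):
        # Shuffle each subpopulation, then column i forms complete network i.
        for sp in subpops:
            rng.shuffle(sp)
        scores = [fitness_fn([sp[i] for sp in subpops]) for i in range(pop_size)]
        # Selection: keep the best half of the networks (columns),
        # refill the other half with mutated copies of the survivors.
        order = sorted(range(pop_size), key=lambda i: scores[i], reverse=True)
        keep = order[:pop_size // 2]
        for sp in subpops:
            survivors = [sp[i] for i in keep]
            children = [w + rng.gauss(0, 0.1) for w in survivors]
            sp[:] = survivors + children
    # Return the best network in the final population.
    best = max(range(pop_size), key=lambda i: fitness_fn([sp[i] for sp in subpops]))
    return [sp[best] for sp in subpops]

# Toy task (stand-in for a control task): find weights close to a target vector.
target = [0.5, -0.3, 0.8]
fit = lambda w: -sum((a - b) ** 2 for a, b in zip(w, target))
best = cooperative_coevolution(len(target), pop_size=20, generations=100, fitness_fn=fit)
```

The point of the decomposition is that search proceeds in the small subpopulation spaces rather than the full weight space, which is one hypothesis for the increased search efficiency the project proposes to analyze.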

Additional information

Start date
End date
24 Months
Funding sources
Swiss National Science Foundation / Project Funding / Mathematics, Natural and Engineering Sciences (Division II)