
Scientific Learning


Favino M.

Course director

Kothari H.

Course director

Malaspina A.



Analysing and quantifying the approximation properties of neural networks is of paramount importance for the design of efficient architectures. In this course, we will first present approximation error estimates for neural networks and discuss their relation to the underlying architectures. This will be done in the context of differential equations and inverse problems, which will provide our test cases. We will then connect these insights to the training of neural networks. Here, we will present multi-scale and decomposition methods for training, which allow for the efficient and parallel training of neural networks.


Obtain knowledge of: the mathematical description of neural networks; central concepts and ideas of approximation theory for neural networks; fundamental properties of different first- and second-order optimization techniques for training neural networks; parallel and multiscale methods for training neural networks; solving differential equations with neural networks (physics-informed neural networks, PINNs).
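To make the last objective concrete, the following is a minimal, illustrative sketch of the PINN idea: a small one-hidden-layer tanh network is trained so that its residual for the ODE u'(x) + u(x) = 0 with u(0) = 1 (exact solution exp(-x)) is small at a set of collocation points. All choices here (network width, collocation grid, finite-difference gradients, step size) are illustrative assumptions, not the methods taught in the course.

```python
import numpy as np

rng = np.random.default_rng(0)

H = 6                            # hidden width (illustrative choice)
X = np.linspace(0.0, 1.0, 20)    # collocation points on [0, 1]

def unpack(p):
    # Parameters flattened into one vector: input weights w, biases b,
    # output weights v, output bias c.
    return p[:H], p[H:2*H], p[2*H:3*H], p[3*H]

def u(p, x):
    # Network output u_theta(x) = v . tanh(w x + b) + c
    w, b, v, c = unpack(p)
    return np.tanh(np.outer(x, w) + b) @ v + c

def du(p, x):
    # Analytic derivative wrt x: d/dx tanh(w x + b) = w (1 - tanh^2)
    w, b, v, c = unpack(p)
    t = np.tanh(np.outer(x, w) + b)
    return (1.0 - t**2) @ (v * w)

def loss(p):
    # Physics residual of u' + u = 0 plus the boundary condition u(0) = 1
    residual = du(p, X) + u(p, X)
    bc = u(p, np.array([0.0]))[0] - 1.0
    return np.mean(residual**2) + bc**2

def grad_fd(p, eps=1e-6):
    # Central finite-difference gradient; real PINNs use automatic
    # differentiation, this keeps the sketch dependency-free.
    g = np.zeros_like(p)
    for i in range(p.size):
        e = np.zeros_like(p)
        e[i] = eps
        g[i] = (loss(p + e) - loss(p - e)) / (2.0 * eps)
    return g

p = 0.5 * rng.standard_normal(3 * H + 1)
loss0 = loss(p)
for _ in range(800):             # plain gradient descent
    p -= 0.02 * grad_fd(p)
print("initial loss:", loss0, "final loss:", loss(p))
```

The course covers far more capable training schemes (second-order, multilevel, and decomposition methods); this sketch only shows how the differential equation enters the loss function.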


Teaching mode

In presence

Learning methods

Lecture, reading, self-study, hands-on implementation, discussion, tutorial, written weekly assignments.

Examination information

There will be a midterm, either as a larger project-style assignment or as a written exam. The final exam will be written. The written weekly assignments will also count toward the final grade.