Optimization is of fundamental importance in virtually all branches of science and technology. Optimization methods find applications in numerous fields, ranging from network flow problems over shape optimization in engineering to optimal control and machine learning. This course provides an introduction to the most important methods and techniques in discrete and continuous optimization. We will present, analyze, implement, and test (using illustrative examples) methods for discrete and continuous optimization. Particular emphasis will be put on the methodology and the underlying mathematical and algorithmic structure. Starting from basic methods such as the Simplex method, we will consider various methods in convex as well as non-convex optimization. This includes optimality conditions, the handling of linear and non-linear constraints, and methods such as interior point methods for convex optimization, Newton's method, trust-region methods, and optimal control methods.
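As a small illustration of the style of method treated in the course, the following is a minimal sketch (an illustrative example, not course material; the function and starting point are chosen for demonstration only) of Newton's method for unconstrained one-dimensional minimization:

```python
def newton_minimize(df, d2f, x0, tol=1e-10, max_iter=50):
    """Newton's method for 1-D unconstrained minimization:
    iterate x <- x - f'(x) / f''(x) until the gradient is small."""
    x = x0
    for _ in range(max_iter):
        g = df(x)
        if abs(g) < tol:  # first-order optimality condition (approximately) met
            break
        x -= g / d2f(x)
    return x

# Example: minimize f(x) = x**4 / 4 - x, whose unique minimizer is x = 1
# (f'(x) = x**3 - 1, f''(x) = 3 * x**2).
x_star = newton_minimize(lambda x: x**3 - 1, lambda x: 3 * x**2, x0=2.0)
print(x_star)  # close to 1.0
```

Near a minimizer with positive second derivative, this iteration converges quadratically, which is one reason Newton-type methods are a recurring theme in the course.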
Students will obtain knowledge of: central concepts and ideas of optimization; fundamental optimization techniques, both gradient-based and gradient-free; convex and non-convex optimization; constrained and unconstrained optimization; optimality conditions; an introduction to optimization in machine learning.
Lecture, reading, self study, hands-on implementation, discussion, tutorial, written weekly assignments.
There will be a midterm, either as a larger project-like assignment or as a written exam. The final exam will be written. The written weekly assignments will also count toward the final grade.