
Large-Scale Optimization

Description

Computational science inevitably leads to systems of equations, and to functions that must be optimized subject to further equations. The course begins with iterative methods for solving sparse Ax = b and least-squares problems, using the Lanczos process for symmetric systems and Golub-Kahan bidiagonalization for more general systems. The associated solvers are CG, MINRES, SYMMLQ, LSQR, and LSMR; all require minimal storage and are guaranteed to converge (in exact arithmetic).

We then study the simplex and reduced-gradient methods for optimization subject to sparse linear constraints (and bounds on the variables), with the LUSOL package providing reliable basis factorizations and updates. Interior methods handle bounds differently but still rely on sparse-matrix methods, as illustrated by PDCO. Finally, we explore augmented Lagrangian and SQP methods for handling sparse linear and nonlinear constraints (LANCELOT, MINOS, SQOPT, SNOPT).

The course will not be offered in the academic year 2016-17.
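To illustrate the flavor of the Lanczos-based solvers listed above, here is a minimal conjugate-gradient (CG) sketch for a sparse symmetric positive-definite system Ax = b. This is an illustrative sketch only, not course material; it assumes a SciPy sparse matrix and uses a plain residual-norm stopping test.

```python
import numpy as np
import scipy.sparse as sp

def cg(A, b, tol=1e-10, maxiter=1000):
    """Solve Ax = b for symmetric positive-definite A by conjugate gradients.

    Storage is minimal: only the vectors x, r, p, and Ap are kept.
    """
    x = np.zeros_like(b)
    r = b - A @ x              # initial residual
    p = r.copy()               # initial search direction
    rs = r @ r
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rs / (p @ Ap)  # exact step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # next A-conjugate direction
        rs = rs_new
    return x

# Example: sparse SPD tridiagonal system (a 1-D discrete Laplacian)
n = 100
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
x = cg(A, b)
print(np.linalg.norm(A @ x - b))
```

In production one would call `scipy.sparse.linalg.cg` (or MINRES, LSQR, etc.) rather than hand-rolling the iteration; the point here is that each step needs only one matrix-vector product and a few vectors of storage.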

 

References

  • Jorge Nocedal and Stephen J. Wright, Numerical Optimization, 2nd edition, Springer Series in Operations Research and Financial Engineering, Springer, 2006.


Additional information

Semester
Fall
Academic year
2016-2017
ECTS
3
Education
Master of Science in Computational Science, Elective course, Lecture, 2nd year

PhD programme of the Faculty of Informatics, Elective course, Lecture, 1st and 2nd year (2 ECTS)