Computational science inevitably leads to systems of equations and to functions to optimize subject to further equations. The course starts with iterative methods for solving sparse Ax=b and least-squares problems, using the Lanczos process for symmetric systems and the Golub-Kahan bidiagonalization for more general systems. The associated solvers are CG, MINRES, SYMMLQ, LSQR, and LSMR. All of these methods need only minimal storage and are guaranteed to converge (in exact arithmetic). We then study the simplex and reduced-gradient methods for optimization subject to sparse linear constraints (and bounds on the variables), with the LUSOL package providing reliable basis factorization and updates. Interior methods handle bounds differently but still need sparse-matrix methods, as illustrated by PDCO. We then explore augmented Lagrangian methods and SQP methods for handling sparse linear and nonlinear constraints (LANCELOT, MINOS, SQOPT, SNOPT).
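To make the "minimal storage" point concrete, here is a small sketch (not course material) of the conjugate-gradient method in pure Python. The matrix enters only through matrix-vector products, so the solver stores just a handful of vectors regardless of how sparse or large A is; the 2x2 SPD example system is an illustration chosen for this sketch.

```python
def cg(matvec, b, tol=1e-10, maxiter=100):
    """Solve A x = b for symmetric positive definite A, given only a matvec.

    Stores only the vectors x, r, p, Ap -- the storage-light style shared
    by CG, MINRES, and the other Krylov solvers mentioned above.
    """
    n = len(b)
    x = [0.0] * n
    r = list(b)                      # residual r = b - A x  (x = 0 initially)
    p = list(r)                      # search direction
    rs = sum(ri * ri for ri in r)    # squared residual norm
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rs / sum(pi * Api for pi, Api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * Api for ri, Api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol * tol:       # converged: ||r|| < tol
            break
        beta = rs_new / rs
        p = [ri + beta * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# Example SPD system: A = [[4,1],[1,3]], b = [1,2]; exact x = (1/11, 7/11).
A = [[4.0, 1.0], [1.0, 3.0]]
matvec = lambda v: [sum(aij * vj for aij, vj in zip(row, v)) for row in A]
x = cg(matvec, [1.0, 2.0])
```

In practice one would call a production implementation (e.g. the solvers named above) rather than this sketch; the point is only that the matrix is accessed solely through products A*v.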
BIO
Michael Saunders holds a PhD in Computer Science from Stanford University. Since 1987 he has been a Professor in Operations Research at Stanford, specializing in numerical algorithms for sparse linear systems and large-scale constrained optimization. He is coauthor of the linear equation solvers SYMMLQ, MINRES, LSQR, LSMR, LUSOL, LUMOD, LSRN and the optimization solvers MINOS, LSSOL, NPSOL, QPOPT, SNOPT, SQOPT, PDCO. He is an Honorary Fellow of the Royal Society of New Zealand (2007) and a SIAM Fellow (2013). In 2012 he won the SIAM Linear Algebra Prize and was elected to the Stanford University Invention Hall of Fame.
- Jorge Nocedal and Stephen J. Wright, Numerical Optimization, 2nd edition, Springer Series in Operations Research and Financial Engineering, Springer, 2006