PhD course: Convex Optimization

Please note that this year’s course will be held online due to government-mandated closures aimed at slowing the spread of COVID-19.

Course number: 02953

Schedule: Three-week period, June 4–25, 2020 (note that June 5 is a holiday)

Points (ECTS): 5

Scope and form: Online lectures, pre-recorded lectures, exercises, and mandatory assignments.

Type of assessment: Evaluation of assignments/report(s).

Evaluation: pass/fail, internal examiner

Recommended prerequisites: Coursework in linear algebra (e.g., 01005) and numerical algorithms (e.g., 02601), introductory-level coursework in optimization (e.g., 02610), a certain degree of mathematical maturity, and proficiency in a high-level programming language such as MATLAB, Python, or Julia.

General course objectives

The aim of the course is to provide students with a general overview of convex optimization theory, its applications, and computational methods for large-scale optimization. The students will learn how to recognize convex optimization problems and how to solve them numerically, either by using an existing software library or by deriving and implementing a suitable method that exploits problem structure. As part of the course, the students will work on a project that gives them the opportunity to put theory to work in a practical, application-oriented context.
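To give a flavor of what solving a convex problem with an existing software library can look like, here is a minimal sketch in Python using the CVXPY modeling package; the library choice, problem instance, and synthetic data are illustrative assumptions rather than course requirements (CVX in MATLAB or Convex.jl in Julia would serve equally well).

    # Minimal sketch (illustrative only): formulate and solve a small convex
    # problem -- nonnegative least squares -- with an existing modeling library.
    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(0)
    m, n = 30, 10
    A = rng.standard_normal((m, n))   # synthetic problem data
    b = rng.standard_normal(m)

    # minimize ||A x - b||_2^2  subject to  x >= 0
    x = cp.Variable(n)
    problem = cp.Problem(cp.Minimize(cp.sum_squares(A @ x - b)), [x >= 0])
    problem.solve()

    print("optimal value:", problem.value)
    print("solution:", x.value)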

Learning objectives

A student who has met the objectives of the course will be able to:

  • recognize and characterize convex functions and sets
  • explain/characterize the subdifferential of a convex function
  • describe basic concepts of convex analysis
  • derive the Lagrange dual of a convex optimization problem (a worked example follows this list)
  • recognize and formulate conic constraints
  • derive a convex relaxation of nonconvex quadratic problems
  • implement a first-order method for a large-scale optimization problem with structure
  • evaluate the computational performance of an optimization algorithm
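
To make the dual-derivation objective concrete, the short worked example below derives the Lagrange dual of a standard-form linear program; the choice of this particular instance is ours and is not taken from the course material.

    \begin{aligned}
    \text{primal:}\quad & \min_x\ c^T x \quad \text{s.t.}\ Ax = b,\ x \ge 0\\
    \text{Lagrangian:}\quad & L(x, y, s) = c^T x + y^T (b - Ax) - s^T x, \qquad s \ge 0\\
    \text{dual function:}\quad & g(y, s) = \inf_x L(x, y, s) =
      \begin{cases} b^T y & \text{if } A^T y + s = c\\ -\infty & \text{otherwise} \end{cases}\\
    \text{dual problem:}\quad & \max_y\ b^T y \quad \text{s.t.}\ A^T y \le c
    \end{aligned}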

Content

  • Convex analysis (convex sets and functions, convex conjugate, duality, dual norms, composition rules, subgradient calculus)
  • Conic optimization (linear optimization, second-order cone optimization, semidefinite optimization)
  • First-order methods for smooth and nonsmooth optimization (proximal gradient methods, acceleration)
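
As a concrete (and deliberately simple) example of the third content area, the sketch below implements a basic proximal gradient (ISTA-type) iteration for the l1-regularized least-squares problem; the problem instance, step-size choice, and iteration count are illustrative assumptions rather than course specifications.

    # Illustrative sketch: proximal gradient (ISTA-type) method for the lasso
    # problem  minimize 0.5*||A x - b||_2^2 + lam*||x||_1.
    import numpy as np

    def soft_threshold(v, t):
        # Proximal operator of t*||.||_1 (elementwise soft-thresholding).
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def ista(A, b, lam, num_iters=500):
        L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the smooth term's gradient
        step = 1.0 / L
        x = np.zeros(A.shape[1])
        for _ in range(num_iters):
            grad = A.T @ (A @ x - b)       # gradient of 0.5*||A x - b||^2
            x = soft_threshold(x - step * grad, step * lam)
        return x

    # Example usage on synthetic data.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 50))
    x_true = np.zeros(50)
    x_true[:5] = 1.0
    b = A @ x_true + 0.01 * rng.standard_normal(100)
    x_hat = ista(A, b, lam=0.1)
    print("nonzero entries recovered:", int(np.sum(np.abs(x_hat) > 1e-3)))

Accelerated variants (e.g., FISTA, covered via the Beck and Teboulle paper in the literature list below) replace the plain iteration with an extrapolated one to improve the convergence rate.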

Course literature

The following two textbooks (both available online) are used:

  • S. Boyd and L. Vandenberghe: “Convex Optimization”, Cambridge University Press, 2004.
  • A. Ben-Tal and A. Nemirovski: “Lectures on Modern Convex Optimization”, lecture notes, 2013.

In addition to the textbooks, the following papers serve as useful references:

  • S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein: “Distributed optimization and statistical learning via the alternating direction method of multipliers”, Foundations and Trends in Machine Learning, 3(1):1–122, 2011.
  • N. Parikh and S. Boyd: “Proximal Algorithms”, Foundations and Trends in Optimization, 1(3):123–231, 2014.
  • E. K. Ryu and S. Boyd: “Primer on Monotone Operator Methods”, Appl. Comput. Math., 15(1):3–43, 2016.
  • A. Beck and M. Teboulle: “A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems”, SIAM J. Imaging Sci., 2(1):183–202, 2009.

Course responsible

Associate Professor Martin S. Andersen, DTU Compute

Registration

To register for this course, please go to the Study Planner and enroll in 02953 no later than May 20, 2020. (Need help? See the guide on how to register for courses in the Study Planner.)

To register as a guest PhD student, visit this page, where you will find additional information for guest students, including a course registration form.

If you have any questions, please send an email to Martin S. Andersen (mskan@dtu.dk).