Continuous Algorithms

Santosh Vempala

Spring 2019. MW 9:30-10:45am. Howey N210

The design of algorithms is traditionally a discrete endeavor. However, many advances have come from a continuous viewpoint. Typically, a continuous process, deterministic or randomized, is designed (or shown) to have desirable properties, such as approaching an optimal solution or a desired distribution, and an algorithm is derived from it by an appropriate discretization. In interesting and general settings, the current fastest methods are a consequence of this perspective. We will discuss several examples of algorithms for high-dimensional optimization and sampling, and use them to understand the following concepts in detail.
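As a minimal illustration of this "continuous process plus discretization" viewpoint (a sketch, not taken from the course notes): gradient descent can be viewed as the forward-Euler discretization of the gradient flow x'(t) = -∇f(x(t)). The function, step size, and iteration count below are illustrative choices.

```python
import numpy as np

def gradient_descent(grad, x0, step=0.1, iters=100):
    """Forward-Euler discretization of the gradient flow x' = -grad f(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        # One Euler step: x_{k+1} = x_k - step * grad f(x_k)
        x = x - step * grad(x)
    return x

# Example: f(x) = ||x||^2 / 2, so grad f(x) = x; the flow converges to 0.
x_star = gradient_descent(lambda x: x, x0=[1.0, -2.0], step=0.1, iters=200)
```

Shrinking the step size makes the iterates track the continuous flow more closely, at the cost of more iterations; much of the course develops sharper versions of this trade-off.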

The course is also being taught on the West Coast by Yin Tat Lee.

Prerequisite: basic knowledge of algorithms, probability, and linear algebra.

Lecture notes.

Schedule (tentative):

  1. Introduction
    Jan 7. Course overview. Gradient descent
    Jan 9. Langevin dynamics
  2. Elimination
    Jan 14. Ellipsoid/CoG/Cutting Plane for Optimization
    Jan 16. ... for Volume computation
    Jan 21. MLK holiday
    Jan 23. Balls and Parabolas
  3. Reduction
    Jan 28, 30. Duality (LP duality, SDP duality, ...) and Equivalences (Optimization, Membership, Separation; OPT, Integration->Sampling; Maxflow, Bipartite Matching)
  4. Geometrization I: Optimization
    Feb 4. Newton method
    Feb 6. IPM for LP
    Feb 18. Andre Wibisono: Higher-Order Gradient Methods
  5. Geometrization II: Sampling
    Feb 11. ARC 12.
    Feb 13. Conductance, Ball walk
    Feb 25. Mixing of ball walk, Isoperimetry, Isotropy.
    Feb 27. Hit-and-Run
    Mar 4. Dikin walk
    Mar 6. (Riemannian) HMC
  6. Multiplication
    Mar 11. Simulated Annealing.
    Mar 13. Volume, Integration, Rounding.
    Mar 18, 20 (Spring break)
  7. Expansion
    Mar 25. ODE. Collocation Method.
    Mar 27. SDE. Ito's lemma. Localization.
  8. Sparsification
    Apr 1. Regression and subspace embeddings.
  9. Acceleration
    Apr 3. Chebyshev polynomials. Accelerated Gradient Descent.
  10. Student presentations
    Apr 8. Shengding (Representing the PSD cone). Cyrus+Majid (Escaping Saddle Points).
    Apr 10. Sam (Hard Sphere Model), Zongchen (Convergence rate of Hamiltonian Monte Carlo).
    Apr 15. Aditi+He (Star-convex Functions), Rohan (Applications of HMC).
    Apr 17. Jiaming (Nonconvex optimization with SGD), Gabriel+James (Stochastic Billiards).
Format: This course is a research seminar. I will give most of the lectures, and class participants will give the rest.
A student taking the class for credit is expected to (possibly partnering with another student):
(a) Each week: proofread lecture notes and suggest improvements, OR do one HW problem.
(b) Present in class
(c) Study in more depth the topic of (a) or (b) and prepare an expository note and/or work on a related open problem.