3 credits
Fall 2026 · Lecture · Upper Division
Modern large-scale algorithms for convex optimization, with a heavy emphasis on analysis, including monotone operators, fixed-point iteration, and duality in splitting methods. The course covers three parts: smooth optimization algorithms, nonsmooth convex optimization algorithms, and stochastic and randomized algorithms. Permission of department required. Prerequisites: MA 51100 and MA 50400.
Learning Outcomes
1. Learn standard convergence results for the gradient descent method and Newton's method, and their generalizations, for smooth problems (see the first sketch after this list).
2. Learn convergence results for the subgradient method and the proximal point method for nonsmooth convex problems (second sketch below).
3. Learn about popular splitting methods and the proof of their convergence using monotone operators and fixed-point iteration (third sketch below).
4. Learn about the convergence of stochastic algorithms such as randomized coordinate descent, stochastic gradient descent, and Langevin dynamics (fourth sketch below).
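To make the flavor of outcome 1 concrete, here is a minimal sketch, not part of the course materials: gradient descent with the classical step size 1/L and Newton's method on a toy strongly convex quadratic. The matrix A, vector b, and iteration counts are illustrative assumptions.

```python
# Minimal sketch (illustrative, not course code): gradient descent and
# Newton's method on f(x) = 0.5 * x^T A x - b^T x, where grad f(x) = A x - b
# and the Hessian is A. A and b below are arbitrary toy data.
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + np.eye(5)            # symmetric positive definite
b = rng.standard_normal(5)
x_star = np.linalg.solve(A, b)     # exact minimizer, for reference

L = np.linalg.eigvalsh(A)[-1]      # smoothness constant = largest eigenvalue
x = np.zeros(5)
for _ in range(200):               # gradient descent: x <- x - (1/L) grad f(x)
    x = x - (1.0 / L) * (A @ x - b)
print("GD error:", np.linalg.norm(x - x_star))

x = np.zeros(5)                    # Newton: x <- x - [Hess f]^{-1} grad f(x)
x = x - np.linalg.solve(A, A @ x - b)   # one step suffices on a quadratic
print("Newton error:", np.linalg.norm(x - x_star))
```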
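For outcome 2, a hedged sketch of the two nonsmooth methods on a toy problem of my choosing, f(x) = |x - 3|: the subgradient method with diminishing steps t_k = 1/sqrt(k), and the proximal point method x <- prox_{lam f}(x), whose prox here is a shifted soft-threshold.

```python
# Minimal sketch (toy example, not course code): subgradient method and
# proximal point method on the nonsmooth convex f(x) = |x - 3|.
import numpy as np

def subgrad(x):                    # a subgradient of |x - 3|
    return np.sign(x - 3.0)

x = 10.0
for k in range(1, 1001):           # subgradient method with steps 1/sqrt(k)
    x -= subgrad(x) / np.sqrt(k)
print("subgradient method:", x)

def prox(v, lam):                  # prox of lam*|x - 3|: shifted soft-threshold
    return 3.0 + np.sign(v - 3.0) * max(abs(v - 3.0) - lam, 0.0)

x = 10.0
for _ in range(20):                # proximal point method: x <- prox_{lam f}(x)
    x = prox(x, lam=1.0)
print("proximal point method:", x)
```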
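For outcome 3, a sketch under my own assumptions of one popular splitting method, Douglas-Rachford, written as the fixed-point iteration of a firmly nonexpansive map; the objective |x| + 0.5*(x - 2)^2 is a toy choice whose minimizer is x* = 1, which lets the convergence be checked by eye.

```python
# Minimal sketch (illustrative): Douglas-Rachford splitting for
# min f(x) + g(x), f(x) = |x|, g(x) = 0.5*(x - 2)^2, as the fixed-point
# iteration z <- z + prox_f(2*prox_g(z) - z) - prox_g(z).
import numpy as np

def prox_f(v, lam=1.0):            # prox of lam*|.| : soft-thresholding
    return np.sign(v) * max(abs(v) - lam, 0.0)

def prox_g(v, lam=1.0):            # prox of lam*0.5*(x - 2)^2, closed form
    return (v + 2.0 * lam) / (1.0 + lam)

z = 5.0
for _ in range(100):               # iterate the DR fixed-point map
    x = prox_g(z)
    z = z + prox_f(2.0 * x - z) - x
print("DR splitting:", prox_g(z))  # converges to the minimizer x* = 1
```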
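Finally, for outcome 4, a sketch (again an assumption-laden toy, not course material) of stochastic gradient descent on a least-squares objective f(x) = (1/n) sum_i 0.5*(a_i^T x - b_i)^2, sampling one term per step and using a diminishing step size.

```python
# Minimal sketch (illustrative): SGD on least squares with one sampled
# term per step and diminishing steps t_k = 1/(L0 + k), L0 = max_i ||a_i||^2.
import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 5
Adata = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
bdata = Adata @ x_true                 # noiseless, so x_true is the minimizer
L0 = (Adata ** 2).sum(axis=1).max()    # bound on per-sample smoothness

x = np.zeros(d)
for k in range(1, 20001):
    i = rng.integers(n)                            # sample one term uniformly
    g = Adata[i] * (Adata[i] @ x - bdata[i])       # unbiased gradient estimate
    x -= g / (L0 + k)                              # diminishing step size
print("SGD error:", np.linalg.norm(x - x_true))
```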