3 credits
Fall 2025 Lecture

An introduction to modern generative models such as diffusion models (e.g., "Stable Diffusion"), variational autoencoders, normalizing flows, and energy-based models, with a focus on derivations from the perspective of statistical learning theory. We build up from the basics, starting with probabilistic graphical models, which provide the framework for many of the ideas in the class.

What exactly are generative models? They are a powerful alternative to discriminative models: when properly specified, they estimate their parameters more efficiently, can generate samples from the distribution of their input data, and (like discriminative models) can also be used to infer features or labels from their inputs. However, the generative and inferential faculties typically come at each other's expense. This course covers five different attempts at finessing this trade-off, and the resulting learning algorithms: exact inference in directed graphical models (the EM algorithm); sampling-based methods in undirected (energy-based) models; deterministic approximate inference ("variational" methods, e.g., VAEs); invertible, deterministic models (e.g., ICA, normalizing flows); and adversarial training (GANs).
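To make the generative/discriminative contrast concrete, here is a minimal formulation; the notation (a parameter vector θ, inputs x, labels y) is illustrative rather than taken from the course materials. A discriminative model fits the conditional distribution directly, while a generative model fits the joint distribution, which supports both sampling and, via Bayes' rule, inference:

```latex
% Discriminative: model the conditional directly
p_\theta(y \mid x)

% Generative: model the joint, then infer labels via Bayes' rule
p_\theta(x, y) = p_\theta(y)\, p_\theta(x \mid y), \qquad
p_\theta(y \mid x) = \frac{p_\theta(y)\, p_\theta(x \mid y)}{\sum_{y'} p_\theta(y')\, p_\theta(x \mid y')}

% Generation: ancestral sampling from the learned joint
y \sim p_\theta(y), \qquad x \sim p_\theta(x \mid y)
```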
Learning Outcomes

1. Identify the trade-offs between inference and generation in generative models.
2. Translate a description of data into a probabilistic graphical model.
3. Identify the appropriate inference algorithm for a given dataset.
4. Implement the inference and learning algorithms for common generative models, both classical (Kalman filter, forward-backward, EM, particle filter, etc.) and modern (VAE, diffusion model, etc.); see the sketch after this list for a taste.
5. Present to their peers novel results that use the ideas in this course.
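As a taste of the implementation work named in outcome 4, here is a minimal VAE training step in PyTorch. It is a sketch under assumed choices (a Bernoulli likelihood, a diagonal-Gaussian approximate posterior, 784-dimensional binary inputs, and made-up hyperparameters), not the course's reference implementation.

```python
# A minimal VAE sketch in PyTorch -- illustrative only; the course's own
# implementations, datasets, and hyperparameters may differ.
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=16, h_dim=128):
        super().__init__()
        # Encoder (approximate posterior q(z|x)): outputs mean and log-variance.
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        # Decoder (likelihood p(x|z)): outputs Bernoulli logits per pixel.
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I).
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.dec(z), mu, logvar

def neg_elbo(logits, x, mu, logvar):
    # Reconstruction term: -E_q[log p(x|z)] for Bernoulli observations.
    recon = nn.functional.binary_cross_entropy_with_logits(
        logits, x, reduction="sum")
    # KL(q(z|x) || N(0, I)), available in closed form for diagonal Gaussians.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

# Usage: one gradient step on a random batch (a stand-in for real data).
model = VAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(32, 784).round()  # fake binary "images"
logits, mu, logvar = model(x)
loss = neg_elbo(logits, x, mu, logvar)
loss.backward()
opt.step()
```

Maximizing the ELBO (equivalently, minimizing `neg_elbo` above) is the "deterministic approximate inference" strategy from the course description: the encoder replaces exact posterior inference with a learned approximation.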