3 credits
Spring 2025 · Lecture · Upper Division

This course provides an integrated view of the key concepts of deep learning (representation learning) methods. It teaches the principles and methods needed to design and deploy novel deep learning models, emphasizing the relationship between traditional statistical models, causality, invariant theory, and the algorithmic challenges of deploying deep learning models in real-world applications. The course has both a theoretical and a coding component. It assumes familiarity with coding in the language used by state-of-the-art deep learning libraries, linear algebra, probability theory, and statistical machine learning.
Learning Outcomes

1. Understand the statistical foundations of deep learning.
2. Understand feedforward networks.
3. Understand stochastic optimization of neural network models.
4. Understand Bayesian neural networks.
5. Understand invariant and equivariant representation learning.
6. Understand task-invariant representations.
7. Understand meta-learning.
8. Understand multi-task learning.
9. Understand transfer learning.
10. Understand implicit generative models (probabilistic models without explicit likelihoods).
11. Understand variational auto-encoders.
12. Understand generative adversarial networks.
13. Understand Stable Diffusion generative models.
14. Understand how to evaluate the performance of neural networks, and how to formulate and test hypotheses.
15. Understand how theory and algorithmic elements interact to impact performance.