CS480/680: INTRODUCTION TO MACHINE LEARNING, Winter 2024, University of Waterloo

Overview

This course is an introduction to machine learning. Beyond the fundamentals, we will also cover topics you may be particularly interested in, e.g., the technical details of how to train your own ChatGPT.

Prerequisites:

Format:

Graded Student Work for CS480/680: Homeworks. We do not accept hand-written submissions; typesetting with LaTeX is recommended. Homework solutions will not be released; please attend the TAs' office hours to go over solutions.

Homework Policy: Completed assignments will be submitted through LEARN. Submit early and often! You must write your solutions independently and individually, and you should always acknowledge any help you get (book, friend, internet, etc.). Using AI to write homeworks is prohibited, and we may use tools to detect AI-generated submissions. Mark appeals should be requested within two weeks of receiving the mark. An appeal can go either way, so request one only if you truly believe something is wrong.
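
If you have not typeset with LaTeX before, a minimal homework skeleton along the following lines is enough to get started. This is only an illustrative sketch; the packages, title, and section names are placeholders, not a required template:

    % Illustrative LaTeX skeleton for a typeset homework (not a required template).
    \documentclass[11pt]{article}
    \usepackage{amsmath,amssymb} % standard math environments and symbols
    \usepackage{graphicx}        % for including figures

    \title{CS480/680 Homework 1}
    \author{Your Name (student ID)}
    \date{\today}

    \begin{document}
    \maketitle

    \section*{Question 1}
    Solutions can mix prose and display math, e.g.,
    \[
      \hat{w} \in \operatorname*{arg\,min}_{w} \frac{1}{n} \sum_{i=1}^{n} \bigl( y_i - w^\top x_i \bigr)^2 .
    \]

    \section*{Acknowledgements}
    List any help you received (books, friends, websites, etc.).
    \end{document}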

Late Policy: We do NOT accept late submissions unless you have a legitimate reason supported by formal documentation (e.g., hospitalization, family emergency). The documented date must fall within the 7 days before your homework deadline. Traveling, being busy with other commitments, internet disconnection, or simply forgetting to submit are not considered legitimate reasons. Without documentation, a late submission receives a score of 0, even if it is only one minute late (the LEARN submission portal closes on time, and we do NOT accept homework submissions by email). With documentation and the instructor's approval, you can receive a 7-day homework extension. Under university policy, undergraduate students may declare one short-term absence per term; if you have submitted such a declaration on Quest for a 2-day extension, please inform the head TA, Haochen Sun (h299sun@uwaterloo.ca), and provide a screenshot. Failing to do so (e.g., informing only the instructors or other TAs) will invalidate your application, and your delayed homework will still be marked late.

Textbook:

There is no required textbook, but the following fine texts are recommended.

Schedule (tentative)

Lecture 1 (Jan 9): Introduction. Slides: Link. Suggested reading: Deep Learning, Section 1. Instructor: Hongyang Zhang.

Lecture 2 (Jan 11), Classic ML: Perceptron. Slides: Link. Suggested reading: Patterns, Predictions, and Actions, Page 37. Instructor: Hongyang Zhang.

Jan 16, Classic ML: Perceptron (cont'd). Slides: Link. Suggested reading: Patterns, Predictions, and Actions, Page 37. Instructor: Hongyang Zhang.

Lecture 3 (Jan 18), Classic ML: Linear Regression. Slides: Link. Suggested reading: Probabilistic Machine Learning: An Introduction, Page 363. Instructor: Hongyang Zhang.

Lecture 4 (Jan 23), Classic ML: Linear Regression (cont'd); Logistic Regression. Slides: Link, Link. Suggested reading: Probabilistic Machine Learning: An Introduction, Page 333. Instructor: Hongyang Zhang.

Lecture 5 (Jan 25), Classic ML: Hard-Margin SVM. Slides: Link. Suggested reading: The Elements of Statistical Learning, Section 12.3. Instructor: Hongyang Zhang.

Lecture 6 (Jan 30), Classic ML: Soft-Margin SVM. Slides: Link. Suggested reading: The Elements of Statistical Learning, Section 12.3. Instructor: Hongyang Zhang.

Lecture 7 (Feb 1), Classic ML: Soft-Margin SVM (cont'd); Reproducing Kernels. Slides: Link, Link. Suggested reading: The Elements of Statistical Learning, Section 12.3. Instructor: Hongyang Zhang.

Lecture 8 (Feb 6), Classic ML: Gradient Descent. Slides: Link. Suggested reading: Convex Optimization, Section 9.3. Instructor: Hongyang Zhang.

Lecture 9 (Feb 8), Neural Nets: Gradient Descent (cont'd); Fully Connected NNs. Slides: Link, Link. Suggested reading: Deep Learning, Section 6. Instructor: Hongyang Zhang.

Feb 13, Neural Nets: Fully Connected NNs (cont'd). Slides: Link. Suggested reading: Deep Learning, Section 6. Instructor: Hongyang Zhang.

Lecture 10 (Feb 15), Neural Nets: Convolutional NNs. Slides: Link. Suggested reading: Deep Learning, Section 9. Instructor: Hongyang Zhang.

Feb 27, Neural Nets: Convolutional NNs (cont'd). Slides: Link. Suggested reading: Deep Learning, Section 9. Instructor: Hongyang Zhang.

Feb 29 (no class): Mid-term Exam. Time: 8:30am-9:50am. Location: MC 4040 and MC 4059. Instructor: Hongyang Zhang.

Lecture 11 (March 5), Neural Nets: Transformer. Slides: Link. Suggested reading: "Attention Is All You Need". Vaswani et al., 2017 (link). Instructor: Hongyang Zhang.

Lecture 12 (March 7), Modern ML Paradigms: Large Language Models. Slides: Link. Instructor: Hongyang Zhang. Suggested readings:
  • "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding". Devlin et al., 2018 (link)
  • (GPT-1) "Improving Language Understanding by Generative Pre-training". Radford et al., 2018 (link)
  • (GPT-2) "Language Models are Unsupervised Multitask Learners". Radford et al., 2019 (link)
  • (GPT-3) "Language Models are Few-Shot Learners". Brown et al., 2020 (link)
  • (GPT-3.5) "Training Language Models to Follow Instructions with Human Feedback". Ouyang et al., 2022 (link)
  • (GPT-4) "GPT-4 Technical Report". OpenAI, 2023 (link)
  • (Talk by Andrej Karpathy) State of GPT (link)

Lecture 13 (March 12), Modern ML Paradigms: GANs. Slides: Link. Suggested reading: Generative Adversarial Networks. Instructor: Yaoliang Yu.

Lecture 14 (March 14), Modern ML Paradigms: Flows. Slides: Link. Suggested reading: Sum-of-Squares Polynomial Flow. Instructor: Yaoliang Yu.

Lecture 15 (March 19), Modern ML Paradigms: Diffusion Models. Slides: Link. Suggested reading: Score-Based Generative Modeling through Stochastic Differential Equations. Instructor: Yaoliang Yu.

Lecture 16 (March 21), Modern ML Paradigms: Self-Supervised Learning. Slides: Link. Instructor: Yaoliang Yu. Suggested readings:
  • A Simple Framework for Contrastive Learning of Visual Representations
  • Learning Transferable Visual Models From Natural Language Supervision

Lecture 17 (March 26), Trustworthy ML: Robustness. Slides: Link. Instructor: Yaoliang Yu. Suggested readings:
  • (White-box) DeepFool: a simple and accurate method to fool deep neural networks
  • Towards Deep Learning Models Resistant to Adversarial Attacks
  • Theoretically Principled Trade-off between Robustness and Accuracy

Lecture 18 (March 28), Trustworthy ML: Poisoning. Slides: Link. Suggested reading: Exploring the limits of model-targeted indiscriminate data poisoning attacks. Instructor: Yaoliang Yu.

Lecture 19 (April 2), Trustworthy ML: Differential Privacy. Slides: Link. Suggested reading: Deep Learning with Differential Privacy. Instructor: Yaoliang Yu.

Lecture 20 (April 4), Trustworthy ML: Attribution. Slides: Link. Instructor: Yaoliang Yu. Suggested readings:
  • A Value for n-person Games
  • An Efficient Explanation of Individual Classifications using Game Theory

Mental Health: If you or anyone you know experiences academic stress, difficult life events, or feelings of anxiety or depression, we strongly encourage you to seek support.

On-campus Resources

Off-campus Resources

Diversity: It is our intent that students from all diverse backgrounds and perspectives be well served by this course, and that students’ learning needs be addressed both in and out of class. We recognize the immense value of the diversity in identities, perspectives, and contributions that students bring, and the benefit it has on our educational environment. Your suggestions are encouraged and appreciated. Please let us know ways to improve the effectiveness of the course for you personally or for other students or student groups. In particular: