Introduction
Lecture 1: Introduction
In this lecture we give an introductory example of a learning problem.
What we will learn:
Learning rectangles (a learner is sketched below).
Approximation error.
Union bound.
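As a preview of the rectangle example, here is a minimal sketch (in Python; the function names are ours, not the lecture's) of the tightest-fit learner: it outputs the smallest axis-aligned rectangle containing all positive examples. The lecture's analysis bounds its approximation error by a union bound over the four boundary strips the learned rectangle can miss.

    import numpy as np

    def tightest_rectangle(X, y):
        # Consistent learner for axis-aligned rectangles: return the
        # smallest rectangle (per-coordinate min/max) containing all
        # positively labeled points; None if there are no positives.
        pos = X[y == 1]
        if len(pos) == 0:
            return None
        return pos.min(axis=0), pos.max(axis=0)

    def rectangle_predict(rect, X):
        # Label +1 inside the learned rectangle, -1 outside.
        if rect is None:
            return -np.ones(len(X), dtype=int)
        low, high = rect
        inside = np.all((X >= low) & (X <= high), axis=1)
        return np.where(inside, 1, -1)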
Statistical Learning Theory
Lecture 2: The PAC Model
In this lecture we introduce the Probably Approximately Correct (PAC) model for classification.
What we will learn:
General setup.
Expected error.
The PAC model (stated formally below).
Halfspaces.
Empirical Risk Minimization.
Agnostic vs. realizable models.
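For reference, a standard formulation of the two central definitions of this lecture (notation varies across sources). The expected error of a hypothesis h with respect to a distribution D is

    L_{\mathcal{D}}(h) = \Pr_{(x,y) \sim \mathcal{D}}\big[ h(x) \neq y \big],

and an algorithm A PAC-learns a class H in the realizable model if for every \varepsilon, \delta \in (0,1) there is a sample size m(\varepsilon, \delta) such that, for any distribution realizable by H, an i.i.d. sample S of size m \ge m(\varepsilon, \delta) yields

    \Pr_{S}\big[\, L_{\mathcal{D}}(A(S)) \le \varepsilon \,\big] \ge 1 - \delta.

In the agnostic model the guarantee is instead relative to the best hypothesis in the class: L_{\mathcal{D}}(A(S)) \le \min_{h \in \mathcal{H}} L_{\mathcal{D}}(h) + \varepsilon.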
Statistical Learning Theory
Lecture 3: VC Dimension
In this lecture we introduce the notion of VC dimension and discuss the fundamental theorem of learning.
What we will learn:
Sample complexity lower bounds.
VC dimension.
VC dimension of rectangles.
VC dimension of halfspaces.
The fundamental theorem of learning (stated below).
Proper vs. improper learning.
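In quantitative form (a standard statement; constants and log factors vary by source), the fundamental theorem says that a class \mathcal{H} of VC dimension d is learnable if and only if d is finite, and that in the agnostic model ERM learns it with sample complexity

    m(\varepsilon, \delta) = \Theta\!\left( \frac{d + \log(1/\delta)}{\varepsilon^{2}} \right),

while in the realizable model O\!\left( \big( d \log(1/\varepsilon) + \log(1/\delta) \big) / \varepsilon \right) examples suffice.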
Statistical Learning Theory
Lecture 4: Neural Networks
In this lecture we discuss the class of neural networks through the lens of the PAC model.
What we will learn:
The class of neural networks.
Expressivity of neural networks.
Compositional properties of VC dimension.
Computational Learning Theory
Lecture 5: Learning Halfspaces
In this lecture we begin to discuss the algorithmic aspects of learning and review learning algorithms for halfspaces.
What we will learn:
Linear programming.
The Perceptron algorithm (sketched below).
The kernel trick and the kernelized Perceptron.
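A minimal sketch of the Perceptron in its standard form (assuming labels in {-1, +1} and a homogeneous halfspace, i.e., no bias term); on linearly separable data with positive margin, the classical mistake bound guarantees termination.

    import numpy as np

    def perceptron(X, y, max_passes=1000):
        # Classic Perceptron: cycle over the data and add y_i * x_i to
        # the weight vector on every mistake; stop after a clean pass.
        w = np.zeros(X.shape[1])
        for _ in range(max_passes):
            mistakes = 0
            for x_i, y_i in zip(X, y):
                if y_i * (w @ x_i) <= 0:   # mistake (or on the boundary)
                    w += y_i * x_i
                    mistakes += 1
            if mistakes == 0:
                break                      # consistent with all examples
        return w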
Computational Learning Theory
Lecture 6: Hardness of Learning
In this lecture we review computational lower bounds for learning.
What we will learn:
The class of DNF formulas and the hardness of proper learning.
Hardness of (improper) binary classification.
Computational Learning Theory
Lecture 7: Boosting
In this lecture we discuss the algorithmic tool of boosting, and the reduction from strong learning to weak learning.
What we will learn:
Weak learnability.
The expert advice model.
The Multiplicative Weights algorithm (sketched below).
Application to boosting.
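A minimal sketch of the Multiplicative Weights (Hedge) update in the experts model, assuming losses in [0, 1] revealed after each round (the names and the fixed learning rate eta are illustrative):

    import numpy as np

    def multiplicative_weights(loss_stream, n_experts, eta=0.1):
        # Maintain one weight per expert; play the normalized weights,
        # then shrink each weight by a factor exponential in its loss.
        w = np.ones(n_experts)
        plays = []
        for loss in loss_stream:       # loss: array of per-expert losses
            p = w / w.sum()            # distribution over experts
            plays.append(p)
            w = w * np.exp(-eta * np.asarray(loss))
        return plays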
Stochastic Convex Optimization
Lecture 8: Learning with Convex Functions
In this lecture we shift from binary classification to learning real-valued functions, and begin our discussion of convex functions.
What we will learn:
Convexity.
Convex functions.
Convex sets.
Convex learning problems.
Linear regression (sketched below).
Support Vector Machines.
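As a concrete convex learning problem, least-squares linear regression: the empirical squared loss is convex in the parameter vector w, and its minimizer has a closed form. A minimal sketch on synthetic data (all names and numbers are illustrative):

    import numpy as np

    # ERM for linear regression: minimize (1/n) * sum_i (w @ x_i - y_i)^2,
    # a convex objective whose minimizer solves the normal equations.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))                    # toy design matrix
    w_true = np.array([1.0, -2.0, 0.0, 3.0, 0.5])
    y = X @ w_true + 0.1 * rng.normal(size=200)      # noisy labels

    w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)    # least-squares solution
    print(np.round(w_hat, 2))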
Stochastic Convex Optimization
Lecture 9: Generalization Bounds (Real-Valued Losses)
In this lecture we extend our techniques for proving generalization bounds from binary losses to real-valued losses.
What we will learn:
Covering numbers.
Rademacher complexity (defined below).
Properties of Rademacher complexity.
Learning norm-bounded linear models.
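For reference, the empirical Rademacher complexity of a class \mathcal{F} on a sample S = (x_1, \dots, x_m), in one standard normalization (conventions differ by constants across texts):

    \widehat{\mathcal{R}}_{S}(\mathcal{F}) = \mathbb{E}_{\sigma}\left[ \sup_{f \in \mathcal{F}} \frac{1}{m} \sum_{i=1}^{m} \sigma_{i} f(x_{i}) \right], \qquad \sigma_{1}, \dots, \sigma_{m} \ \text{i.i.d. uniform on } \{\pm 1\}.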
Stochastic Convex Optimization
Lecture 10: Optimization Algorithms
In this lecture we analyze two optimization algorithms for learning: gradient descent (GD) and stochastic gradient descent (SGD).
What we will learn:
General setup for convex optimization.
Zero- and first-order oracles.
Projections onto convex sets.
Lipschitzness.
The Gradient Descent (GD) algorithm.
The Stochastic Gradient Descent (SGD) algorithm (sketched below).
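A minimal sketch of projected SGD with averaged iterates, the variant analyzed for convex, Lipschitz objectives (the stochastic oracle and the projection are passed in as functions; all names are illustrative):

    import numpy as np

    def sgd(stochastic_grad, project, x0, eta, T):
        # Projected stochastic gradient descent: take a step along an
        # unbiased (sub)gradient estimate, project back onto the
        # feasible convex set, and return the average iterate.
        x = np.array(x0, dtype=float)
        total = np.zeros_like(x)
        for _ in range(T):
            g = stochastic_grad(x)      # first-order stochastic oracle
            x = project(x - eta * g)    # gradient step + projection
            total += x
        return total / T                # averaging gives the O(1/sqrt(T)) rate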
Stochastic Convex Optimization
Lecture 11: The Backpropagation Algorithm
In this lecture we discuss the application of gradient-descent methods to non-convex optimization (such as neural networks).
What we will learn:
The backpropagation algorithm (sketched below).
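A minimal backpropagation sketch for a one-hidden-layer network with ReLU activation and squared loss (a generic illustration of the chain-rule computation, not necessarily the lecture's notation):

    import numpy as np

    def mlp_grad(W1, W2, x, y):
        # Forward pass: h = relu(W1 @ x), yhat = W2 @ h,
        # loss = 0.5 * ||yhat - y||^2. Cache intermediates for backward.
        z = W1 @ x
        h = np.maximum(z, 0.0)
        yhat = W2 @ h
        # Backward pass: apply the chain rule layer by layer.
        dyhat = yhat - y              # d(loss)/d(yhat)
        dW2 = np.outer(dyhat, h)      # d(loss)/d(W2)
        dh = W2.T @ dyhat             # propagate through the linear layer
        dz = dh * (z > 0)             # ReLU derivative is 1 where z > 0
        dW1 = np.outer(dz, x)         # d(loss)/d(W1)
        return dW1, dW2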
Stochastic Convex Optimization
Lecture 12: Kernel Methods
In this lecture we revisit kernel methods and show how they can be used in the framework of convex problems.
What we will learn:
Reproducing Kernel Hilbert Spaces.
Kernelized SGD (see the sketch below).
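A sketch of how SGD can be kernelized, shown here for the hinge loss (an illustrative instance; the lecture may use a different loss or add regularization). By the representer theorem, the iterate stays in the span of kernel functions at previously seen points, so it suffices to store coefficient/point pairs:

    import numpy as np

    def rbf(x, z, gamma=1.0):
        # Gaussian kernel k(x, z) = exp(-gamma * ||x - z||^2).
        return np.exp(-gamma * np.sum((x - z) ** 2))

    def kernelized_sgd(stream, eta=0.1, kernel=rbf):
        # The iterate is f(x) = sum_j alpha_j * k(x_j, x); SGD on the
        # hinge loss adds eta * y_t * k(x_t, .) whenever the margin
        # requirement y_t * f(x_t) >= 1 is violated.
        support = []                                  # list of (alpha, x)
        for x_t, y_t in stream:                       # labels in {-1, +1}
            f_xt = sum(a * kernel(xs, x_t) for a, xs in support)
            if y_t * f_xt < 1:                        # hinge subgradient step
                support.append((eta * y_t, x_t))
        return support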
Online Learning
Lecture 13: Online Learning
In this lecture we introduce the model of online learning.
What we will learn:
Problem setup.
Examples: prediction from expert advice, online shortest path, and portfolio selection.
The Online Gradient Descent (OGD) algorithm (sketched below).
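A minimal sketch of OGD with the standard decreasing step size eta_t = eta0 / sqrt(t), which yields O(sqrt(T)) regret for convex Lipschitz losses over a bounded set (all names are illustrative; each round's loss gradient is revealed only after the play):

    import numpy as np

    def ogd(loss_grads, project, x0, eta0=1.0):
        # loss_grads: sequence of callables; loss_grads[t](x) returns the
        # (sub)gradient of round t's convex loss at the played point x.
        x = np.array(x0, dtype=float)
        plays = []
        for t, grad in enumerate(loss_grads, start=1):
            plays.append(x.copy())                    # play x_t
            g = grad(x)                               # adversary reveals loss
            x = project(x - (eta0 / np.sqrt(t)) * g)  # step and project
        return plays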
Online Learning
Lecture 14: Partial Feedback
In this lecture we extend the online learning model to the setting of partial feedback.
What we will learn:
The multi-armed bandit problem.
The Exp3 algorithm (sketched below).
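A minimal sketch of Exp3 in its classic form (details such as the exploration/learning-rate coupling vary by source): only the pulled arm's reward is observed, so unobserved rewards are replaced by importance-weighted estimates, which are unbiased under the sampling distribution.

    import numpy as np

    def exp3(pull, n_arms, T, gamma=0.1, seed=0):
        # pull(arm) returns the reward in [0, 1] of the chosen arm; the
        # rewards of the other arms stay hidden (bandit feedback).
        rng = np.random.default_rng(seed)
        w = np.ones(n_arms)
        for _ in range(T):
            p = (1 - gamma) * w / w.sum() + gamma / n_arms  # forced exploration
            arm = rng.choice(n_arms, p=p)
            r = pull(arm)
            r_hat = r / p[arm]          # importance-weighted reward estimate
            w[arm] *= np.exp(gamma * r_hat / n_arms)
        return w / w.sum()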