
Intro. to Computational Learning Theory

This is an introductory-level course that reviews the most basic models and results in learning theory. The aim of the course is to give students a fundamental familiarity with the most well-studied models and the most basic proof techniques in the field.


What do I need to know?

The course requires familiarity with basic-level algebra and probability theory.


Selected topics we will review:

Some of the basics this course will cover include:

  • The PAC model.

    • The Empirical Risk Minimization (ERM) principle.

    • VC Dimension.

    • The Fundamental Theorem of Learning.

  • Algorithms and Limitations.

    • Perceptron (a minimal sketch appears after this list).

    • Kernel Methods.

    • Boosting.

    • Hardness.

  • Stochastic Convex Optimization.

    • The SCO model.

    • Uniform convergence generalization bounds.

    • First order optimization methods (GD and SGD).

  • Online Learning.

    • The OCO model.

    • Prediction with expert advice and the multi-armed bandit problem.
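
As a taste of the algorithmic part of the course, here is a minimal Python sketch of the Perceptron algorithm mentioned above. The toy data, variable names, and stopping rule are illustrative assumptions and are not taken from the course material.

```python
import numpy as np

def perceptron(X, y, max_epochs=100):
    """Classic Perceptron for linearly separable data.

    X: (n, d) array of feature vectors; y: (n,) array of labels in {-1, +1}.
    Returns a weight vector w with sign(<w, x_i>) == y_i for all i,
    assuming the data are separable by a hyperplane through the origin.
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(max_epochs):
        mistakes = 0
        for i in range(n):
            if y[i] * np.dot(w, X[i]) <= 0:  # mistake: move w towards y_i * x_i
                w += y[i] * X[i]
                mistakes += 1
        if mistakes == 0:  # a full pass with no mistakes: done
            break
    return w

# Tiny usage example on separable 2-D data (hypothetical numbers).
X = np.array([[1.0, 2.0], [2.0, 1.0], [-1.0, -1.5], [-2.0, -0.5]])
y = np.array([1, 1, -1, -1])
print(np.sign(X @ perceptron(X, y)))  # expected: [ 1.  1. -1. -1.]
```

The classical mistake bound for this update rule (at most (R/γ)² mistakes when the data have margin γ and lie in a ball of radius R) is the kind of result analyzed in the "Algorithms and Limitations" part of the course.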

The course includes lectures that were filmed in 2020 and lecture notes that will be updated throughout the semester.

The course is next scheduled for Semester B of 2021/2022.


Stochastic Convex Optimization

This is an advanced course in learning theory that aims to map out and understand the problem of learning in the special model of stochastic convex optimization (you may find it in the syllabus under "Advanced Topics in Machine Learning").

 

In contrast to other courses on optimization, this course focuses on optimization in the presence of a learning objective. In particular, we will learn to analyze the generalization performance of different optimization methods with respect to the (unknown) population risk.


After developing the fundamental notions and results needed to discuss convex optimization, the course will turn to the question of generalization in the context of SCO: beginning with the no-fundamental-theorem theorem, which states that learning and ERM are distinct problems. We will then continue to more recent developments that show how seemingly comparable optimization algorithms start to behave very differently once stochastic problems are considered.

We will in turn use these observations to discuss generalization and learning in real-life settings and to reason about recent attempts to explain generalization via notions such as stability, inductive bias, regularization, the geometry of the loss landscape, and others.


What do I need to know?

It is highly recommended to take an introductory course in learning theory before this course. However, the course will be self-contained, and a basic level of algebra and probability theory should suffice.


Topics we will review:

  • Basic notions in convexity and optimization.

  • Basic optimization methods.

    • Cutting plane methods.

    • Gradient Descent.

    • Stochastic Gradient Descent (see the sketch after this list).

  • Uniform-convergence bounds.

    • Covering Numbers.

    • Rademacher Complexity.

  • Stability bounds.

  • Regularization.

  • The superiority of stochastic descent methods.

  • The no-fundamental-theorem.

  • Algorithm-dependent lower bounds.
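
As a small illustration of the first-order methods listed above, the following is a minimal Python sketch of projected SGD with iterate averaging on a toy stochastic convex objective. The toy problem, step size, and function names are illustrative assumptions and are not taken from the lecture notes.

```python
import numpy as np

def sgd(grad_sample, d, T, eta, radius=None):
    """Projected SGD with iterate averaging (a standard textbook sketch).

    grad_sample(w): returns a stochastic (sub)gradient estimate at w.
    d: dimension, T: number of steps, eta: fixed step size,
    radius: optional radius for projection onto {w : ||w||_2 <= radius}.
    For convex, Lipschitz objectives over a bounded domain, the averaged
    iterate has expected excess risk O(1/sqrt(T)) when eta ~ 1/sqrt(T).
    """
    w = np.zeros(d)
    iterates = []
    for _ in range(T):
        g = grad_sample(w)
        w = w - eta * g
        if radius is not None:
            norm = np.linalg.norm(w)
            if norm > radius:  # Euclidean projection back onto the ball
                w = w * (radius / norm)
        iterates.append(w.copy())
    return np.mean(iterates, axis=0)  # averaged iterate

# Toy example: minimize E[(w - z)^2 / 2] with z ~ N(1, 1); the optimum is w = 1.
rng = np.random.default_rng(0)
grad = lambda w: w - rng.normal(loc=1.0, scale=1.0, size=1)
print(sgd(grad, d=1, T=5000, eta=0.01))  # should print a value close to 1
```

One theme of the list above is that such one-pass stochastic methods can learn in regimes where ERM fails; this gap is what the no-fundamental-theorem and the algorithm-dependent lower bounds make precise.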

The course includes lectures that were filmed in 2020 and lecture notes that will be updated throughout the semester.

The course is next scheduled for Semester A of 2021/2022.
