Foundations of Machine Learning


Basic Information

Lectures: Wednesday, 16:00 to 18:00, Room 024, E1 4 (the MPI-INF building)
Coordinators:Yonggang Jiang, Kurt Mehlhorn, Adam Polak, Roohani Sharma, Hans Simon, Shreyas Srinivas
First lecture: April 12
Exam: No exam
Registration: Please join the mailing list. The LSF registration deadline will be announced on April 26.
Prerequisites: Basic mathematical fluency is important: you must be comfortable writing proofs. Fluency in basic calculus, linear algebra, and probability is very useful; the book's appendix covers some of the essential ideas.


Course Mechanics: This course is organised as a reading group that introduces students to the theoretical foundations of machine learning through reading and discussing the book "Foundations of Machine Learning" by Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar. The course is divided into two parts: lectures and student-led presentations.

During the first three weeks, the coordinators of the course will give lectures based on the first four chapters of the book. The lectures will cover the PAC Learning model, notions like VC-Dimension and Rademacher complexity, and Regularisation (Chapters 1-4).
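As a small taste of these opening lectures, here is a standard sample-complexity bound from Chapter 2 of the book, stated for a finite hypothesis set and a consistent learner (notation as in the book):

```latex
% PAC learning bound, finite hypothesis set H, consistent case:
% if the number of i.i.d. training samples m satisfies
m \;\ge\; \frac{1}{\epsilon}\left(\ln|H| + \ln\frac{1}{\delta}\right),
% then, with probability at least 1 - \delta over the draw of the sample,
% every hypothesis in H that is consistent with the training sample
% has generalization error at most \epsilon.
```

Results of this form, and their extensions to infinite hypothesis classes via VC-dimension and Rademacher complexity, are the main subject of Chapters 2 and 3.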

In the following weeks, the course will focus on student-led presentations. Each week we aim to cover one chapter from Chapters 5, 6, 7, 8, 9, 10, 11, and 15. Two students, selected at random, will present each chapter, with some chapters potentially presented by the coordinators depending on the number of enrolled students. Students are required to read the assigned chapter before the lecture. If you have difficulties understanding the reading material prescribed for the upcoming session, please contact one or more of the organisers as soon as possible. Asking questions on the course mailing list is also encouraged.

What You Should Do: Please read the material and be prepared to give a whiteboard presentation of up to 45 minutes on the session's topics. A typical presentation will motivate the definitions, state the key results to be proved, explain how to prove them, and summarise the key takeaways. Providing a conceptual overview is encouraged. Starting from the first week, the chapters to be read for the next session will be announced in class. We do not expect very polished presentations. Further, you may assume that your audience is familiar with the content and will offer help if you get stuck somewhere; we are, after all, learning the topic together. The key thing is that you put in your best effort at understanding the content, and either discuss it amongst yourselves or ask one of the coordinators if something is unclear.

What this course is not: This is not a course on implementing machine learning algorithms; in this sense, it differs from the introductory courses on machine learning. Here, we focus on mathematical models of learning algorithms and on obtaining provable guarantees on their predictive power and complexity measures, typically sample complexity. We do not cover more recent advances in the theoretical understanding of neural nets and deep learning.

Other related courses on campus: ML is a huge topic, and we cannot possibly hope to be comprehensive and cover every aspect of it. Our focus is limited to the computational and, in particular, the theoretical side. If you would like to learn about other topics and aspects, a number of other courses are being offered on campus this semester. The following is an almost certainly incomplete list:


Date     | Lecturer                       | Topic                                                                                | Reference (see below)           | Location
April 12 | Adam Polak                     | Introduction and PAC Learning                                                        | Chapters 1 and 2 (mostly 2)     | E1 4, 024
April 19 | Adam Polak, Hans Simon         | PAC Learning for Finite and Infinite Hypothesis Classes                              | Chapters 2 and 3                | E1 4, 024
April 26 | Hans Simon                     | PAC Learning for Infinite Hypothesis Classes, Intro to Hypothesis Selection, and ERM | Chapters 3.3, 3.4, 4.1, and 4.2 | E1 4, 024
May 3    | Shreyas Srinivas               | Model Selection                                                                      | Chapter 4                       | E1 4, 024
May 10   | Georgi Vitanov, Yonggang Jiang | SVM                                                                                  | Chapter 5                       | E1 5, 005
May 17   | Georgi Vitanov, Hans Simon     | Kernel Methods                                                                       | Chapter 6                       | E1 4, 024
May 24   | Kurt Mehlhorn                  | Boosting                                                                             | Chapter 7                       | E1 4, 024
May 31   | Adam Polak                     | Online Learning                                                                      | Chapter 8                       | E1 4, 024
June 7   | Adam Polak, Yonggang Jiang     | Online Learning, Ranking                                                             | Chapters 8 and 10               | E1 4, 024
June 14  | Georgi Vitanov                 | Regression                                                                           | Chapter 11                      | E1 4, 024
June 21  | Hans Simon                     | Algorithmic Stability                                                                | Chapter 14                      | E1 4, 024
June 28  | Georgi Vitanov                 | Dimensionality Reduction                                                             | Chapter 15                      | E1 4, 024
July 5   | Shreyas Srinivas               | Learning Automata and Languages                                                      | Chapter 16                      | E1 4, 024
July 19  | Yonggang Jiang                 | Reinforcement Learning                                                               | Chapter 17                      | E1 4, 024