Montréal Machine Learning and Optimization (MTL MLOpt) is a group of researchers living and working in Montréal.

Our research loosely spans topics in machine learning and mathematical optimization. Many of our members are affiliated with Mila, where we also held our in-person meetings (in pre-pandemic times). The group includes researchers from Université de Montréal, McGill, Google Brain, Samsung SAIT AI Lab (SAIL) Montréal, Facebook AI Research (FAIR) Montréal, and Microsoft Research Montréal.

We hold both public and internal meetings. The public meetings form a seminar series with guest speakers from around the world and are open to everyone. Our internal meetings typically comprise a presentation of a member's latest work, followed by a discussion that often leads to productive collaborations.

  • Join our mailing group to receive the latest public meeting announcements.
  • Find the videos from our seminar here.
  • Follow us on Twitter for more updates.

Organizers: Reyhane Askari Hemmat, Arinka Jancarik, Nicolas Le Roux, Gauthier Gidel, Ioannis Mitliagkas

Schedule of talks

Our public talks are typically held every two weeks on Wednesdays. To receive notifications, please sign up for our mailing group.

Summer 2021

July 14th Mark Sellke A Universal Law of Robustness via Isoperimetry [paper]
June 30th Ludwig Schmidt Evaluating Machine (Human) Accuracy and Robustness on ImageNet
June 16th Jelena Diakonikolas Structure in Min-Max Optimization (and How to Use It!)
June 2nd Tong Zhang Why Cosine Learning Rate Scheduler Works and How to Improve It

Spring 2021

May 5th Vasilis Syrgkanis Adversarial machine learning and instrumental variables for flexible causal modeling
April 21st Lenka Zdeborova Insights on Gradient-based algorithms in high-dimensional non-convex optimisation
April 7th Aaron Defazio Why are we still using SGD in the 21st century? Adventures in large-scale deep learning
March 24th Sebastien Bubeck A law of robustness for two-layers neural networks

Winter 2021

March 10th Margarida Carvalho Combinatorial optimization for games
February 24th Anastasios Kyrillidis Distributed learning of deep neural networks using independent subnet training
February 10th Lorenzo Rosasco An implicit tour of regularization
January 27th Robert M. Gower New Viewpoints, Variants and Convergence Theory for Stochastic Polyak Stepsizes
January 13th Panayotis Mertikopoulos Dynamics, (min-max) optimization, and games

Fall 2020

December 2nd Nicolas Loizou SGD for Modern Machine Learning: Practical Variants and Convergence Guarantees
November 4th Ashia Wilson Variational Perspectives on Machine Learning: Algorithms, Inference, and Fairness
October 7th Karolina Dziugaite Distribution-dependent generalization bounds for noisy, iterative learning algorithms
September 23rd Aude Genevay Learning with entropy-regularized optimal transport
September 9th Geoffrey Negiar Stochastic Frank-Wolfe for Constrained Finite-Sum Minimization

Summer 2020

August 26th Hanie Sedghi What is being transferred in transfer learning?
August 12th Kamalika Chaudhuri Challenges in Reliable Machine Learning
August 7th Rachel Ward Weighted Optimization: better generalization by smoother interpolation
July 31st Costis Daskalakis The Complexity of Min-Max Optimization
July 3rd Rachael Tappenden Accelerated Gradient Methods with Optimality Certificates [paper]
June 5th Francis Bach On the effectiveness of Richardson Extrapolation in Machine Learning [paper] [video]

Spring 2020

May 22nd Tim Hoheisel Cone-Convexity and Composite Functions [paper]
May 8th Peter Richtarik On Second Order Methods and Randomness [paper] [video]
May 1st Adam Oberman Accelerated stochastic gradient descent: convergence rate and empirical results [paper]
April 17th Mark Schmidt Faster Algorithms for Deep Learning? [video]

People

Professors and research scientists

  • Ioannis Mitliagkas (Université de Montréal)
  • Nicolas Le Roux (McGill, Université de Montréal)
  • Simon Lacoste-Julien (Université de Montréal, Samsung)
  • Adam Oberman (McGill)
  • Fabian Pedregosa (Google Brain)
  • Damien Scieur (Samsung)
  • Courtney Paquette (Google Brain, McGill)
  • Gauthier Gidel (Université de Montréal)
  • Michael Rabbat (Facebook, McGill)
  • Reza Babanezhad (Samsung)
  • Tim Hoheisel (McGill)
  • Marwa El Halabi (Samsung)
  • Margarida Carvalho (Université de Montréal)

Postdocs and students

All students are welcome to attend our meetings. The list below includes students who have presented their work to the group or have helped organize it.

  • Sharan Vaswani (Postdoc, University of Alberta)
  • Nicolas Loizou (Postdoc, Université de Montréal)
  • Manuela Girotti (Postdoc, Université de Montréal; Assistant Professor, Concordia)
  • Yakov Vaisbourd (Postdoc, McGill)
  • Kartik Ahuja (Postdoc, Mila, Université de Montréal)
  • Reyhane Askari (PhD student, Université de Montréal)
  • Mariana Oliveira Prazeres (PhD student, McGill)
  • Adam Ibrahim (PhD student, Université de Montréal)
  • Brady Neal (PhD student, Université de Montréal)
  • Aristide Baratin (PhD student, Université de Montréal)
  • Mido Assran (PhD student, McGill)
  • Ryan D'Orazio (PhD student, Université de Montréal)
  • Charles Guille-Escuret (PhD student, Université de Montréal)
  • Baptiste Goujaud (PhD student, École Polytechnique)
  • Kiwon Lee (McGill University)
  • Mansi Rankawat (PhD student, Université de Montréal)
  • Rozhin Nobahari (M.Sc. student, Université de Montréal)
  • Divyat Mahajan (M.Sc. student, Université de Montréal)
  • Mehrnaz Mofakhami (M.Sc. student, Université de Montréal)

Alumni


  • Maxime Laborde (Associate Professor — maître de conférences, University of Paris)
  • Chris Finlay (Research Scientist, Deep Render)
  • Waïss Azizian (M.Sc. student, ENS Paris)
  • Sanae Lotfi (PhD student, NYU; DeepMind Fellow)
  • Giulia Zarpellon (PhD student, Polytechnique Montréal)
  • Dominic Richards (PhD student, University of Oxford)
  • Gabriel Rioux (PhD student, Cornell)

Talks from past seasons

Winter 2020

March 27th, Chris Finlay, How to train your Neural ODE (paper)
March 13th, Ryan D'Orazio, Alternative Function Approximation Parameterizations for Solving Games (paper)
Feb. 28th, Mido Assran, On the convergence of Nesterov's Accelerated Gradient Method in stochastic settings
Feb. 14th, Damien Scieur, Acceleration through spectral density estimation V2 + Universal average-case optimality of Polyak momentum

Fall 2019

  • Acceleration through spectral density estimation, Fabian Pedregosa, 2019/12/06
  • Adaptive regularization with inexact gradient for machine learning, Sanae Lotfi, 2019/11/22
  • Cross-over session with Mila's RL reading group, 2019/11/08
  • Stochastic Optimization, Nicolas Loizou, 2019/10/25
  • Path Length Bounds for Gradient Descent and Flow (paper), Aaditya Ramdas, 2019/10/11
  • Implicit regularization in deep learning: a view from function space, Aristide Baratin, 2019/10/11
  • The Bias-Variance tradeoff in neural networks (paper), Brady Neal, 2019/09/13

Summer 2019

  • Painless Stochastic Gradient: Interpolation, Line-Search, and Convergence Rates (paper), Sharan Vaswani, 2019/08/30
  • From maximum likelihood to classification accuracy, Nicolas Le Roux, 2019/08/16
  • The duality gap technique (Diakonikolas, Orecchia), presented by Damien Scieur, 2019/08/02
  • Extragradient and current questions on game optimization, Waïss Azizian, 2019/07/19
  • Lower bounds and Conditioning of Differentiable Games (paper), Adam Ibrahim, 2019/07/05
  • ODE for Nesterov acceleration stochastic case, Maxime Laborde, 2019/07/05
  • Vector field perspective on GANs, Gauthier Gidel, 2019/06/21
  • Methods for adaptive SGD, Mariana Oliveira Prazeres, 2019/06/21