Montréal Machine Learning and Optimization (MTL MLOpt) is a group of researchers living and working in Montréal whose research loosely spans machine learning and mathematical optimization. Most, though not all, members are affiliated with Mila, where we also hold our physical meetings (in the pre-apocalyptic era). The group includes researchers from the Université de Montréal, McGill, Google Brain, Samsung SAIT AI Lab (SAIL) Montreal, Facebook AI Research Montréal (FAIR), and Microsoft Research Montréal.

Our meetings typically consist of a presentation of a member's latest work, followed by a discussion that often leads to productive collaborations. Meetings are open to the public. Join our mailing group to receive the latest meeting announcements.

Organizers for 2019-2020 meetings: Reyhane Askari, Nicolas Le Roux, Ioannis Mitliagkas

You can find public videos from our seminar here.

Schedule of talks

Our public talks are typically held every other Wednesday.

Fall 2020

November 18th, Lorenzo Rosasco
November 4th, Ashia Wilson
October 21st, Nicolas Loizou
October 7th, Karolina Dziugaite
September 23rd, Aude Genevay
September 9th, Geoffrey Negiar

Summer 2020

August 26th, Hanie Sedghi
August 12th, Kamalika Chaudhuri
August 7th, Rachel Ward
July 31st, Costis Daskalakis
July 3rd, Rachael Tappenden, Accelerated Gradient Methods with Optimality Certificates [paper]
June 5th, Francis Bach, On the effectiveness of Richardson Extrapolation in Machine Learning [paper] [video]

Spring 2020

May 22nd, Tim Hoheisel, Cone-Convexity and Composite Functions [paper]
May 8th, Peter Richtarik, On Second Order Methods and Randomness [paper] [video]
May 1st, Adam Oberman, Accelerated stochastic gradient descent: convergence rate and empirical results [paper]
April 17th, Mark Schmidt, Faster Algorithms for Deep Learning? [video]

Winter 2020

March 27th, Chris Finlay, How to train your Neural ODE [paper]
March 13th, Ryan D'Orazio, Alternative Function Approximation Parameterizations for Solving Games [paper]
February 28th, Mido Assran, On the convergence of Nesterov's Accelerated Gradient Method in stochastic settings
February 14th, Damien Scieur, Acceleration through spectral density estimation (v2) and Universal average-case optimality of Polyak momentum

People

Professors and research scientists

Ioannis Mitliagkas (Université de Montréal)
Nicolas Le Roux (Google Brain, McGill)
Simon Lacoste-Julien (Université de Montréal, Samsung)
Adam Oberman (McGill)
Fabian Pedregosa (Google Brain)
Damien Scieur (Samsung)
Courtney Paquette (Google Brain, McGill)
Gauthier Gidel (Université de Montréal)
Michael Rabbat (Facebook, McGill)
Reza Babanezhad (Samsung)
Tim Hoheisel (McGill)

Postdocs and students

All students are welcome to attend our meetings. The list below includes students who have presented their work to the group or helped organize it.

Maxime Laborde (Postdoc, McGill)
Sharan Vaswani (Postdoc, Mila)
Nicolas Loizou (Postdoc, Mila)
Yakov Vaisbourd (Postdoc, McGill)
Reyhane Askari (PhD student, Université de Montréal)
Mariana Oliveira Prazeres (PhD student, McGill)
Adam Ibrahim (PhD student, Université de Montréal)
Brady Neal (PhD student, Université de Montréal)
Aristide Baratin (PhD student, Université de Montréal)
Mido Assran (PhD student, McGill)
Ryan D'Orazio (MSc student, University of Alberta)
Dominic Richards (FAIR, University of Oxford)

Alumni

Chris Finlay (Postdoc, McGill); now a research scientist at Deep Render
Gauthier Gidel (PhD student, Université de Montréal); now faculty at Mila/UdeM
Waïss Azizian (Intern, Mila); now at ENS Paris
Sanae Lotfi (MSc, Polytechnique Montréal); now a PhD student at NYU
Giulia Zarpellon (PhD student, Polytechnique Montréal)

Talks from past seasons

Fall 2019

  • Acceleration through spectral density estimation, Fabian Pedregosa, 2019/12/06
  • Adaptive regularization with inexact gradient for machine learning, Sanae Lotfi, 2019/11/22
  • Cross-over session with Mila's RL reading group, 2019/11/08
  • Stochastic Optimization, Nicolas Loizou, 2019/10/25
  • Path Length Bounds for Gradient Descent and Flow [paper], Aaditya Ramdas, 2019/10/11
  • Implicit regularization in deep learning: a view from function space, Aristide Baratin, 2019/10/11
  • The Bias-Variance tradeoff in neural networks [paper], Brady Neal, 2019/09/13

Summer 2019

  • Painless Stochastic Gradient: Interpolation, Line-Search, and Convergence Rates (paper), Sharan Vaswani, 2019/08/30
  • From maximum likelihood to classification accuracy, Nicolas Le Roux, 2019/08/16
  • The duality gap technique (Diakonikolas, Orecchia), presented by Damien Scieur, 2019/08/02
  • Extragradient and current questions on game optimization, Waïss Azizian, 2019/07/19
  • Lower bounds and Conditioning of Differentiable Games (paper), Adam Ibrahim, 2019/07/05
  • ODE for Nesterov acceleration in the stochastic case, Maxime Laborde, 2019/07/05
  • Vector field perspective on GANs, Gauthier Gidel, 2019/06/21
  • Methods for adaptive SGD, Mariana Oliveira Prazeres, 2019/06/21