Talks schedule

Year 2024

Nov 20, 13:00 | Mohammad Pedramfar | From Linear to Linearizable Optimization | Mila, Auditorium 2
Nov 13, 12:30 | Alireza Mousavi-Hosseini | Learning and Optimization with the Mean-Field Langevin Dynamics | Mila, Auditorium 1
Oct 30, 13:00 | Lucas Maes and Helen (Tianyue) Zhang | Understanding Adam Requires Better Rotation Dependent Assumptions | Mila, Auditorium 2
Jul 17, 13:30 | Cristóbal Guzmán | The Role of Sparsity on Differentially Private Learning | Mila, Auditorium 2
Jun 26, 13:00 | Samuel Vaiter | (Automatic) Iterative Differentiation: some old (& new) results | Mila, Auditorium 2
Jun 19, 13:00 | El Mahdi El Mhamdi | On the security of large AI models | Mila, Auditorium 1 (online speaker)
Jun 11, 13:00 | Tristan Deleu | Discrete Probabilistic Inference as Control in Multi-path Environments | Mila, Auditorium 1
May 31, 13:00 | Tara Akhound-Sadegh and Jarrid Rector-Brooks | Iterated Denoising Energy Matching for Sampling from Boltzmann Densities | Mila, Auditorium 2
Apr 26, 13:00 | Elliot Paquette | Random matrix theory for high-dimensional optimization, and an application to scaling laws | Microsoft Research Labs
Apr 17, 13:00 | Gauthier Gidel | On the stability of iterative retraining of generative models on their own data | Microsoft Research Labs
Apr 03, 13:00 | Adam Oberman | Theoretical insights into self-supervised feature representation learning | Microsoft Research Labs
Mar 20, 13:00 | Jose Gallego-Posada | On PI controllers for updating Lagrange multipliers in constrained optimization | Microsoft Research Labs
Mar 06, 13:00 | Motahareh Sohrabi | Weight-Sharing Regularization | Mila, Auditorium 2
Feb 21, 13:00 | Louis Fournier | Can Forward Gradient Match Backpropagation? | Mila, Auditorium 2
Feb 09, 13:00 | Michael Rabbat | Benchmarking Neural Network Training Algorithms | Microsoft Research Labs

Year 2023

Dec 07, 13:00 | Juan Ramirez | Mitigating the Disparate Impact of Pruning with Constrained Optimization | Mila, Agora
Nov 16, 13:00 | Charles Guille-Escuret | No Wrong Turns: The Simple Geometry Of Neural Networks Optimization Paths | Mila, Agora
Oct 19, 13:30 | Courtney Paquette | Hitting the High-D(imensional) Notes: An ODE for SGD learning dynamics | Microsoft Research Labs
Jul 19, 13:00 | Bonaventure Dossou | Bridging Linguistic Frontiers: Machine Learning & NLP Innovations Empowering African Languages: Challenges, Progress, and Promising Futures | Mila, Auditorium 1
Jul 12, 13:30 | Tiffany Vlaar | Constrained and Multirate Training of Neural Networks | Mila, Auditorium 1
Jul 05, 13:30 | Guillaume Huguet | The heat operator for dimensionality reduction and optimal transport | Mila, Auditorium 1
Jun 28, 13:30 | Lyle Kim | Adaptive Federated Learning with Auto-Tuned Clients | Mila, Auditorium 1
Jun 14, 13:30 | Nicolas Le Roux | Recent advances in functional optimization | Mila, Auditorium 2
May 31, 13:30 | Tristan Deleu | Causal discovery with GFlowNets | Mila, Auditorium 2
May 03, 13:30 | Sébastien Lachapelle | Discovering Latent Structures from Data | Mila, Auditorium 1
Apr 19, 13:30 | Arna Ghosh | alpha-ReQ: Assessing Representation Quality in Self-Supervised Learning by measuring eigenspectrum decay | Mila, Agora
Mar 29, 13:30 | Alexia Jolicoeur-Martineau | PopulAtion Parameter Averaging (PAPA) | Mila, Auditorium 1
Mar 15, 13:30 | Alex Tong | Conditional Flow Matching: Simulation-Free Dynamic Optimal Transport | Mila, Auditorium 1
Feb 14, 10:30 | Marwa El Halabi | Difference of submodular minimization via DC programming | Mila, Auditorium 2

Year 2021

Jul 14, 16:00 | Mark Sellke | A Universal Law of Robustness via Isoperimetry | Room H.07
Jun 16, 16:00 | Jelena Diakonikolas | Structure in Min-Max Optimization (and How to Use It!) | Room H.07
Jun 02, 16:00 | Tong Zhang | Why Cosine Learning Rate Scheduler Works and How to Improve It | Room H.07
May 30, 16:00 | Ludwig Schmidt | Evaluating Machine (Human) Accuracy and Robustness on ImageNet | Room H.07
May 21, 16:00 | Lenka Zdeborová | Insights on Gradient-based algorithms in high-dimensional non-convex optimisation | Room H.07
May 07, 16:00 | Aaron Defazio | Why are we still using SGD in the 21st century? Adventures in large-scale deep learning | Room H.07
May 05, 16:00 | Vasilis Syrgkanis | Adversarial machine learning and instrumental variables for flexible causal modeling | Room H.07
Mar 24, 16:00 | Sébastien Bubeck | A law of robustness for two-layers neural networks | Room H.07
Mar 10, 16:00 | Margarida Carvalho | Combinatorial optimization for games | Room H.07
Feb 24, 16:00 | Anastasios Kyrillidis | Distributed learning of deep neural networks using independent subnet training | Room H.07
Feb 10, 16:00 | Lorenzo Rosasco | An implicit tour of regularization | Room H.07
Jan 27, 16:00 | Robert M. Gower | New Viewpoints, Variants and Convergence Theory for Stochastic Polyak Stepsizes | Room H.07
Jan 13, 16:00 | Panayotis Mertikopoulos | Dynamics, (min-max) optimization, and games | Room H.07

Year 2020

Dec 02, 16:00 | Nicolas Loizou | SGD for Modern Machine Learning: Practical Variants and Convergence Guarantees | Virtual
Nov 04, 16:00 | Ashia Wilson | Variational Perspectives on Machine Learning: Algorithms, Inference, and Fairness | Virtual
Oct 07, 16:00 | Karolina Dziugaite | Distribution-dependent generalization bounds for noisy, iterative learning algorithms | Virtual
Sep 23, 16:00 | Aude Genevay | Learning with entropy-regularized optimal transport | Virtual
Sep 09, 16:00 | Geoffrey Négiar | Stochastic Frank-Wolfe for Constrained Finite-Sum Minimization | Virtual
Aug 26, 16:00 | Hanie Sedghi | What is being transferred in transfer learning? | Virtual
Aug 12, 16:00 | Kamalika Chaudhuri | Challenges in Reliable Machine Learning | Virtual
Aug 07, 16:00 | Rachel Ward | Weighted Optimization: better generalization by smoother interpolation | Virtual
Jul 31, 16:00 | Costis Daskalakis | The Complexity of Min-Max Optimization | Virtual
Jul 03, 16:00 | Rachael Tappenden | Accelerated Gradient Methods with Optimality Certificates | Virtual
Jun 05, 16:00 | Francis Bach | On the effectiveness of Richardson Extrapolation in Machine Learning | Virtual
May 22, 16:00 | Tim Hoheisel | Cone-Convexity and Composite Functions | Virtual
May 08, 16:00 | Peter Richtarik | On Second Order Methods and Randomness | Virtual
May 01, 16:00 | Adam Oberman | Accelerated stochastic gradient descent: convergence rate and empirical results | Virtual
Apr 17, 16:00 | Mark Schmidt | Faster Algorithms for Deep Learning? | Virtual