Speaker: El Mahdi El Mhamdi
Abstract
We will introduce the problem of securely training AI models in the presence of malicious actors during the training phase. Such actors can use corrupted data, disinformation spread through fake accounts and social-media astroturfing, or, worse, compromised machines to influence the outcome of training. The statistics and machine learning communities have proposed important solutions to secure the training of AI models, which we will review. Some of these, including our own, have been listed in a recent report by the National Institute of Standards and Technology as the state of the art for preventing model poisoning. This talk will provide a few mathematical insights into why most of these solutions fail as AI models grow larger and larger, arguing for an inevitable impossibility of securing AI models without limiting their number of parameters.
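To make the poisoning threat concrete, the sketch below contrasts naive gradient averaging with coordinate-wise median aggregation, one representative robust-aggregation defense from this literature. This is an illustrative toy, not the speaker's specific method: the worker count, gradient values, and the single Byzantine worker are all assumptions chosen for the example.

```python
import numpy as np

def average(gradients):
    # Standard aggregation: a single malicious gradient can
    # shift the mean arbitrarily far.
    return np.mean(gradients, axis=0)

def coordinate_wise_median(gradients):
    # Robust aggregation: take the median of each coordinate
    # across workers, tolerating a minority of Byzantine inputs.
    return np.median(gradients, axis=0)

rng = np.random.default_rng(0)
# Eight honest workers send small gradients near zero.
honest = [rng.normal(0.0, 0.1, size=4) for _ in range(8)]
# One compromised worker sends a huge poisoned gradient.
poisoned = honest + [np.full(4, 1e6)]

print(np.linalg.norm(average(poisoned)))                 # mean is dragged far from the honest gradients
print(np.linalg.norm(coordinate_wise_median(poisoned)))  # median stays close to them
```

The talk's argument is that the guarantees of such defenses degrade as the number of parameters grows, which this low-dimensional toy does not capture.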