Speaker: Mohammad Pedramfar
Abstract
In this talk, we define the class of linearizable/quadratizable functions, a class that extends convex and continuous DR-submodular functions in various settings. We devise a general meta-algorithm that converts algorithms for linear/quadratic optimization into algorithms that optimize linearizable/quadratizable functions, offering a unified approach to convex and DR-submodular optimization problems. We further discuss several meta-algorithms that extend these results to multiple feedback settings, including bandit and semi-bandit feedback. Leveraging this framework with different base algorithms for online linear optimization, we improve upon state-of-the-art results in almost all cases considered. We also obtain the first dynamic and adaptive regret guarantees for online continuous DR-submodular optimization.

This is joint work with Vaneet Aggarwal (Purdue University).
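To give a flavor of the reduction the abstract describes, here is a minimal sketch (not the speaker's actual meta-algorithm) of the classical Frank-Wolfe template, which turns repeated calls to a linear optimization oracle into a method for maximizing a smooth concave function over a box. The oracle `linear_oracle_box`, the step-size schedule, and the toy objective are all illustrative assumptions.

```python
import numpy as np

def linear_oracle_box(g):
    # Linear optimization oracle: argmax over [0,1]^d of <g, v>,
    # attained at a corner of the box (1 where the gradient is positive).
    return (g > 0).astype(float)

def frank_wolfe(grad_f, d, T=500):
    # Meta-algorithm sketch: each round queries only the *linear*
    # oracle at the current gradient, then takes a convex-combination
    # step toward the oracle's answer.
    x = np.zeros(d)
    for t in range(1, T + 1):
        v = linear_oracle_box(grad_f(x))  # linear subproblem
        gamma = 2.0 / (t + 2)             # standard diminishing step size
        x = x + gamma * (v - x)           # stays inside the box
    return x

# Toy concave objective f(x) = -||x - c||^2 with maximizer c in the box.
c = np.array([0.3, 0.8, 0.5])
x_hat = frank_wolfe(lambda x: -2.0 * (x - c), d=3)
```

Here `x_hat` approaches the maximizer `c`, even though the algorithm never solves anything harder than a linear problem per round; the framework in the talk generalizes this idea well beyond the concave case.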