A conversion theorem and minimax optimality for continuum contextual bandits
Abstract
I will talk about the contextual continuum bandits problem, in which the learner sequentially receives a side-information vector and must choose an action from a convex set, minimizing a function associated with the context. The goal is to minimize all of the underlying functions for the received contexts, which leads to a dynamic (contextual) notion of regret that is stronger than the standard static regret. Assuming that the objective functions are Hölder continuous with respect to the contexts, I will show that any algorithm achieving a sub-linear static regret can be extended to achieve a sub-linear dynamic regret. I will also discuss the case of strongly convex and smooth functions under noisy observations. Inspired by the interior point method and employing self-concordant barriers, I will present our algorithm, which achieves a sub-linear dynamic regret. Lastly, I will present a minimax lower bound with two key implications. First, no algorithm can achieve a sub-linear dynamic regret over functions that are not continuous with respect to the context. Second, for strongly convex and smooth functions, the algorithm that we propose achieves, up to a logarithmic factor, the minimax optimal rate of dynamic regret as a function of the number of queries.
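For concreteness, the two regret notions contrasted above can be sketched as follows; the notation ($c_t$ for contexts, $x_t$ for actions in the convex set $\mathcal{X}$, and $f(\cdot; c_t)$ for the loss associated with context $c_t$) is assumed here rather than taken from the abstract:

```latex
% Static regret: compare against the single best fixed action in hindsight
R^{\mathrm{stat}}_T \;=\; \sum_{t=1}^{T} f(x_t; c_t) \;-\; \min_{x \in \mathcal{X}} \sum_{t=1}^{T} f(x; c_t),
% Dynamic (contextual) regret: compare against the per-context minimizer
R^{\mathrm{dyn}}_T \;=\; \sum_{t=1}^{T} \Big( f(x_t; c_t) \;-\; \min_{x \in \mathcal{X}} f(x; c_t) \Big).
```

Since the dynamic comparator re-optimizes for every context while the static comparator uses one fixed action, $R^{\mathrm{dyn}}_T \ge R^{\mathrm{stat}}_T$, which is the sense in which the dynamic notion of regret is stronger.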