Improved regret bounds for projection-free bandit convex optimization
International Conference on Artificial Intelligence and Statistics, 2020 • proceedings.mlr.press
Abstract
We revisit the challenge of designing online algorithms for the bandit convex optimization problem (BCO) that are also scalable to high-dimensional problems. Hence, we consider algorithms that are \textit{projection-free}, i.e., based on the conditional gradient method, whose only access to the feasible decision set is through a linear optimization oracle (as opposed to other methods, which require potentially much more computationally expensive subprocedures, such as computing Euclidean projections). We present the first such algorithm that attains $\tilde{O}(T^{3/4})$ expected regret using only $O(T)$ overall calls to the linear optimization oracle, in expectation, where $T$ is the number of prediction rounds. This improves over the $\tilde{O}(T^{4/5})$ expected regret bound recently obtained by \cite{Karbasi19}, and actually matches the current best regret bound for projection-free online learning in the \textit{full-information} setting.
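To make the "linear optimization oracle" notion concrete, here is a minimal sketch of the conditional-gradient (Frank-Wolfe) primitive the abstract contrasts with projection: a single update whose only access to the feasible set is an $\arg\min$ of a linear function. This is not the paper's bandit algorithm; the $\ell_1$-ball domain, the function names, and the standard $2/(t+2)$ step size are illustrative assumptions.

```python
import numpy as np

def linear_oracle_l1(g, radius=1.0):
    """Linear optimization oracle for the l1-ball of the given radius:
    returns argmin over the ball of <g, v>, which is always a signed vertex.
    This is the only way the method touches the feasible set."""
    i = np.argmax(np.abs(g))
    v = np.zeros_like(g)
    v[i] = -radius * np.sign(g[i])
    return v

def frank_wolfe_step(x, grad, t, radius=1.0):
    """One conditional-gradient update: call the oracle on the gradient,
    then move toward the returned vertex by a convex combination.
    No Euclidean projection is ever computed."""
    v = linear_oracle_l1(grad, radius)
    eta = 2.0 / (t + 2)  # standard step-size schedule (an assumption here)
    return (1.0 - eta) * x + eta * v
```

Because each iterate is a convex combination of points in the set, feasibility is maintained automatically; for structured domains (e.g. spectrahedra, polytopes from combinatorial problems) the linear oracle is often far cheaper than a projection, which is the scalability argument the abstract makes.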