Optimal convex optimization under Tsybakov noise through reduction to active learning (Aarti Singh, Carnegie Mellon University)

2013-04-16

We demonstrate that the complexity of convex minimization is determined solely by the rate of growth of the function around its minimizer, as quantified by a Tsybakov-like noise condition (TNC) with exponent k. Specifically, we demonstrate optimal first-order stochastic optimization rates that depend precisely on the TNC exponent k and that include as special cases the classical rates for convex (k tending to infinity), strongly convex (k = 2), and uniformly convex (k > 2) optimization. Even faster rates (nearly exponential) can be attained when the exponent satisfies 1 < k < 2. Our analysis is based on a reduction of convex optimization to active learning, as both problems involve minimizing the number of queries needed to identify the minimizer or the decision boundary, respectively, using feedback gained from prior queries. Our results demonstrate that the complexity of convex optimization in d dimensions is precisely the same as the complexity of active learning in one dimension. First, we port a lower-bound proof from active learning to establish the complexity of optimizing convex functions satisfying the TNC, under both the stochastic first-order oracle and the derivative-free (zeroth-order) settings. We then demonstrate that the optimal rate for a given TNC exponent under a first-order oracle can be achieved by an epoch gradient-descent algorithm, as well as by a coordinate-descent algorithm that uses a one-dimensional active learning algorithm as a subroutine. We also show that it is possible to adapt to the unknown TNC exponent, and hence to the degree of convexity.
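
For reference, a growth condition of this type is commonly written as a lower bound on the function around its minimizer x*; the constant lambda and the Euclidean norm below are my notation for illustration and may differ in detail from the condition used in the talk:

\[
  f(x) - f(x^{*}) \;\ge\; \lambda \,\lVert x - x^{*}\rVert^{k}
  \qquad \text{for all feasible } x, \quad \lambda > 0,\ k \ge 1.
\]

With k = 2 this is the quadratic growth implied by strong convexity; larger k allows flatter behaviour near the minimizer (uniform convexity for k > 2, and only plain convexity in the limit k tending to infinity), while k closer to 1 forces sharper growth and hence an easier localization problem.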
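
To make the reduction to active learning concrete, the following is a minimal one-dimensional sketch of my own (the names noisy_bisection and grad_oracle, the query counts, and the toy oracle are illustrative assumptions, not the algorithm presented in the talk). Since the derivative of a convex function changes sign exactly at its minimizer, locating the minimizer from a noisy first-order oracle amounts to actively learning a threshold, which a repeated-query bisection can do:

import numpy as np

def noisy_bisection(grad_oracle, lo, hi, n_rounds=30, queries_per_round=50):
    """Locate the minimizer of a convex function on [lo, hi] from noisy gradient signs."""
    for _ in range(n_rounds):
        mid = 0.5 * (lo + hi)
        # Active-learning step: query the same point repeatedly and majority-vote the sign.
        votes = np.sign([grad_oracle(mid) for _ in range(queries_per_round)])
        if votes.sum() > 0:
            hi = mid  # gradient looks positive, so the minimizer lies to the left
        else:
            lo = mid  # gradient looks negative (or tied), so the minimizer lies to the right
    return 0.5 * (lo + hi)

# Toy oracle: f(x) = |x - 0.3|**1.5 grows around its minimizer with exponent k = 1.5;
# its derivative 1.5 * sign(x - 0.3) * |x - 0.3|**0.5 is observed with Gaussian noise.
rng = np.random.default_rng(0)
def grad_oracle(x):
    return 1.5 * np.sign(x - 0.3) * abs(x - 0.3) ** 0.5 + 0.1 * rng.standard_normal()

print(noisy_bisection(grad_oracle, 0.0, 1.0))  # prints a value close to 0.3

How many repeated queries each bisection step needs is governed by how quickly the gradient magnitude grows away from the minimizer, i.e. by the TNC exponent k, which is the sense in which the query complexity of one-dimensional active learning transfers to convex optimization.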
