Adaptive Online Gradient Descent

Subject

Computer Sciences
Statistics and Probability

Abstract

We study the rates of growth of the regret in online convex optimization. First, we show that a simple extension of the algorithm of Hazan et al. eliminates the need for a priori knowledge of the lower bound on the second derivatives of the observed functions. We then provide an algorithm, Adaptive Online Gradient Descent, which interpolates between the results of Zinkevich for linear functions and of Hazan et al. for strongly convex functions, achieving intermediate regret rates between √T and log T. Furthermore, we show strong optimality of the algorithm. Finally, we provide an extension of our results to general norms.
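
For intuition, the following is a minimal sketch, not the paper's exact adaptive regularization scheme, of projected online gradient descent with a step size that adapts to the strong convexity observed so far: when the accumulated curvature H_{1:t} is zero, the step size falls back to a Θ(1/√t) schedule (Zinkevich's √T regret regime); once curvature accumulates, it scales as 1/H_{1:t} (the log T regime of Hazan et al.). The feasible set, the oracle grad_and_curvature, and all names here are illustrative assumptions, not taken from the paper.

import numpy as np

def project_ball(x, radius):
    """Euclidean projection onto {x : ||x|| <= radius} (assumed feasible set)."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def adaptive_ogd(grad_and_curvature, T, dim, radius=1.0):
    """Sketch: online gradient descent with curvature-adaptive step sizes.

    grad_and_curvature(t, x) must return (g_t, h_t): the gradient of the
    round-t loss at x and an estimate h_t >= 0 of its strong convexity
    (h_t = 0 for merely convex, e.g. linear, losses).
    """
    x = np.zeros(dim)
    cum_curvature = 0.0  # H_{1:t}, the strong convexity seen so far
    iterates = []
    for t in range(1, T + 1):
        iterates.append(x.copy())
        g, h = grad_and_curvature(t, x)
        cum_curvature += h
        # Interpolate between the two classical schedules:
        #   1/H_{1:t} once strong convexity accumulates (log T regret),
        #   radius/sqrt(t) otherwise (sqrt(T) regret, as in Zinkevich).
        eta = 1.0 / cum_curvature if cum_curvature > 0 else radius / np.sqrt(t)
        x = project_ball(x - eta * g, radius)
    return iterates

# Example usage with 1-strongly convex quadratic losses f_t(x) = 0.5*||x - z_t||^2,
# whose gradient at x is (x - z_t):
rng = np.random.default_rng(0)
targets = rng.normal(size=(100, 5))
xs = adaptive_ogd(lambda t, x: (x - targets[t - 1], 1.0), T=100, dim=5)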

Publication date

2007-06-14
