A note on the optimal convergence rate of descent methods with fixed step sizes for smooth strongly convex functions

  • Based on a result by Taylor et al. (J Optim Theory Appl 178(2):455–476, 2018) on the attainable convergence rate of gradient descent for smooth and strongly convex functions in terms of function values, an elementary convergence analysis for general descent methods with fixed step sizes is presented. It covers general variable metric methods, gradient-related search directions under angle and scaling conditions, as well as inexact gradient methods. In all cases, optimal rates are obtained.

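As a purely illustrative companion to the abstract above (not code from the paper), the following minimal Python sketch runs gradient descent with a fixed step size on a smooth, strongly convex quadratic. The test matrix, the problem dimension, and the step-size choice alpha = 2/(L + mu) are assumptions made only for this example; with that step size, the function-value gap is known to contract by at least the factor ((L - mu)/(L + mu))^2 per iteration, which matches the optimal fixed-step rate for gradient descent in terms of function values that the abstract refers to.

```python
# Illustrative sketch (assumptions: quadratic test problem, step size 2/(L+mu)):
# fixed-step gradient descent on f(x) = 0.5 * x'Ax with A symmetric positive
# definite, so mu and L are the smallest and largest eigenvalues of A and
# the minimum value is f* = 0 at x* = 0.
import numpy as np

rng = np.random.default_rng(0)
n = 20

# Random symmetric positive definite matrix as test problem (assumed data).
Q = rng.standard_normal((n, n))
A = Q @ Q.T + np.eye(n)

eigs = np.linalg.eigvalsh(A)          # ascending eigenvalues
mu, L = eigs[0], eigs[-1]
alpha = 2.0 / (L + mu)                # fixed step size
rate = ((L - mu) / (L + mu)) ** 2     # predicted per-step contraction factor


def f(x):
    return 0.5 * x @ (A @ x)


def grad(x):
    return A @ x


x = rng.standard_normal(n)
for k in range(25):
    fx_old = f(x)
    x = x - alpha * grad(x)           # fixed-step gradient descent update
    print(f"iter {k:2d}: f = {fx_old:.3e}, "
          f"observed ratio = {f(x) / fx_old:.4f} <= bound {rate:.4f}")
```

For this quadratic, the observed per-iteration ratio f(x_{k+1})/f(x_k) stays below the bound ((L - mu)/(L + mu))^2, and the bound is attained when the iterate is aligned with the extreme eigenvectors of A.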
Metadata
Author: André Uschmajew, Bart Vandereycken
URN: urn:nbn:de:bvb:384-opus4-1031139
Frontdoor URL: https://opus.bibliothek.uni-augsburg.de/opus4/103113
ISSN: 0022-3239
ISSN: 1573-2878
Parent Title (English): Journal of Optimization Theory and Applications
Publisher: Springer
Place of publication: Berlin
Type: Article
Language: English
Year of First Publication: 2022
Publishing Institution: Universität Augsburg
Release Date: 2023/03/23
Tags: Applied Mathematics; Management Science and Operations Research; Control and Optimization
Volume: 194
Issue: 1
First Page: 364
Last Page: 373
DOI: https://doi.org/10.1007/s10957-022-02032-z
Institutes: Mathematisch-Naturwissenschaftlich-Technische Fakultät
Mathematisch-Naturwissenschaftlich-Technische Fakultät / Institut für Mathematik
Mathematisch-Naturwissenschaftlich-Technische Fakultät / Institut für Mathematik / Lehrstuhl für Mathematical Data Science
Dewey Decimal Classification: 5 Natural sciences and mathematics / 51 Mathematics / 510 Mathematics
Licence: CC BY 4.0: Creative Commons Attribution (with Print on Demand)