Aleksandr Beznosikov
On biased compression for distributed learning
A Beznosikov, S Horváth, P Richtárik, M Safaryan
arXiv preprint arXiv:2002.12410, 2020
Derivative-free method for composite optimization with applications to decentralized distributed optimization
A Beznosikov, E Gorbunov, A Gasnikov
arXiv preprint arXiv:1911.10645, 2019
Gradient-Free Methods with Inexact Oracle for Convex-Concave Stochastic Saddle-Point Problem
A Beznosikov, A Sadiev, A Gasnikov
International Conference on Mathematical Optimization Theory and Operations …, 2020
Recent theoretical advances in decentralized distributed convex optimization
E Gorbunov, A Rogozin, A Beznosikov, D Dvinskikh, A Gasnikov
arXiv preprint arXiv:2011.13259, 2020
Zeroth-Order Algorithms for Smooth Saddle-Point Problems
A Sadiev, A Beznosikov, P Dvurechensky, A Gasnikov
arXiv preprint arXiv:2009.09908, 2020
Local SGD for Saddle-Point Problems
A Beznosikov, V Samokhin, A Gasnikov
arXiv preprint arXiv:2010.13112, 2020
Solving smooth min-min and min-max problems by mixed oracle algorithms
E Gladin, A Sadiev, A Gasnikov, P Dvurechensky, A Beznosikov, ...
arXiv preprint arXiv:2103.00434, 2021
Linearly Convergent Gradient-Free Methods for Minimization of Symmetric Parabolic Approximation
A Bazarova, A Beznosikov, A Gasnikov
arXiv preprint arXiv:2009.04906, 2020
One-Point Gradient-Free Methods for Smooth and Non-Smooth Saddle-Point Problems
A Beznosikov, V Novitskii, A Gasnikov
arXiv preprint arXiv:2103.00321, 2021
Decentralized Distributed Optimization for Saddle Point Problems
A Rogozin, A Beznosikov, D Dvinskikh, D Kovalev, P Dvurechensky, A Gasnikov
arXiv preprint arXiv:2102.07758, 2021