Aleksandr Beznosikov
On biased compression for distributed learning
A Beznosikov, S Horváth, P Richtárik, M Safaryan
NeurIPS 2020, Workshop on Scalability, Privacy, and Security in Federated …, 2020
Decentralized distributed optimization for saddle point problems
A Rogozin, A Beznosikov, D Dvinskikh, D Kovalev, P Dvurechensky, ...
arXiv preprint arXiv:2102.07758, 2021
Derivative-free method for composite optimization with applications to decentralized distributed optimization
A Beznosikov, E Gorbunov, A Gasnikov
IFAC-PapersOnLine 53 (2), 4038-4043, 2020
Recent theoretical advances in decentralized distributed convex optimization
E Gorbunov, A Rogozin, A Beznosikov, D Dvinskikh, A Gasnikov
High-Dimensional Optimization and Probability: With a View Towards Data …, 2022
Distributed Saddle-Point Problems: Lower Bounds, Optimal and Robust Algorithms
A Beznosikov, V Samokhin, A Gasnikov
arXiv preprint arXiv:2010.13112, 2021
Gradient-free methods with inexact oracle for convex-concave stochastic saddle-point problem
A Beznosikov, A Sadiev, A Gasnikov
Mathematical Optimization Theory and Operations Research: 19th International …, 2020
Solving smooth min-min and min-max problems by mixed oracle algorithms
E Gladin, A Sadiev, A Gasnikov, P Dvurechensky, A Beznosikov, ...
Mathematical Optimization Theory and Operations Research: Recent Trends …, 2021
Decentralized local stochastic extra-gradient for variational inequalities
A Beznosikov, P Dvurechensky, A Koloskova, V Samokhin, SU Stich, ...
arXiv preprint arXiv:2106.08315, 2021
Distributed saddle-point problems under data similarity
A Beznosikov, G Scutari, A Rogozin, A Gasnikov
Advances in Neural Information Processing Systems 34, 8172-8184, 2021
Zeroth-order algorithms for smooth saddle-point problems
A Sadiev, A Beznosikov, P Dvurechensky, A Gasnikov
Mathematical Optimization Theory and Operations Research: Recent Trends …, 2021
Stochastic gradient descent-ascent: Unified theory and new efficient methods
A Beznosikov, E Gorbunov, H Berard, N Loizou
arXiv preprint arXiv:2202.07262, 2022
Optimal algorithms for decentralized stochastic variational inequalities
D Kovalev, A Beznosikov, A Sadiev, M Persiianov, P Richtárik, ...
arXiv preprint arXiv:2202.02771, 2022
Optimal gradient sliding and its application to optimal distributed optimization under similarity
D Kovalev, A Beznosikov, ED Borodich, A Gasnikov, G Scutari
Advances in Neural Information Processing Systems, 2022
Near-optimal decentralized algorithms for saddle point problems over time-varying networks
A Beznosikov, A Rogozin, D Kovalev, A Gasnikov
Optimization and Applications: 12th International Conference, OPTIMA 2021 …, 2021
The power of first-order smooth optimization for black-box non-smooth problems
A Gasnikov, A Novitskii, V Novitskii, F Abdukhakimov, ...
International Conference on Machine Learning, 7241-7265, 2022
Decentralized personalized federated min-max problems
E Borodich, A Beznosikov, A Sadiev, V Sushko, N Savelyev, M Takáč, ...
arXiv preprint arXiv:2106.07289, 2021
Smooth Monotone Stochastic Variational Inequalities and Saddle Point Problems--Survey
A Beznosikov, B Polyak, E Gorbunov, D Kovalev, A Gasnikov
arXiv preprint arXiv:2208.13592, 2022
Decentralized personalized federated learning: Lower bounds and optimal algorithm for all personalization modes
A Sadiev, E Borodich, A Beznosikov, D Dvinskikh, S Chezhegov, ...
EURO Journal on Computational Optimization 10, 100041, 2022
Distributed methods with compressed communication for solving variational inequalities, with theoretical guarantees
A Beznosikov, P Richtárik, M Diskin, M Ryabinin, A Gasnikov
arXiv preprint arXiv:2110.03313, 2021
One-point gradient-free methods for composite optimization with applications to distributed optimization
I Stepanov, A Voronov, A Beznosikov, A Gasnikov
arXiv preprint arXiv:2107.05951, 2021