Max Ryabinin
Together AI
Verified email at together.ai
Title · Cited by · Year
BLOOM: A 176B-Parameter Open-Access Multilingual Language Model
T Le Scao, A Fan, C Akiki, E Pavlick, S Ilić, D Hesslow, R Castagné, ...
Cited by 1628 · 2023
FlexGen: High-Throughput Generative Inference of Large Language Models with a Single GPU
Y Sheng, L Zheng, B Yuan, Z Li, M Ryabinin, B Chen, P Liang, C Ré, ...
International Conference on Machine Learning, 31094-31116, 2023
Cited by 298 · 2023
Petals: Collaborative inference and fine-tuning of large models
A Borzunov, D Baranchuk, T Dettmers, M Ryabinin, Y Belkada, ...
arXiv preprint arXiv:2209.01188, 2022
Cited by 55 · 2022
Distributed Deep Learning in Open Collaborations
M Diskin*, A Bukhtiyarov*, M Ryabinin*, L Saulnier, Q Lhoest, A Sinitsin, ...
Advances in Neural Information Processing Systems 34 (NeurIPS 2021), 2021
Cited by 53 · 2021
Towards Crowdsourced Training of Large Neural Networks using Decentralized Mixture-of-Experts
M Ryabinin, A Gusev
Advances in Neural Information Processing Systems 33 (NeurIPS 2020), 3659–3672, 2020
Cited by 51 · 2020
It's All in the Heads: Using Attention Heads as a Baseline for Cross-Lingual Transfer in Commonsense Reasoning
A Tikhonov*, M Ryabinin*
Findings of the ACL 2021, 3534–3546, 2021
Cited by 38 · 2021
Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices
M Ryabinin*, E Gorbunov*, V Plokhotnyuk, G Pekhimenko
Advances in Neural Information Processing Systems 34 (NeurIPS 2021), 2021
Cited by 35 · 2021
Distributed Inference and Fine-tuning of Large Language Models Over The Internet
A Borzunov, M Ryabinin, A Chumachenko, D Baranchuk, T Dettmers, ...
arXiv preprint arXiv:2312.08361, 2023
Cited by 31 · 2023
Mind Your Format: Towards Consistent Evaluation of In-Context Learning Improvements
A Voronov, L Wolf, M Ryabinin
arXiv preprint arXiv:2401.06766, 2024
Cited by 28 · 2024
Scaling Ensemble Distribution Distillation to Many Classes With Proxy Targets
M Ryabinin, A Malinin, M Gales
Advances in Neural Information Processing Systems 34 (NeurIPS 2021), 2021
Cited by 22 · 2021
RuCoLA: Russian Corpus of Linguistic Acceptability
V Mikhailov, T Shamardina, M Ryabinin, A Pestova, I Smurov, E Artemova
arXiv preprint arXiv:2210.12814, 2022
Cited by 21 · 2022
SWARM Parallelism: Training Large Models Can Be Surprisingly Communication-Efficient
M Ryabinin, T Dettmers, M Diskin, A Borzunov
arXiv preprint arXiv:2301.11913, 2023
Cited by 20 · 2023
Distributed methods with compressed communication for solving variational inequalities, with theoretical guarantees
A Beznosikov, P Richtárik, M Diskin, M Ryabinin, A Gasnikov
Advances in Neural Information Processing Systems 35, 14013-14029, 2022
Cited by 18 · 2022
Sequoia: Scalable and Robust Speculative Decoding
Z Chen, A May, R Svirschevski, YH Huang, M Ryabinin, Z Jia, B Chen
Advances in Neural Information Processing Systems 37 (NeurIPS 2024), 2024
Cited by 17* · 2024
Secure Distributed Training at Scale
E Gorbunov, A Borzunov, M Diskin, M Ryabinin
International Conference on Machine Learning, 7679-7739, 2022
Cited by 15 · 2022
Embedding Words in Non-Vector Space with Unsupervised Graph Learning
M Ryabinin, S Popov, L Prokhorenkova, E Voita
Empirical Methods in Natural Language Processing (EMNLP 2020), 7317–7331, 2020
Cited by 10 · 2020
Training Transformers Together
A Borzunov, M Ryabinin, T Dettmers, Q Lhoest, L Saulnier, M Diskin, ...
Proceedings of the NeurIPS 2021 Competitions and Demonstrations Track 176 …, 2022
Cited by 9 · 2022
Is This Loss Informative? Faster Text-to-Image Customization by Tracking Objective Dynamics
A Voronov, M Khoroshikh, A Babenko, M Ryabinin
Advances in Neural Information Processing Systems 36, 2024
Cited by 7* · 2024
SpecExec: Massively Parallel Speculative Decoding for Interactive LLM Inference on Consumer Devices
R Svirschevski, A May, Z Chen, B Chen, Z Jia, M Ryabinin
arXiv preprint arXiv:2406.02532, 2024
Cited by 3 · 2024
Label Privacy in Split Learning for Large Models with Parameter-Efficient Training
P Zmushko, M Mansurov, R Svirschevski, D Kuznedelev, M Ryabinin, ...
Cited by 1* · 2024
Articles 1–20