Hongyi Wang
Verified email at andrew.cmu.edu - Homepage
Title · Cited by · Year
Federated Learning with Matched Averaging
H Wang, M Yurochkin, Y Sun, D Papailiopoulos, Y Khazaeni
ICLR 2020 - International Conference on Learning Representations, 2020
446 · 2020
Atomo: Communication-efficient learning via atomic sparsification
H Wang, S Sievert, S Liu, Z Charles, D Papailiopoulos, S Wright
Advances in Neural Information Processing Systems 31, 2018
258 · 2018
Draco: Byzantine-resilient distributed training via redundant gradients
L Chen, H Wang, Z Charles, D Papailiopoulos
International Conference on Machine Learning, 903-912, 2018
214* · 2018
Fedml: A research library and benchmark for federated machine learning
C He, S Li, J So, X Zeng, M Zhang, H Wang, X Wang, P Vepakomma, ...
arXiv preprint arXiv:2007.13518, 2020
208* · 2020
Attack of the tails: Yes, you really can backdoor federated learning
H Wang, K Sreenivasan, S Rajput, H Vishwakarma, S Agarwal, J Sohn, ...
Advances in Neural Information Processing Systems 33, 16070-16084, 2020
191 · 2020
A field guide to federated optimization
J Wang, Z Charles, Z Xu, G Joshi, HB McMahan, M Al-Shedivat, G Andrew, ...
arXiv preprint arXiv:2107.06917, 2021
126 · 2021
DETOX: A redundancy-based framework for faster and more robust gradient aggregation
S Rajput, H Wang, Z Charles, D Papailiopoulos
Advances in Neural Information Processing Systems 32, 2019
75 · 2019
ErasureHead: Distributed Gradient Descent without Delays Using Approximate Gradient Coding
H Wang, Z Charles, D Papailiopoulos
arXiv preprint arXiv:1901.09671, 2019
53* · 2019
The effect of network width on the performance of large-batch training
L Chen, H Wang, J Zhao, D Papailiopoulos, P Koutris
Advances in Neural Information Processing Systems 31, 2018
19 · 2018
Adaptive Gradient Communication via Critical Learning Regime Identification
S Agarwal, H Wang, K Lee, S Venkataraman, D Papailiopoulos
Proceedings of Machine Learning and Systems 3, 55-80, 2021
15 · 2021
On the utility of gradient compression in distributed training systems
S Agarwal, H Wang, S Venkataraman, D Papailiopoulos
Proceedings of Machine Learning and Systems 4, 652-672, 2022
12 · 2022
Pufferfish: communication-efficient models at no extra cost
H Wang, S Agarwal, D Papailiopoulos
Proceedings of Machine Learning and Systems 3, 365-386, 2021
12 · 2021
Recognizing actions during tactile manipulations through force sensing
G Subramani, D Rakita, H Wang, J Black, M Zinn, M Gleicher
2017 IEEE/RSJ International Conference on Intelligent Robots and Systems …, 2017
5 · 2017
Rare Gems: Finding Lottery Tickets at Initialization
K Sreenivasan, J Sohn, L Yang, M Grinde, A Nagle, H Wang, K Lee, ...
arXiv preprint arXiv:2202.12002, 2022
3 · 2022
Efficient Federated Learning on Knowledge Graphs via Privacy-preserving Relation Embedding Aggregation
K Zhang, Y Wang, H Wang, L Huang, C Yang, L Sun
arXiv preprint arXiv:2203.09553, 2022
2 · 2022
Avoiding negative transfer on a focused task with deep multi-task reinforcement learning
YLAGS Liu, H Wang, Y Liang, A Gitter
2 · 2017
MPCFormer: fast, performant and private Transformer inference with MPC
D Li, R Shao, H Wang, H Guo, EP Xing, H Zhang
arXiv preprint arXiv:2211.01452, 2022
1 · 2022
AMP: Automatically Finding Model Parallel Strategies with Heterogeneity Awareness
D Li, H Wang, E Xing, H Zhang
arXiv preprint arXiv:2210.07297, 2022
2022
Solon: Communication-efficient Byzantine-resilient Distributed Training via Redundant Gradients
L Chen, L Chen, H Wang, S Davidson, E Dobriban
arXiv preprint arXiv:2110.01595, 2021
2021
Toward Robust and Communication Efficient Distributed Machine Learning
H Wang
The University of Wisconsin-Madison, 2021
2021