Tuo Zhao
Assistant Professor, Georgia Tech
Patterns and rates of exonic de novo mutations in autism spectrum disorders
BM Neale, Y Kou, L Liu, A Ma’ayan, KE Samocha, A Sabo, CF Lin, ...
Nature 485 (7397), 242-245, 2012
The huge package for high-dimensional undirected graph estimation in R
T Zhao, H Liu, K Roeder, J Lafferty, L Wasserman
Journal of Machine Learning Research 13 (1), 1059-1062, 2012
SMART: Robust and efficient fine-tuning for pre-trained natural language models through principled regularized optimization
H Jiang, P He, W Chen, X Liu, J Gao, T Zhao
arXiv preprint arXiv:1911.03437 (ACL), 2019
Transformer Hawkes Process
S Zuo, H Jiang, Z Li, T Zhao, H Zha
International Conference on Machine Learning, 11692-11702, 2020
BOND: Bert-assisted open-domain named entity recognition with distant supervision
C Liang, Y Yu, H Jiang, S Er, R Wang, T Zhao, C Zhang
Proceedings of the 26th ACM SIGKDD International Conference on Knowledge …, 2020
AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning
Q Zhang, M Chen, A Bukharin, P He, Y Cheng, W Chen, T Zhao
arXiv preprint arXiv:2303.10512 (ICLR), 2023
A nonconvex optimization framework for low rank matrix estimation
T Zhao, Z Wang, H Liu
Advances in Neural Information Processing Systems 28, 2015
Automated diagnoses of attention deficit hyperactive disorder using magnetic resonance imaging
A Eloyan, J Muschelli, MB Nebel, H Liu, F Han, T Zhao, AD Barber, S Joel, ...
Frontiers in systems neuroscience 6, 61, 2012
Deep hyperspherical learning
W Liu, YM Zhang, X Li, Z Yu, B Dai, T Zhao, L Song
Advances in Neural Information Processing Systems 30, 2017
The FLARE package for high dimensional linear regression and precision matrix estimation in R
X Li, T Zhao, X Yuan, H Liu
Journal of Machine Learning Research, 2015
Efficient approximation of deep ReLU networks for functions on low dimensional manifolds
M Chen, H Jiang, W Liao, T Zhao
Advances in Neural Information Processing Systems 32, 2019
Fine-tuning pre-trained language model with weak supervision: A contrastive-regularized self-training approach
Y Yu, S Zuo, H Jiang, W Ren, T Zhao, C Zhang
arXiv preprint arXiv:2010.07835 (NAACL), 2020
Symmetry, saddle points, and global optimization landscape of nonconvex matrix factorization
X Li, J Lu, R Arora, J Haupt, H Liu, Z Wang, T Zhao
IEEE Transactions on Information Theory 65 (6), 3489-3514, 2019
Differentiable top-k with optimal transport
Y Xie, H Dai, M Chen, B Dai, T Zhao, H Zha, W Wei, T Pfister
Advances in Neural Information Processing Systems 33, 20520-20531, 2020
Nonconvex sparse learning via stochastic optimization with progressive variance reduction
X Li, R Arora, H Liu, J Haupt, T Zhao
arXiv preprint arXiv:1605.02711 (ICML), 2016
Why Do Deep Residual Networks Generalize Better than Deep Feedforward Networks? A Neural Tangent Kernel Perspective
K Huang, Y Wang, M Tao, T Zhao
Advances in Neural Information Processing Systems 33, 2698-2709, 2020
Nonparametric regression on low-dimensional manifolds using deep ReLU networks: Function approximation and statistical recovery
M Chen, H Jiang, W Liao, T Zhao
arXiv preprint arXiv:1908.01842 (IMA Information and Inference), 2019
Toward understanding the importance of noise in training neural networks
M Zhou, T Liu, Y Li, D Lin, E Zhou, T Zhao
International Conference on Machine Learning, 7594-7602, 2019
Score Approximation, Estimation and Distribution Recovery of Diffusion Models on Low-Dimensional Data
M Chen, K Huang, T Zhao, M Wang
arXiv preprint arXiv:2302.07194 (ICML), 2023
Taming sparsely activated transformer with stochastic experts
S Zuo, X Liu, J Jiao, YJ Kim, H Hassan, R Zhang, T Zhao, J Gao
arXiv preprint arXiv:2110.04260 (ICLR), 2021