Mingxuan Wang
ByteDance LLM Team
Verified email at bytedance.com - Homepage
Title · Cited by · Year
On the sentence embeddings from pre-trained language models
B Li, H Zhou, J He, M Wang, Y Yang, L Li
arXiv preprint arXiv:2011.05864, 2020
595 · 2020
Deep semantic role labeling with self-attention
Z Tan, M Wang, J Xie, Y Chen, X Shi
Proceedings of the AAAI conference on artificial intelligence 32 (1), 2018
390 · 2018
Contrastive learning for many-to-many multilingual neural machine translation
X Pan, M Wang, L Wu, L Li
arXiv preprint arXiv:2105.09501, 2021
167 · 2021
Towards Making the Most of BERT in Neural Machine Translation
JYMWH Zhou, CZWZY Yu, L Li
158* · 2020
Encoding source language with convolutional neural network for machine translation
F Meng, Z Lu, M Wang, H Li, W Jiang, Q Liu
arXiv preprint arXiv:1503.01838, 2015
145 · 2015
Glancing transformer for non-autoregressive neural machine translation
L Qian, H Zhou, Y Bao, M Wang, L Qiu, W Zhang, Y Yu, L Li
arXiv preprint arXiv:2008.07905, 2020
139 · 2020
Pre-training multilingual neural machine translation by leveraging alignment information
Z Lin, X Pan, M Wang, X Qiu, J Feng, H Zhou, L Li
arXiv preprint arXiv:2010.03142, 2020
115 · 2020
Syntax-based deep matching of short texts
M Wang, Z Lu, H Li, Q Liu
arXiv preprint arXiv:1503.02427, 2015
101 · 2015
A hierarchy-to-sequence attentional neural machine translation model
J Su, J Zeng, D Xiong, Y Liu, M Wang, J Xie
IEEE/ACM Transactions on Audio, Speech, and Language Processing 26 (3), 623-632, 2018
98 · 2018
Imitation learning for non-autoregressive neural machine translation
B Wei, M Wang, H Zhou, J Lin, J Xie, X Sun
arXiv preprint arXiv:1906.02041, 2019
94 · 2019
STEMM: Self-learning with speech-text manifold mixup for speech translation
Q Fang, R Ye, L Li, Y Feng, M Wang
arXiv preprint arXiv:2203.10426, 2022
80 · 2022
Learning language specific sub-network for multilingual machine translation
Z Lin, L Wu, M Wang, L Li
arXiv preprint arXiv:2105.09259, 2021
79 · 2021
Memory-enhanced decoder for neural machine translation
M Wang, Z Lu, H Li, Q Liu
arXiv preprint arXiv:1606.02003, 2016
79 · 2016
End-to-end speech translation via cross-modal progressive training
R Ye, M Wang, L Li
arXiv preprint arXiv:2104.10380, 2021
69 · 2021
Cross-modal contrastive learning for speech translation
R Ye, M Wang, L Li
arXiv preprint arXiv:2205.02444, 2022
68 · 2022
Learning shared semantic space for speech-to-text translation
C Han, M Wang, H Ji, L Li
arXiv preprint arXiv:2105.03095, 2021
67 · 2021
Listen, understand and translate: Triple supervision decouples end-to-end speech-to-text translation
Q Dong, R Ye, M Wang, H Zhou, S Xu, B Xu, L Li
Proceedings of the AAAI Conference on Artificial Intelligence 35 (14), 12749 …, 2021
58 · 2021
Rethinking document-level neural machine translation
Z Sun, M Wang, H Zhou, C Zhao, S Huang, J Chen, L Li
arXiv preprint arXiv:2010.08961, 2020
56 · 2020
Deep neural machine translation with linear associative unit
M Wang, Z Lu, J Zhou, Q Liu
arXiv preprint arXiv:1705.00861, 2017
52 · 2017
LightSeq: A high performance inference library for transformers
X Wang, Y Xiong, Y Wei, M Wang, L Li
arXiv preprint arXiv:2010.13887, 2020
50 · 2020
Articles 1–20