Yangyang Shi
Meta
Verified email at fb.com
Title · Cited by · Year
Recurrent neural networks for language understanding
K Yao, G Zweig, MY Hwang, Y Shi, D Yu
Fourteenth Annual Conference of the International Speech Communication …, 2013
Cited by 402 · 2013
Spoken language understanding using long short-term memory neural networks
K Yao, B Peng, Y Zhang, D Yu, G Zweig, Y Shi
2014 IEEE Spoken Language Technology Workshop (SLT), 189-194, 2014
Cited by 394 · 2014
TorchAudio: Building blocks for audio and speech processing
YY Yang, M Hira, Z Ni, A Astafurov, C Chen, C Puhrsch, D Pollack, ...
ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and …, 2022
Cited by 150 · 2022
Emformer: Efficient memory transformer based acoustic model for low latency streaming speech recognition
Y Shi, Y Wang, C Wu, CF Yeh, J Chan, F Zhang, D Le, M Seltzer
ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and …, 2021
Cited by 143 · 2021
Contextual spoken language understanding using recurrent neural networks
Y Shi, K Yao, H Chen, YC Pan, MY Hwang, B Peng
IEEE International Conference on Acoustics, Speech and Signal Processing, 2015
Cited by 87 · 2015
Deep LSTM based feature mapping for query classification
Y Shi, K Yao, L Tian, D Jiang
Proceedings of the 2016 Conference of the North American Chapter of the …, 2016
Cited by 68 · 2016
Contextualized streaming end-to-end speech recognition with trie-based deep biasing and shallow fusion
D Le, M Jain, G Keren, S Kim, Y Shi, J Mahadeokar, J Chan, ...
arXiv preprint arXiv:2104.02194, 2021
Cited by 66 · 2021
LLM-QAT: Data-free quantization aware training for large language models
Z Liu, B Oguz, C Zhao, E Chang, P Stock, Y Mehdad, Y Shi, ...
arXiv preprint arXiv:2305.17888, 2023
Cited by 64 · 2023
Streaming transformer-based acoustic models using self-attention with augmented memory
C Wu, Y Wang, Y Shi, CF Yeh, F Zhang
arXiv preprint arXiv:2005.08042, 2020
Cited by 64 · 2020
Recurrent neural network language model adaptation with curriculum learning
Y Shi, M Larson, CM Jonker
Computer Speech & Language 33 (1), 136-154, 2015
Cited by 49 · 2015
Towards recurrent neural networks language models with linguistic and contextual features
Y Shi, P Wiggers, CM Jonker
Thirteenth Annual Conference of the International Speech Communication …, 2012
Cited by 49 · 2012
Weak-attention suppression for transformer based speech recognition
Y Shi, Y Wang, C Wu, C Fuegen, F Zhang, D Le, CF Yeh, ML Seltzer
arXiv preprint arXiv:2005.09137, 2020
Cited by 27 · 2020
Knowledge distillation for recurrent neural network language modeling with trust regularization
Y Shi, MY Hwang, X Lei, H Sheng
ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and …, 2019
Cited by 26 · 2019
Dissecting user-perceived latency of on-device E2E speech recognition
Y Shangguan, R Prabhavalkar, H Su, J Mahadeokar, Y Shi, J Zhou, C Wu, ...
arXiv preprint arXiv:2104.02207, 2021
Cited by 24 · 2021
Higher order iteration schemes for unconstrained optimization
Y Shi, P Pan
American Journal of Operations Research 1 (03), 73, 2011
Cited by 24 · 2011
Mining effective negative training samples for keyword spotting
J Hou, Y Shi, M Ostendorf, MY Hwang, L Xie
ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and …, 2020
Cited by 22 · 2020
Region proposal network based small-footprint keyword spotting
J Hou, Y Shi, M Ostendorf, MY Hwang, L Xie
IEEE Signal Processing Letters 26 (10), 1471-1475, 2019
Cited by 22 · 2019
Transformer in action: a comparative study of transformer-based acoustic models for large scale speech recognition applications
Y Wang, Y Shi, F Zhang, C Wu, J Chan, CF Yeh, A Xiao
ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and …, 2021
Cited by 16 · 2021
Recurrent support vector machines for slot tagging in spoken language understanding
Y Shi, K Yao, H Chen, D Yu, YC Pan, MY Hwang
Proceedings of the 2016 Conference of the North American Chapter of the …, 2016
Cited by 15 · 2016
Exploiting the succeeding words in recurrent neural network language models
Y Shi, M Larson, P Wiggers, CM Jonker
Fourteenth Annual Conference of the International Speech Communication …, 2013
Cited by 15 · 2013