Yassir Fathullah
Prompting large language models with speech recognition abilities
Y Fathullah, C Wu, E Lakomkin, J Jia, Y Shangguan, K Li, J Guo, W Xiong, ...
arXiv preprint arXiv:2307.11795, 2023
Cited by: 29
Subsequence based deep active learning for named entity recognition
P Radmard, Y Fathullah, A Lipani
Proceedings of the 59th Annual Meeting of the Association for Computational …, 2021
Cited by: 27
Ensemble distillation approaches for grammatical error correction
Y Fathullah, MJF Gales, A Malinin
ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and …, 2021
Cited by: 13
Improved large-margin softmax loss for speaker diarisation
Y Fathullah, C Zhang, PC Woodland
ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and …, 2020
Cited by: 10
CUED at ProbSum 2023: Hierarchical Ensemble of Summarization Models
P Manakul, Y Fathullah, A Liusie, V Raina, V Raina, M Gales
arXiv preprint arXiv:2306.05317, 2023
Cited by: 7
Multi-head state space model for speech recognition
Y Fathullah, C Wu, Y Shangguan, J Jia, W Xiong, J Mahadeokar, C Liu, ...
arXiv preprint arXiv:2305.12498, 2023
Cited by: 6*
Self-distribution distillation: efficient uncertainty estimation
Y Fathullah, MJF Gales
Uncertainty in Artificial Intelligence, 663-673, 2022
Cited by: 5
End-to-End Speech Recognition Contextualization with Large Language Models
E Lakomkin, C Wu, Y Fathullah, O Kalinli, ML Seltzer, C Fuegen
arXiv preprint arXiv:2309.10917, 2023
Cited by: 3
Logit-based ensemble distribution distillation for robust autoregressive sequence uncertainties
Y Fathullah, G Xia, MJF Gales
Uncertainty in Artificial Intelligence, 582-591, 2023
Cited by: 3
Teacher-Student Training for Debiasing: General Permutation Debiasing for Large Language Models
A Liusie, Y Fathullah, MJF Gales
arXiv preprint arXiv:2403.13590, 2024
Who Needs Decoders? Efficient Estimation of Sequence-Level Attributes with Proxies
Y Fathullah, P Radmard, A Liusie, M Gales
Proceedings of the 18th Conference of the European Chapter of the …, 2024
Towards General-Purpose Speech Abilities for Large Language Models Using Unpaired Data
Y Fathullah, C Wu, E Lakomkin, J Jia, Y Shangguan, J Mahadeokar, ...
arXiv preprint arXiv:2311.06753, 2023
TODM: Train Once Deploy Many Efficient Supernet-Based RNN-T Compression For On-device ASR Models
Y Shangguan, H Yang, D Li, C Wu, Y Fathullah, D Wang, A Dalmia, ...
arXiv preprint arXiv:2309.01947, 2023
Who Needs Decoders? Efficient Estimation of Sequence-level Attributes
Y Fathullah, P Radmard, A Liusie, MJF Gales
arXiv preprint arXiv:2305.05098, 2023