Lingpeng Kong
Google DeepMind, The University of Hong Kong
Verified email at cs.hku.hk - Homepage
Title
Cited by
Year
A Dependency Parser for Tweets
L Kong, N Schneider, S Swayamdipta, A Bhatia, C Dyer, NA Smith
EMNLP 2014, 2014
286 · 2014
DyNet: The dynamic neural network toolkit
G Neubig, C Dyer, Y Goldberg, A Matthews, W Ammar, A Anastasopoulos, ...
arXiv preprint arXiv:1701.03980, 2017
261 · 2017
Random feature attention
H Peng, N Pappas, D Yogatama, R Schwartz, NA Smith, L Kong
arXiv preprint arXiv:2103.02143, 2021
206 · 2021
What do recurrent neural network grammars learn about syntax?
A Kuncoro, M Ballesteros, L Kong, C Dyer, G Neubig, NA Smith
arXiv preprint arXiv:1611.05774, 2016
140 · 2016
Episodic memory in lifelong language learning
C de Masson D'Autume, S Ruder, L Kong, D Yogatama
Advances in Neural Information Processing Systems 32, 2019
137 · 2019
Segmental recurrent neural networks
L Kong, C Dyer, NA Smith
arXiv preprint arXiv:1511.06018, 2015
129 · 2015
UnifiedSKG: Unifying and multi-tasking structured knowledge grounding with text-to-text language models
T Xie, CH Wu, P Shi, R Zhong, T Scholak, M Yasunaga, CS Wu, M Zhong, ...
arXiv preprint arXiv:2201.05966, 2022
89 · 2022
cosFormer: Rethinking softmax in attention
Z Qin, W Sun, H Deng, D Li, Y Wei, B Lv, J Yan, L Kong, Y Zhong
arXiv preprint arXiv:2202.08791, 2022
87 · 2022
Distilling an ensemble of greedy dependency parsers into one MST parser
A Kuncoro, M Ballesteros, L Kong, C Dyer, NA Smith
arXiv preprint arXiv:1609.07561, 2016
80 · 2016
DiffuSeq: Sequence to sequence text generation with diffusion models
S Gong, M Li, J Feng, Z Wu, LP Kong
arXiv preprint arXiv:2210.08933, 2022
74 · 2022
Segmental recurrent neural networks for end-to-end speech recognition
L Lu, L Kong, C Dyer, NA Smith, S Renals
arXiv preprint arXiv:1603.00223, 2016
73 · 2016
Adaptive semiparametric language models
D Yogatama, C de Masson d’Autume, L Kong
Transactions of the Association for Computational Linguistics 9, 362-373, 2021
71 · 2021
Learning and evaluating general linguistic intelligence
D Yogatama, CM d'Autume, J Connor, T Kocisky, M Chrzanowski, L Kong, ...
arXiv preprint arXiv:1901.11373, 2019
63 · 2019
A contrastive framework for neural text generation
Y Su, T Lan, Y Wang, D Yogatama, L Kong, N Collier
Advances in Neural Information Processing Systems 35, 21548-21561, 2022
58 · 2022
Document context language models
Y Ji, T Cohn, L Kong, C Dyer, J Eisenstein
arXiv preprint arXiv:1511.03962, 2015
41 · 2015
Bayesian Optimization of Text Representations
D Yogatama, L Kong, NA Smith
Proceedings of the Conference on Empirical Methods in Natural Language …, 2015
41 · 2015
A mutual information maximization perspective of language representation learning
L Kong, C de Masson d'Autume, L Yu, W Ling, Z Dai, D Yogatama
8th International Conference on Learning Representations, ICLR, 2020
39 · 2020
ZeroGen: Efficient zero-shot learning via dataset generation
J Ye, J Gao, Q Li, H Xu, J Feng, Z Wu, T Yu, L Kong
arXiv preprint arXiv:2202.07922, 2022
38 · 2022
End-to-end neural segmental models for speech recognition
H Tang, L Lu, L Kong, K Gimpel, K Livescu, C Dyer, NA Smith, S Renals
IEEE Journal of Selected Topics in Signal Processing 11 (8), 1254-1264, 2017
37 · 2017
SyntaxNet models for the CoNLL 2017 shared task
C Alberti, D Andor, I Bogatyy, M Collins, D Gillick, L Kong, T Koo, J Ma, ...
arXiv preprint arXiv:1703.04929, 2017
36 · 2017
Articles 1–20