Nan Duan
Tech Fellow, StepFun | Senior Principal Researcher, Microsoft Research (2012-2024)
Verified email at microsoft.com - Homepage
Title
Cited by
Year
Codebert: A pre-trained model for programming and natural languages
Z Feng, D Guo, D Tang, N Duan, X Feng, M Gong, L Shou, B Qin, T Liu, ...
arXiv preprint arXiv:2002.08155, 2020
3066 · 2020
Graphcodebert: Pre-training code representations with data flow
D Guo, S Ren, S Lu, Z Feng, D Tang, S Liu, L Zhou, N Duan, ...
arXiv preprint arXiv:2009.08366, 2020
1119 · 2020
Unicoder-vl: A universal encoder for vision and language by cross-modal pre-training
G Li, N Duan, Y Fang, M Gong, D Jiang
Proceedings of the AAAI Conference on Artificial Intelligence 34 (07), 11336 …, 2020
991 · 2020
Codexglue: A machine learning benchmark dataset for code understanding and generation
S Lu, D Guo, S Ren, J Huang, A Svyatkovskiy, A Blanco, C Clement, ...
arXiv preprint arXiv:2102.04664, 2021
928 · 2021
Visual chatgpt: Talking, drawing and editing with visual foundation models
C Wu, S Yin, W Qi, X Wang, Z Tang, N Duan
arXiv preprint arXiv:2303.04671, 2023
705 · 2023
Unixcoder: Unified cross-modal pre-training for code representation
D Guo, S Lu, N Duan, Y Wang, M Zhou, J Yin
arXiv preprint arXiv:2203.03850, 2022
621 · 2022
K-adapter: Infusing knowledge into pre-trained models with adapters
R Wang, D Tang, N Duan, Z Wei, X Huang, G Cao, D Jiang, M Zhou
arXiv preprint arXiv:2002.01808, 2020
617 · 2020
CLIP4Clip: An empirical study of CLIP for end to end video clip retrieval and captioning
H Luo, L Ji, M Zhong, Y Chen, W Lei, N Duan, T Li
Neurocomputing 508, 293-304, 2022
592 · 2022
Prophetnet: Predicting future n-gram for sequence-to-sequence pre-training
W Qi, Y Yan, Y Gong, D Liu, N Duan, J Chen, R Zhang, M Zhou
arXiv preprint arXiv:2001.04063, 2020
512 · 2020
Univl: A unified video and language pre-training model for multimodal understanding and generation
H Luo, L Ji, B Shi, H Huang, N Duan, T Li, J Li, T Bharti, M Zhou
arXiv preprint arXiv:2002.06353, 2020
508 · 2020
scGPT: toward building a foundation model for single-cell multi-omics using generative AI
H Cui, C Wang, H Maan, K Pang, F Luo, N Duan, B Wang
Nature Methods 21 (8), 1470-1480, 2024
408 · 2024
Agieval: A human-centric benchmark for evaluating foundation models
W Zhong, R Cui, Y Guo, Y Liang, S Lu, Y Wang, A Saied, W Chen, ...
arXiv preprint arXiv:2304.06364, 2023
400 · 2023
Codexglue: A machine learning benchmark dataset for code understanding and generation
S Lu, D Guo, S Ren, J Huang, A Svyatkovskiy, A Blanco, C Clement, ...
arXiv preprint arXiv:2102.04664, 2021
395* · 2021
Question generation for question answering
N Duan, D Tang, P Chen, M Zhou
Proceedings of the 2017 conference on empirical methods in natural language …, 2017
349 · 2017
Xglue: A new benchmark dataset for cross-lingual pre-training, understanding and generation
Y Liang, N Duan, Y Gong, N Wu, F Guo, W Qi, M Gong, L Shou, D Jiang, ...
arXiv preprint arXiv:2004.01401, 2020
344 · 2020
Clip4clip: An empirical study of clip for end to end video clip retrieval
H Luo, L Ji, M Zhong, Y Chen, W Lei, N Duan, T Li
arXiv preprint arXiv:2104.08860, 2021
338 · 2021
Nüwa: Visual synthesis pre-training for neural visual world creation
C Wu, J Liang, L Ji, F Yang, Y Fang, D Jiang, N Duan
European conference on computer vision, 720-736, 2022
332 · 2022
Critic: Large language models can self-correct with tool-interactive critiquing
Z Gou, Z Shao, Y Gong, Y Shen, Y Yang, N Duan, W Chen
arXiv preprint arXiv:2305.11738, 2023
316 · 2023
Baize: An open-source chat model with parameter-efficient tuning on self-chat data
C Xu, D Guo, N Duan, J McAuley
arXiv preprint arXiv:2304.01196, 2023
309 · 2023
Imagebert: Cross-modal pre-training with large-scale weak-supervised image-text data
D Qi, L Su, J Song, E Cui, T Bharti, A Sacheti
arXiv preprint arXiv:2001.07966, 2020
306 · 2020
Articles 1–20