Ming Ding
Verified email at mails.tsinghua.edu.cn
GPT understands, too
X Liu, Y Zheng, Z Du, M Ding, Y Qian, Z Yang, J Tang
AI Open, 2023
GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training
J Qiu, Q Chen, Y Dong, J Zhang, H Yang, M Ding, K Wang, J Tang
KDD 2020, 2020
CogView: Mastering Text-to-Image Generation via Transformers
M Ding, Z Yang, W Hong, W Zheng, C Zhou, D Yin, J Lin, X Zou, Z Shao, ...
NeurIPS 2021, 2021
All NLP Tasks Are Generation Tasks: A General Pretraining Framework
Z Du, Y Qian, X Liu, M Ding, J Qiu, Z Yang, J Tang
ACL 2022, 2021
GLM-130B: An Open Bilingual Pre-trained Model
A Zeng, X Liu, Z Du, Z Wang, H Lai, M Ding, Z Yang, Y Xu, W Zheng, X Xia, ...
arXiv preprint arXiv:2210.02414, 2022
Cognitive graph for multi-hop reading comprehension at scale
M Ding, C Zhou, Q Chen, H Yang, J Tang
ACL 2019, 2019
Towards Knowledge-Based Recommender Dialog System
Q Chen, J Lin, Y Zhang, M Ding, Y Cen, H Yang, J Tang
Proceedings of the 2019 Conference on Empirical Methods in Natural Language …, 2019
ProNE: Fast and Scalable Network Representation Learning
J Zhang, Y Dong, Y Wang, J Tang, M Ding
Proceedings of the 28th International Joint Conference on Artificial …, 2019
Are we really making much progress? Revisiting, benchmarking, and refining heterogeneous graph neural networks
Q Lv*, M Ding*, Q Liu, Y Chen, W Feng, S He, C Zhou, J Jiang, Y Dong, ...
KDD 2021, 2021
Understanding Negative Sampling in Graph Representation Learning
Z Yang*, M Ding*, C Zhou, H Yang, J Zhou, J Tang
KDD 2020, 2020
CogVideo: Large-scale Pretraining for Text-to-Video Generation via Transformers
W Hong*, M Ding*, W Zheng, X Liu, J Tang
ICLR 2023, 2022
CogView2: Faster and Better Text-to-Image Generation via Hierarchical Transformers
M Ding, W Zheng, W Hong, J Tang
NeurIPS 2022, 2022
Semi-supervised learning on graphs with generative adversarial nets
M Ding, J Tang, J Zhang
Proceedings of the 27th ACM International Conference on Information and …, 2018
M6: A Chinese Multimodal Pretrainer
J Lin, R Men, A Yang, C Zhou, M Ding, Y Zhang, P Wang, A Wang, ...
arXiv preprint arXiv:2103.00823, 2021
CogLTX: Applying BERT to Long Texts
M Ding, C Zhou, H Yang, J Tang
NeurIPS 2020, 2020
MixGCF: An Improved Training Method for Graph Neural Network-based Recommender Systems
T Huang, Y Dong, M Ding, Z Yang, W Feng, X Wang, J Tang
KDD 2021, 2021
M6-UFC: Unifying Multi-Modal Controls for Conditional Image Synthesis
Z Zhang, J Ma, C Zhou, R Men, Z Li, M Ding, J Tang, J Zhou, H Yang
NeurIPS 2021, 2021
WuDaoCorpora: A Super Large-scale Chinese Corpora for Pre-training Language Models
S Yuan, H Zhao, Z Du, M Ding, X Liu, Y Cen, X Zou, Z Yang, J Tang
AI Open 2, 65-68, 2021
Controllable Generation from Pre-trained Language Models via Inverse Prompting
X Zou, D Yin, Q Zhong, M Ding, Z Yang, J Tang
arXiv preprint arXiv:2103.10685, 2021
ImageReward: Learning and Evaluating Human Preferences for Text-to-Image Generation
J Xu, X Liu, Y Wu, Y Tong, Q Li, M Ding, J Tang, Y Dong
NeurIPS 2023, 2023