MMGCN: Multimodal fusion via deep graph convolution network for emotion recognition in conversation. J Hu, Y Liu, J Zhao, Q Jin. arXiv preprint arXiv:2107.06779, 2021. Cited by 199.
Multimodal multi-task learning for dimensional and continuous emotion recognition. S Chen, Q Jin, J Zhao, S Wang. Proceedings of the 7th Annual Workshop on Audio/Visual Emotion Challenge, 19-26, 2017. Cited by 169.
WenLan: Bridging vision and language by large-scale multi-modal pre-training. Y Huo, M Zhang, G Liu, H Lu, Y Gao, G Yang, J Wen, H Zhang, B Xu, ... arXiv preprint arXiv:2103.06561, 2021. Cited by 140.
Missing modality imagination network for emotion recognition with uncertain missing modalities. J Zhao, R Li, Q Jin. Proceedings of the 59th Annual Meeting of the Association for Computational …, 2021. Cited by 135.
Multi-modal multi-cultural dimensional continues emotion recognition in dyadic interactions. J Zhao, R Li, S Chen, Q Jin. Proceedings of the 2018 on Audio/Visual Emotion Challenge and Workshop, 65-72, 2018. Cited by 57.
MER 2023: Multi-label learning, modality robustness, and semi-supervised learning. Z Lian, H Sun, L Sun, K Chen, M Xu, K Wang, K Xu, Y He, Y Li, J Zhao, ... Proceedings of the 31st ACM International Conference on Multimedia, 9610-9614, 2023. Cited by 54.
M3ED: Multi-modal multi-scene multi-label emotional dialogue database. J Zhao, T Zhang, J Hu, Y Liu, Q Jin, X Wang, H Li. arXiv preprint arXiv:2205.10237, 2022. Cited by 48.
MEmoBERT: Pre-training model with prompt-based learning for multimodal emotion recognition. J Zhao, R Li, Q Jin, X Wang, H Li. ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and …, 2022. Cited by 39.
Exploiting modality-invariant feature for robust multimodal emotion recognition with missing modalities. H Zuo, R Liu, J Zhao, G Gao, H Li. ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and …, 2023. Cited by 36.
Adversarial domain adaption for multi-cultural dimensional emotion recognition in dyadic interactions. J Zhao, R Li, J Liang, S Chen, Q Jin. Proceedings of the 9th International on Audio/Visual Emotion Challenge and …, 2019. Cited by 29.
Multi-modal emotion estimation for in-the-wild videos. L Meng, Y Liu, X Liu, Z Huang, Y Cheng, M Wang, C Liu, Q Jin. arXiv preprint arXiv:2203.13032, 2022. Cited by 25.
Cross-culture multimodal emotion recognition with adversarial learning. J Liang, S Chen, J Zhao, Q Jin, H Liu, L Lu. ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and …, 2019. Cited by 22.
MER 2024: Semi-supervised learning, noise robustness, and open-vocabulary multimodal emotion recognition. Z Lian, H Sun, L Sun, Z Wen, S Zhang, S Chen, H Gu, J Zhao, Z Ma, ... Proceedings of the 2nd International Workshop on Multimodal and Responsible …, 2024. Cited by 19.
Emotion recognition with multimodal features and temporal models. S Wang, W Wang, J Zhao, S Chen, Q Jin, S Zhang, Y Qin. Proceedings of the 19th ACM International Conference on Multimodal …, 2017. Cited by 18.
DialogueEIN: Emotion interaction network for dialogue affective analysis. Y Liu, J Zhao, J Hu, R Li, Q Jin. Proceedings of the 29th International Conference on Computational …, 2022. Cited by 16.
Multi-modal fusion for video sentiment analysis. R Li, J Zhao, J Hu, S Guo, Q Jin. Proceedings of the 1st International on Multimodal Sentiment Analysis in …, 2020. Cited by 16.
Multi-task learning framework for emotion recognition in-the-wild. T Zhang, C Liu, X Liu, Y Liu, L Meng, L Sun, W Jiang, F Zhang, J Zhao, ... European Conference on Computer Vision, 143-156, 2022. Cited by 15.
Video interestingness prediction based on ranking model. S Wang, S Chen, J Zhao, Q Jin. Proceedings of the Joint Workshop of the 4th Workshop on Affective Social …, 2018. Cited by 15.
Speech emotion recognition via multi-level cross-modal distillation. R Li, J Zhao, Q Jin. Interspeech, 4488-4492, 2021. Cited by 10.
Speech emotion recognition in dyadic dialogues with attentive interaction modeling. J Zhao, S Chen, J Liang, Q Jin. Interspeech, 1671-1675, 2019. Cited by 10.