Wenqi Shao
Researcher at Shanghai AI Laboratory
Verified email at pjlab.org.cn - Homepage
Title · Cited by · Year
Towards Understanding Regularization in Batch Normalization
P Luo*, X Wang*, W Shao*, Z Peng (*Equal Contribution)
ICLR 2019
Cited by 267 · 2018
SPHINX: The joint mixing of weights, tasks, and visual embeddings for multi-modal large language models
Z Lin, C Liu, R Zhang, P Gao, L Qiu, H Xiao, H Qiu, C Lin, W Shao, ...
arXiv preprint arXiv:2311.07575, 2023
Cited by 186 · 2023
GPT4RoI: Instruction tuning large language model on region-of-interest
S Zhang, P Sun, S Chen, M Xiao, W Shao, W Zhang, Y Liu, K Chen, P Luo
arXiv preprint arXiv:2307.03601, 2023
Cited by 179 · 2023
LVLM-eHub: A comprehensive evaluation benchmark for large vision-language models
P Xu, W Shao, K Zhang, P Gao, S Liu, M Lei, F Meng, S Huang, Y Qiao, ...
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024
Cited by 166 · 2024
OmniQuant: Omnidirectionally calibrated quantization for large language models
W Shao, M Chen, Z Zhang, P Xu, L Zhao, Z Li, K Zhang, P Gao, Y Qiao, ...
arXiv preprint arXiv:2308.13137, 2023
Cited by 141 · 2023
What makes for end-to-end object detection?
P Sun, Y Jiang, E Xie, W Shao, Z Yuan, C Wang, P Luo
International Conference on Machine Learning, 9934-9944, 2021
Cited by 104 · 2021
ImageBind-LLM: Multi-modality instruction tuning
J Han, R Zhang, W Shao, P Gao, P Xu, H Xiao, K Zhang, C Liu, S Wen, ...
arXiv preprint arXiv:2309.03905, 2023
Cited by 97 · 2023
SPHINX-X: Scaling data and parameters for a family of multi-modal large language models
D Liu, R Zhang, L Qiu, S Huang, W Lin, S Zhao, S Geng, Z Lin, P Jin, ...
arXiv preprint arXiv:2402.05935, 2024
Cited by 82 · 2024
SSN: Learning Sparse Switchable Normalization via SparsestMax
W Shao*, T Meng*, J Li, R Zhang, Y Li, X Wang, ...
International Journal of Computer Vision 128, 2107–2125, 2019
Cited by 72 · 2019
Rethinking the pruning criteria for convolutional neural network
Z Huang, W Shao, X Wang, L Lin, P Luo
Advances in Neural Information Processing Systems 34, 16305-16318, 2021
Cited by 57 · 2021
Differentiable Learning-to-Group Channels via Groupable Convolutional Neural Networks
Z Zhang, J Li, W Shao, Z Peng, R Zhang, X Wang, ...
ICCV 2019
Cited by 46 · 2019
MMT-Bench: A comprehensive multimodal benchmark for evaluating large vision-language models towards multitask AGI
K Ying, F Meng, J Wang, Z Li, H Lin, Y Yang, H Zhang, W Zhang, Y Lin, ...
arXiv preprint arXiv:2404.16006, 2024
Cited by 44 · 2024
Differentiable Dynamic Quantization with Mixed Precision and Adaptive Resolution
Z Zhang, W Shao, J Gu, X Wang, P Luo
ICML 2021
Cited by 36 · 2021
Tiny LVLM-eHub: Early multimodal experiments with Bard
W Shao, Y Hu, P Gao, M Lei, K Zhang, F Meng, P Xu, S Huang, H Li, ...
arXiv preprint arXiv:2308.03729, 2023
Cited by 33 · 2023
DiffRate: Differentiable Compression Rate for Efficient Vision Transformers
M Chen, W Shao, P Xu, M Lin, K Zhang, F Chao, R Ji, Y Qiao, P Luo
ICCV 2023, arXiv preprint arXiv:2305.17997
Cited by 33 · 2023
Beyond one-to-one: Rethinking the referring image segmentation
Y Hu, Q Wang, W Shao, E Xie, Z Li, J Han, P Luo
Proceedings of the IEEE/CVF International Conference on Computer Vision …, 2023
Cited by 31 · 2023
Not All Models Are Equal: Predicting Model Transferability in a Self-challenging Fisher Space
W Shao, X Zhao, Y Ge, Z Zhang, L Yang, X Wang, Y Shan, P Luo
ECCV 2022, arXiv preprint arXiv:2207.03036
Cited by 30 · 2022
ChartAssistant: A universal chart multimodal language model via chart-to-table pre-training and multitask instruction tuning
F Meng, W Shao, Q Lu, P Gao, K Zhang, Y Qiao, P Luo
arXiv preprint arXiv:2401.02384, 2024
Cited by 29 · 2024
Tree-Planner: Efficient close-loop task planning with large language models
M Hu, Y Mu, X Yu, M Ding, S Wu, W Shao, Q Chen, B Wang, Y Qiao, ...
arXiv preprint arXiv:2310.08582, 2023
Cited by 28 · 2023
Differentiable Dynamic Normalization for Learning Deep Representation
P Luo, Z Peng, W Shao, R Zhang, J Ren, L Wu
ICML 2019, http://proceedings.mlr.press/v97/luo19a.html
Cited by 28 · 2019
Articles 1–20