Mostafa Dehghani
Research Scientist, Google DeepMind
Verified email at google.com - Homepage
Title
Cited by
Year
An image is worth 16x16 words: Transformers for image recognition at scale
A Dosovitskiy, L Beyer, A Kolesnikov, D Weissenborn, X Zhai, ...
arXiv preprint arXiv:2010.11929, 2020
39651 · 2020
Scaling instruction-finetuned language models
HW Chung, L Hou, S Longpre, B Zoph, Y Tay, W Fedus, Y Li, X Wang, ...
Journal of Machine Learning Research 25 (70), 1-53, 2024
2119 · 2024
ViViT: A video vision transformer
A Arnab*, M Dehghani*, G Heigold, C Sun, M Lučić, C Schmid
arXiv preprint arXiv:2103.15691, 2021
2040 · 2021
Efficient transformers: A survey
Y Tay, M Dehghani, D Bahri, D Metzler
ACM Computing Surveys 55 (6), 1–28, 2022
1165* · 2022
PaLM 2 technical report
R Anil, AM Dai, O Firat, M Johnson, D Lepikhin, A Passos, S Shakeri, ...
arXiv preprint arXiv:2305.10403, 2023
1045 · 2023
Gemini: a family of highly capable multimodal models
G Team, R Anil, S Borgeaud, Y Wu, JB Alayrac, J Yu, R Soricut, ...
arXiv preprint arXiv:2312.11805, 2023
991 · 2023
Universal Transformers
M Dehghani, S Gouws, O Vinyals, J Uszkoreit, Ł Kaiser
International Conference on Learning Representations (ICLR), 2019
934 · 2019
Long Range Arena: A Benchmark for Efficient Transformers
Y Tay*, M Dehghani*, S Abnar, Y Shen, D Bahri, P Pham, J Rao, L Yang, ...
arXiv preprint arXiv:2011.04006, 2020
562 · 2020
Neural Ranking Models with Weak Supervision
M Dehghani, H Zamani, A Severyn, J Kamps, WB Croft
The 40th International ACM SIGIR Conference on Research and Development in …, 2017
411 · 2017
Scaling vision transformers to 22 billion parameters
M Dehghani, J Djolonga, B Mustafa, P Padlewski, J Heek, J Gilmer, ...
International Conference on Machine Learning, 7480-7512, 2023
336 · 2023
Simple open-vocabulary object detection
M Minderer, A Gritsenko, A Stone, M Neumann, D Weissenborn, ...
European Conference on Computer Vision, 728-755, 2022
327 · 2022
MetNet: A neural weather model for precipitation forecasting
CK Sønderby, L Espeholt, J Heek, M Dehghani, A Oliver, T Salimans, ...
arXiv preprint arXiv:2003.12140, 2020
303 · 2020
Parameter-efficient multi-task fine-tuning for transformers via shared hypernetworks
RK Mahabadi, S Ruder, M Dehghani, J Henderson
arXiv preprint arXiv:2106.04489, 2021
238 · 2021
UL2: Unifying language learning paradigms
Y Tay, M Dehghani, VQ Tran, X Garcia, J Wei, X Wang, HW Chung, ...
arXiv preprint arXiv:2205.05131, 2022
206 · 2022
From neural re-ranking to neural ranking: Learning a sparse representation for inverted indexing
H Zamani, M Dehghani, WB Croft, E Learned-Miller, J Kamps
Proceedings of the 27th ACM international conference on information and …, 2018
184 · 2018
Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context
M Reid, N Savinov, D Teplyashin, D Lepikhin, T Lillicrap, J Alayrac, ...
arXiv preprint arXiv:2403.05530, 2024
182 · 2024
Transformer memory as a differentiable search index
Y Tay, V Tran, M Dehghani, J Ni, D Bahri, H Mehta, Z Qin, K Hui, Z Zhao, ...
Advances in Neural Information Processing Systems 35, 21831-21843, 2022
182 · 2022
TokenLearner: Adaptive space-time tokenization for videos
M Ryoo, AJ Piergiovanni, A Arnab, M Dehghani, A Angelova
Advances in neural information processing systems 34, 12786-12797, 2021
132 · 2021
Unifying language learning paradigms
Y Tay, M Dehghani, VQ Tran, X Garcia, D Bahri, T Schuster, HS Zheng, ...
arXiv preprint arXiv:2205.05131 10, 2022
129 · 2022
Learning to Attend, Copy, and Generate for Session-Based Query Suggestion
M Dehghani, S Rothe, E Alfonseca, P Fleury
International Conference on Information and Knowledge Management (CIKM'17), 2017
123 · 2017
Articles 1–20