Mostafa Dehghani
Research Scientist, Google Brain
Verified email at google.com - Homepage
Title · Cited by · Year
An image is worth 16x16 words: Transformers for image recognition at scale
A Dosovitskiy, L Beyer, A Kolesnikov, D Weissenborn, X Zhai, ...
arXiv preprint arXiv:2010.11929, 2020
Cited by 16348 · 2020
ViViT: A video vision transformer
A Arnab*, M Dehghani*, G Heigold, C Sun, M Lučić, C Schmid
arXiv preprint arXiv:2103.15691, 2021
Cited by 924 · 2021
Universal Transformers
M Dehghani, S Gouws, O Vinyals, J Uszkoreit, Ł Kaiser
International Conference on Learning Representations (ICLR), 2019
Cited by 690 · 2019
Efficient transformers: A survey
Y Tay, M Dehghani, D Bahri, D Metzler
ACM Computing Surveys 55 (6), 1-28, 2022
Cited by 637 · 2022
Neural Ranking Models with Weak Supervision
M Dehghani, H Zamani, A Severyn, J Kamps, WB Croft
The 40th International ACM SIGIR Conference on Research and Development in …, 2017
Cited by 363 · 2017
Long Range Arena: A Benchmark for Efficient Transformers
Y Tay*, M Dehghani*, S Abnar, Y Shen, D Bahri, P Pham, J Rao, L Yang, ...
arXiv preprint arXiv:2011.04006, 2020
Cited by 277 · 2020
Scaling instruction-finetuned language models
HW Chung, L Hou, S Longpre, B Zoph, Y Tay, W Fedus, E Li, X Wang, ...
arXiv preprint arXiv:2210.11416, 2022
Cited by 213 · 2022
MetNet: A neural weather model for precipitation forecasting
CK Sønderby, L Espeholt, J Heek, M Dehghani, A Oliver, T Salimans, ...
arXiv preprint arXiv:2003.12140, 2020
Cited by 197* · 2020
From neural re-ranking to neural ranking: Learning a sparse representation for inverted indexing
H Zamani, M Dehghani, WB Croft, E Learned-Miller, J Kamps
Proceedings of the 27th ACM international conference on information and …, 2018
Cited by 144 · 2018
Parameter-efficient multi-task fine-tuning for transformers via shared hypernetworks
RK Mahabadi, S Ruder, M Dehghani, J Henderson
arXiv preprint arXiv:2106.04489, 2021
Cited by 111 · 2021
Learning to Attend, Copy, and Generate for Session-Based Query Suggestion
M Dehghani, S Rothe, E Alfonseca, P Fleury
International Conference on Information and Knowledge Management (CIKM'17), 2017
Cited by 102 · 2017
Unifying language learning paradigms
Y Tay, M Dehghani, VQ Tran, X Garcia, D Bahri, T Schuster, HS Zheng, ...
arXiv preprint arXiv:2205.05131, 2022
Cited by 69* · 2022
TokenLearner: Adaptive space-time tokenization for videos
M Ryoo, AJ Piergiovanni, A Arnab, M Dehghani, A Angelova
Advances in Neural Information Processing Systems 34, 12786-12797, 2021
Cited by 68 · 2021
TokenLearner: What can 8 learned tokens do for images and videos?
MS Ryoo, AJ Piergiovanni, A Arnab, M Dehghani, A Angelova
arXiv preprint arXiv:2106.11297, 2021
Cited by 66 · 2021
Simple open-vocabulary object detection with vision transformers
M Minderer, A Gritsenko, A Stone, M Neumann, D Weissenborn, ...
arXiv preprint arXiv:2205.06230, 2022
Cited by 64 · 2022
Fidelity-Weighted Learning
M Dehghani, A Mehrjou, S Gouws, J Kamps, B Schölkopf
International Conference on Learning Representations (ICLR), 2018
Cited by 63 · 2018
Exploring the limits of large scale pre-training
S Abnar, M Dehghani, B Neyshabur, H Sedghi
arXiv preprint arXiv:2110.02095, 2021
Cited by 62 · 2021
Are pre-trained convolutions better than pre-trained transformers?
Y Tay, M Dehghani, J Gupta, D Bahri, V Aribandi, Z Qin, D Metzler
arXiv preprint arXiv:2105.03322, 2021
Cited by 55 · 2021
Words are Malleable: Computing Semantic Shifts in Political and Media Discourse
H Azarbonyad, M Dehghani, K Beelen, A Arkut, M Marx, J Kamps
International Conference on Information and Knowledge Management (CIKM'17), 2017
Cited by 55 · 2017
Transformer memory as a differentiable search index
Y Tay, V Tran, M Dehghani, J Ni, D Bahri, H Mehta, Z Qin, K Hui, Z Zhao, ...
Advances in Neural Information Processing Systems 35, 21831-21843, 2022
Cited by 54 · 2022
Articles 1–20