Nadezhda Chirkova
Naver Labs Europe
Verified email at naverlabs.com - Homepage
Title
Cited by
Year
Empirical Study of Transformers for Source Code
N Chirkova, S Troshin
ESEC/FSE 2021: ACM Joint European Software Engineering Conference and …, 2020
59 · 2020
On Power Laws in Deep Ensembles
E Lobacheva, N Chirkova, M Kodryan, D Vetrov
NeurIPS 2020: Advances in Neural Information Processing Systems, 2020
45 · 2020
Additive regularization for hierarchical multimodal topic modeling
NA Chirkova, KV Vorontsov
Journal of Machine Learning and Data Analysis 2 (2), 187–200, 2016
35 · 2016
Probing pretrained models of source code
S Troshin, N Chirkova
BlackboxNLP Workshop at EMNLP 2022, 2022
27 · 2022
On the Periodic Behavior of Neural Network Training with Batch Normalization and Weight Decay
E Lobacheva, M Kodryan, N Chirkova, A Malinin, DP Vetrov
NeurIPS 2021: Advances in Neural Information Processing Systems 34, 2021
23 · 2021
Bayesian sparsification of recurrent neural networks
E Lobacheva, N Chirkova, D Vetrov
ICML Workshop on Learning to Generate Natural Language, 2017
19 · 2017
Bayesian compression for natural language processing
N Chirkova, E Lobacheva, D Vetrov
EMNLP 2018: 2018 Conference on Empirical Methods in Natural Language Processing, 2018
16 · 2018
A Simple Approach for Handling Out-of-Vocabulary Identifiers in Deep Learning for Source Code
N Chirkova, S Troshin
NAACL 2021: Annual Conference of the North American Chapter of the …, 2020
12 · 2020
Parameter-Efficient Finetuning of Transformers for Source Code
S Ayupov, N Chirkova
Workshop on Efficient Natural Language Processing at NeurIPS 2022, 2022
8 · 2022
Deep ensembles on a fixed memory budget: One wide network or several thinner ones?
N Chirkova, E Lobacheva, D Vetrov
arXiv preprint arXiv:2005.07292, 2020
8 · 2020
Structured Sparsification of Gated Recurrent Neural Networks
E Lobacheva, N Chirkova, A Markovich, D Vetrov
NeurIPS Workshop on Context and Compositionality in Biological and …, 2019
8 · 2019
CodeBPE: Investigating Subtokenization Options for Large Language Model Pretraining on Source Code
N Chirkova, S Troshin
ICLR 2023, 2023
6 · 2023
On the Embeddings of Variables in Recurrent Neural Networks for Source Code
N Chirkova
NAACL 2021: 2021 Conference of the North American Chapter of the Association …, 2021
6 · 2021
Bayesian Sparsification of Gated Recurrent Neural Networks
E Lobacheva, N Chirkova, D Vetrov
NeurIPS Workshop on Compact Deep Neural Network Representation with …, 2018
4 · 2018
Should you marginalize over possible tokenizations?
N Chirkova, G Kruszewski, J Rozen, M Dymetman
ACL 2023: 61st Annual Meeting of the Association for Computational Linguistics, 2023
2 · 2023
On the Memorization Properties of Contrastive Learning
I Sadrtdinov, N Chirkova, E Lobacheva
ICML Workshop on Overparameterization: Pitfalls & Opportunities, 2021
1 · 2021
Zero-shot cross-lingual transfer in instruction tuning of large language models
N Chirkova, V Nikoulina
arXiv preprint arXiv:2402.14778, 2024
2024
Key ingredients for effective zero-shot cross-lingual knowledge transfer in generative tasks
N Chirkova, V Nikoulina
NAACL 2024, 2024
2024
Empirical study of pretrained multilingual language models for zero-shot cross-lingual generation
N Chirkova, S Liang, V Nikoulina
arXiv preprint arXiv:2310.09917, 2023
2023
Electronic apparatus for compressing recurrent neural network and method thereof
EM Lobacheva, NA Chirkova, DP Vetrov
US Patent 11,568,237, 2023
2023
Articles 1–20