Nadezhda Chirkova
Naver Labs Europe
Verified email at naverlabs.com – Homepage
Title
Cited by
Year
Empirical Study of Transformers for Source Code
N Chirkova, S Troshin
ESEC/FSE 2021: ACM Joint European Software Engineering Conference and …, 2020
60 · 2020
On Power Laws in Deep Ensembles
E Lobacheva, N Chirkova, M Kodryan, D Vetrov
NeurIPS 2020: Advances in Neural Information Processing Systems, 2020 …, 2020
45 · 2020
Additive regularization for hierarchical multimodal topic modeling
NA Chirkova, KV Vorontsov
Journal Machine Learning and Data Analysis 2 (2), 187-200, 2016
36 · 2016
Probing pretrained models of source code
S Troshin, N Chirkova
BlackboxNLP Workshop at EMNLP 2022, 2022
35 · 2022
On the Periodic Behavior of Neural Network Training with Batch Normalization and Weight Decay
E Lobacheva, M Kodryan, N Chirkova, A Malinin, DP Vetrov
NeurIPS 2021: Advances in Neural Information Processing Systems 34, 2021
24 · 2021
Bayesian sparsification of recurrent neural networks
E Lobacheva, N Chirkova, D Vetrov
ICML Workshop on Learning to Generate Natural Language, 2017
19 · 2017
Bayesian compression for natural language processing
N Chirkova, E Lobacheva, D Vetrov
EMNLP 2018: 2018 Conference on Empirical Methods in Natural Language Processing, 2018
16 · 2018
A Simple Approach for Handling Out-of-Vocabulary Identifiers in Deep Learning for Source Code
N Chirkova, S Troshin
NAACL 2021: Annual Conference of the North American Chapter of the …, 2020
12 · 2020
Parameter-Efficient Finetuning of Transformers for Source Code
S Ayupov, N Chirkova
Workshop on Efficient Natural Language Processing at NeurIPS 2022, 2022
11 · 2022
Deep ensembles on a fixed memory budget: One wide network or several thinner ones?
N Chirkova, E Lobacheva, D Vetrov
arXiv preprint arXiv:2005.07292, 2020
9 · 2020
Structured Sparsification of Gated Recurrent Neural Networks
E Lobacheva, N Chirkova, A Markovich, D Vetrov
NeurIPS Workshop on Context and Compositionality in Biological and …, 2019
8 · 2019
On the Embeddings of Variables in Recurrent Neural Networks for Source Code
N Chirkova
NAACL 2021: 2021 Conference of the North American Chapter of the Association …, 2021
6 · 2021
CodeBPE: Investigating Subtokenization Options for Large Language Model Pretraining on Source Code
N Chirkova, S Troshin
ICLR 2023, 2023
4 · 2023
Bayesian Sparsification of Gated Recurrent Neural Networks
E Lobacheva, N Chirkova, D Vetrov
NeurIPS Workshop on Compact Deep Neural Network Representation with …, 2018
4 · 2018
Should you marginalize over possible tokenizations?
N Chirkova, G Kruszewski, J Rozen, M Dymetman
ACL 2023: 61st Annual Meeting of the Association for Computational Linguistics, 2023
3 · 2023
Zero-shot cross-lingual transfer in instruction tuning of large language models
N Chirkova, V Nikoulina
arXiv preprint arXiv:2402.14778, 2024
2 · 2024
Key ingredients for effective zero-shot cross-lingual knowledge transfer in generative tasks
N Chirkova, V Nikoulina
NAACL 2024, 2024
2 · 2024
On the Memorization Properties of Contrastive Learning
I Sadrtdinov, N Chirkova, E Lobacheva
ICML Workshop on Overparameterization: Pitfalls & Opportunities, 2021
1 · 2021
BERGEN: A Benchmarking Library for Retrieval-Augmented Generation
D Rau, H Déjean, N Chirkova, T Formal, S Wang, V Nikoulina, ...
arXiv preprint arXiv:2407.01102, 2024
2024
Retrieval-augmented generation in multilingual settings
N Chirkova, D Rau, H Déjean, T Formal, S Clinchant, V Nikoulina
ACL 2024 Workshop: Towards Knowledgeable Language Models, 2024
2024
Articles 1–20