Graham Neubig
Associate Professor of Computer Science, Carnegie Mellon University
Verified email at cs.cmu.edu - Homepage
Title
Cited by
Year
A Syntactic Neural Model for General-Purpose Code Generation
P Yin, G Neubig
ACL 2017, 2017
376 · 2017
Dynet: The dynamic neural network toolkit
G Neubig, C Dyer, Y Goldberg, A Matthews, W Ammar, A Anastasopoulos, ...
arXiv preprint arXiv:1701.03980, 2017
374* · 2017
Pointwise prediction for robust, adaptable Japanese morphological analysis
G Neubig, Y Nakata, S Mori
ACL 2011, 529-533, 2011
285 · 2011
Are Sixteen Heads Really Better than One?
P Michel, O Levy, G Neubig
NeurIPS 2019, 2019
269 · 2019
XTREME: A massively multilingual multi-task benchmark for evaluating cross-lingual generalization
J Hu, S Ruder, A Siddhant, G Neubig, O Firat, M Johnson
ICML 2020, 2020
205 · 2020
Learning to generate pseudo-code from source code using statistical machine translation (t)
Y Oda, H Fudaba, G Neubig, H Hata, S Sakti, T Toda, S Nakamura
ASE 2015, 574-584, 2015
198 · 2015
When and Why are Pre-trained Word Embeddings Useful for Neural Machine Translation?
Y Qi, DS Sachan, M Felix, SJ Padmanabhan, G Neubig
NAACL 2018, 2018
197 · 2018
Lagging Inference Networks and Posterior Collapse in Variational Autoencoders
J He, D Spokoyny, G Neubig, T Berg-Kirkpatrick
ICLR 2019, 2019
157 · 2019
Incorporating discrete translation lexicons into neural machine translation
P Arthur, G Neubig, S Nakamura
EMNLP 2016, 2016
157 · 2016
Controllable Invariance through Adversarial Feature Learning
Q Xie, Z Dai, Y Du, E Hovy, G Neubig
NIPS 2017, 2017
153 · 2017
Neural machine translation and sequence-to-sequence models: A tutorial
G Neubig
arXiv preprint arXiv:1703.01619, 2017
153 · 2017
Controlling output length in neural encoder-decoders
Y Kikuchi, G Neubig, R Sasano, H Takamura, M Okumura
EMNLP 2016, 2016
147 · 2016
Stress Test Evaluation for Natural Language Inference
A Naik, A Ravichander, N Sadeh, C Rose, G Neubig
COLING 2018, 2018
146 · 2018
Morphological inflection generation using character sequence to sequence learning
M Faruqui, Y Tsvetkov, G Neubig, C Dyer
NAACL 2016, 2016
129 · 2016
What Do Recurrent Neural Network Grammars Learn About Syntax?
A Kuncoro, M Ballesteros, L Kong, C Dyer, G Neubig, NA Smith
EACL 2017, 2017
127* · 2017
Stack-Pointer Networks for Dependency Parsing
X Ma, Z Hu, J Liu, N Peng, G Neubig, E Hovy
ACL 2018, 2018
123 · 2018
How can we know what language models know?
Z Jiang, FF Xu, J Araki, G Neubig
TACL 8, 423-438, 2020
115 · 2020
Adaptation data selection using neural language models: Experiments in machine translation
K Duh, G Neubig, K Sudoh, H Tsukada
ACL 2013 2, 678-683, 2013
113 · 2013
Learning to translate in real-time with neural machine translation
J Gu, G Neubig, K Cho, VOK Li
EACL 2017, 2016
112 · 2016
Overview of the 6th workshop on Asian translation
T Nakazawa, N Doi, S Higashiyama, C Ding, R Dabre, H Mino, I Goto, ...
WAT 2019, 1-35, 2019
108 · 2019
Articles 1–20