Samuel R. Bowman
NYU and Anthropic
Verified email at nyu.edu · Homepage
Title
Cited by
Year
GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
A Wang, A Singh, J Michael, F Hill, O Levy, SR Bowman
Proceedings of ICLR, 2019
6259 · 2019
A large annotated corpus for learning natural language inference
SR Bowman, G Angeli, C Potts, CD Manning
Proceedings of EMNLP, 2015
4441 · 2015
A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference
A Williams, N Nangia, SR Bowman
Proceedings of NAACL-HLT, 2018
4060 · 2018
Generating sentences from a continuous space
SR Bowman, L Vilnis, O Vinyals, AM Dai, R Jozefowicz, S Bengio
Proceedings of CoNLL, 2016
2647 · 2016
SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems
A Wang, Y Pruksachatkun, N Nangia, A Singh, J Michael, F Hill, O Levy, ...
Proceedings of NeurIPS, 2019
1877 · 2019
XNLI: Evaluating Cross-lingual Sentence Representations
A Conneau, G Lample, R Rinott, A Williams, SR Bowman, H Schwenk, ...
Proceedings of EMNLP, 2018
1159 · 2018
Neural network acceptability judgments
A Warstadt, A Singh, SR Bowman
TACL 7, 625-641, 2019
1148 · 2019
Annotation artifacts in natural language inference data
S Gururangan, S Swayamdipta, O Levy, R Schwartz, SR Bowman, ...
Proceedings of NAACL, 2018
1122 · 2018
What do you learn from context? Probing for sentence structure in contextualized word representations
I Tenney, P Xia, B Chen, A Wang, A Poliak, RT McCoy, N Kim, ...
Proceedings of ICLR, 2019
807 · 2019
Beyond the imitation game: Quantifying and extrapolating the capabilities of language models
A Srivastava, A Rastogi, A Rao, AAM Shoeb, A Abid, A Fisch, AR Brown, ...
arXiv preprint arXiv:2206.04615, 2022
693 · 2022
Constitutional AI: Harmlessness from AI feedback
Y Bai, S Kadavath, S Kundu, A Askell, J Kernion, A Jones, A Chen, ...
arXiv preprint arXiv:2212.08073, 2022
539 · 2022
On Measuring Social Biases in Sentence Encoders
C May, A Wang, S Bordia, SR Bowman, R Rudinger
Proceedings of NAACL-HLT, 2019
519 · 2019
Sentence encoders on STILTs: Supplementary training on intermediate labeled-data tasks
J Phang, T Févry, SR Bowman
arXiv preprint arXiv:1811.01088, 2018
434 · 2018
A Fast Unified Model for Parsing and Sentence Understanding
SR Bowman, J Gauthier, A Rastogi, R Gupta, CD Manning, C Potts
Proceedings of ACL, 2016
408 · 2016
CrowS-Pairs: A Challenge Dataset for Measuring Social Biases in Masked Language Models
N Nangia, C Vania, R Bhalerao, SR Bowman
Proceedings of EMNLP, 2020
404 · 2020
Universal Dependencies 2.2
J Nivre, M Abrams, Ž Agić, L Ahrenberg, L Antonsen, MJ Aranzabe, ...
339* · 2018
BLiMP: A benchmark of linguistic minimal pairs for English
A Warstadt, A Parrish, H Liu, A Mohananey, W Peng, SF Wang, ...
TACL, 2020
320 · 2020
A Gold Standard Dependency Corpus for English
N Silveira, T Dozat, MC de Marneffe, SR Bowman, M Connor, J Bauer, ...
Proceedings of LREC, 2014
318 · 2014
Identifying and Reducing Gender Bias in Word-Level Language Models
S Bordia, SR Bowman
Proceedings of the NAACL-HLT Student Research Workshop, 2019
286 · 2019
Intermediate-Task Transfer Learning with Pretrained Models for Natural Language Understanding: When and Why Does It Work?
Y Pruksachatkun, J Phang, H Liu, PM Htut, X Zhang, RY Pang, C Vania, ...
Proceedings of ACL, 2020
266 · 2020
Articles 1–20