Ta-Chung Chi
Verified email at andrew.cmu.edu - Homepage
Title · Cited by · Year
Just ask: An interactive learning framework for vision and language navigation
TC Chi, M Shen, M Eric, S Kim, D Hakkani-Tur
Proceedings of the AAAI conference on artificial intelligence 34 (03), 2459-2466, 2020
Cited by 71 · 2020
KERPLE: Kernelized Relative Positional Embedding for Length Extrapolation
TC Chi, TH Fan, PJ Ramadge, AI Rudnicky
NeurIPS, 2022
Cited by 42 · 2022
Dynamic time-aware attention to speaker roles and contexts for spoken language understanding
PC Chen, TC Chi, SY Su, YN Chen
2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), 2017
Cited by 41 · 2017
Dissecting Transformer Length Extrapolation via the Lens of Receptive Field Analysis
TC Chi, TH Fan, AI Rudnicky, PJ Ramadge
ACL, 2023
Cited by 33* · 2023
Speaker role contextual modeling for language understanding and dialogue policy learning
TC Chi, PC Chen, SY Su, YN Chen
IJCNLP, 2017
Cited by 33 · 2017
xSense: Learning sense-separated sparse representations and textual definitions for explainable word sense networks
TY Chang, TC Chi, SC Tsai, YN Chen
arXiv preprint arXiv:1809.03348, 2018
Cited by 25 · 2018
Structured dialogue discourse parsing
TC Chi, AI Rudnicky
SIGDIAL, 2023
Cited by 18 · 2023
PESCO: Prompt-enhanced Self Contrastive Learning for Zero-shot Text Classification
YS Wang, TC Chi, R Zhang, Y Yang
ACL, 2023
Cited by 12 · 2023
CLUSE: Cross-lingual unsupervised sense embeddings
TC Chi, YN Chen
EMNLP, 2018
Cited by 10 · 2018
Transformer Working Memory Enables Regular Language Reasoning and Natural Language Length Extrapolation
TC Chi, TH Fan, AI Rudnicky, PJ Ramadge
Findings of EMNLP, 2023
Cited by 7 · 2023
Training discrete deep generative models via gapped straight-through estimator
TH Fan, TC Chi, AI Rudnicky, PJ Ramadge
International Conference on Machine Learning, 6059-6073, 2022
Cited by 7 · 2022
Tartan: A two-tiered dialog framework for multi-domain social chitchat
F Chen, TC Chi, S Lyu, J Gong, T Parekh, R Joshi, A Kaushik, A Rudnicky
Alexa prize proceedings, 2020
Cited by 7 · 2020
Latent Positional Information is in the Self-Attention Variance of Transformer Language Models Without Positional Embeddings
TC Chi, TH Fan, LW Chen, AI Rudnicky, PJ Ramadge
ACL, 2023
Cited by 5 · 2023
BCWS: Bilingual contextual word similarity
TC Chi, CY Shih, YN Chen
arXiv preprint arXiv:1810.08951, 2018
Cited by 4 · 2018
Advancing Regular Language Reasoning in Linear Recurrent Neural Networks
TH Fan, TC Chi, AI Rudnicky
NAACL, 2023
Cited by 3 · 2023
Zero-Shot Dialogue Disentanglement by Self-Supervised Entangled Response Selection
TC Chi, AI Rudnicky
EMNLP, 2021
Cited by 3 · 2021
Attention Alignment and Flexible Positional Embeddings Improve Transformer Length Extrapolation
TC Chi, TH Fan, AI Rudnicky
Findings of NAACL, 2023
Cited by 2 · 2023
On Task-Adaptive Pretraining for Dialogue Response Selection
TH Lin, TC Chi, A Rumshisky
arXiv preprint arXiv:2210.04073, 2022
Cited by 1 · 2022
Are you doing what I say? On modalities alignment in ALFRED
TR Chiang, YT Yeh, TC Chi, YS Wang
arXiv preprint arXiv:2110.05665, 2021
Cited by 1 · 2021
Automatic Speech Verification Spoofing Detection
S Mo, H Wang, P Ren, TC Chi
arXiv preprint arXiv:2012.08095, 2020
Cited by 1 · 2020
Articles 1–20