Matthew Chang
Semantic visual navigation by watching YouTube videos
M Chang, A Gupta, S Gupta
Advances in Neural Information Processing Systems 33, 4283-4294, 2020
Cited by 72
GOAT: Go to any thing
M Chang, T Gervet, M Khanna, S Yenamandra, D Shah, SY Min, K Shah, ...
arXiv preprint arXiv:2311.06430, 2023
Cited by 18
Learning value functions from undirected state-only experience
M Chang, A Gupta, S Gupta
arXiv preprint arXiv:2204.12458, 2022
Cited by 6
Look ma, no hands! Agent-environment factorization of egocentric videos
M Chang, A Prakash, S Gupta
Advances in Neural Information Processing Systems 36, 2024
Cited by 4
One-shot visual imitation via attributed waypoints and demonstration augmentation
M Chang, S Gupta
2023 IEEE International Conference on Robotics and Automation (ICRA), 5055-5062, 2023
Cited by 1
Learning Hand-Held Object Reconstruction from In-The-Wild Videos
A Prakash, M Chang, M Jin, S Gupta
arXiv preprint arXiv:2305.03036, 2023
Cited by 1
GOAT-Bench: A Benchmark for Multi-Modal Lifelong Navigation
M Khanna, R Ramrakhya, G Chhablani, S Yenamandra, T Gervet, ...
arXiv preprint arXiv:2404.06609, 2024
Diffusion Meets DAgger: Supercharging Eye-in-hand Imitation Learning
X Zhang, M Chang, P Kumar, S Gupta
arXiv preprint arXiv:2402.17768, 2024
3D Hand Pose Estimation in Egocentric Images in the Wild
A Prakash, R Tu, M Chang, S Gupta
arXiv preprint arXiv:2312.06583, 2023
Hands Free: A wearable in-air gesture recognition system
M Chang
Massachusetts Institute of Technology, 2016
One-shot Visual Imitation via Attributed Waypoints and Demonstration Augmentation – Supplementary Material
M Chang, S Gupta
Semantic Visual Navigation by Watching YouTube Videos – Supplementary Materials
M Chang, A Gupta, S Gupta