Gal Kaplun
PhD Student at Harvard University
Verified email at g.harvard.edu - Homepage
Title · Cited by · Year
Deep double descent: Where bigger models and more data hurt
P Nakkiran, G Kaplun, Y Bansal, T Yang, B Barak, I Sutskever
Journal of Statistical Mechanics: Theory and Experiment 2021 (12), 124003, 2021
Cited by 446 · 2021
Sgd on neural networks learns functions of increasing complexity
P Nakkiran, G Kaplun, D Kalimeris, T Yang, BL Edelman, F Zhang, ...
arXiv preprint arXiv:1905.11604, 2019
Cited by 121* · 2019
Robust Influence Maximization for Hyperparametric Models
D Kalimeris, G Kaplun, Y Singer
ICML 2019, 2019
Cited by 15 · 2019
For self-supervised learning, rationality implies generalization, provably
Y Bansal, G Kaplun, B Barak
arXiv preprint arXiv:2010.08508, 2020
Cited by 11 · 2020
Robust neural networks are more interpretable for genomics
PK Koo, S Qian, G Kaplun, V Volf, D Kalimeris
bioRxiv, 657437, 2019
Cited by 10 · 2019
For manifold learning, deep neural networks can be locality sensitive hash functions
N Dikkala, G Kaplun, R Panigrahy
arXiv preprint arXiv:2103.06875, 2021
Cited by 3 · 2021
Deconstructing Distributions: A Pointwise Framework of Learning
G Kaplun, N Ghosh, S Garg, B Barak, P Nakkiran
arXiv preprint arXiv:2202.09931, 2022
Cited by 1 · 2022
Robustness from Simple Classifiers
S Qian, D Kalimeris, G Kaplun, Y Singer
arXiv preprint arXiv:2002.09422, 2020
Cited by 1 · 2020
Knowledge Distillation: Bad Models Can Be Good Role Models
G Kaplun, E Malach, P Nakkiran, S Shalev-Shwartz
arXiv preprint arXiv:2203.14649, 2022
Cited by — · 2022