Kwangjun Ahn
PhD Student, MIT
Verified email at mit.edu · Homepage

Title | Cited by | Year
Hypergraph spectral clustering in the weighted stochastic block model
K Ahn, K Lee, C Suh
IEEE Journal of Selected Topics in Signal Processing 12 (5), 959-974, 2018
Cited by 49 · 2018
From Nesterov's Estimate Sequence to Riemannian Acceleration
K Ahn, S Sra
Proceedings of Thirty Third Conference on Learning Theory (COLT), PMLR 125 …, 2020
Cited by 43 · 2020
Community recovery in hypergraphs
K Ahn, K Lee, C Suh
IEEE Transactions on Information Theory 65 (10), 6561-6579, 2019
Cited by 31 · 2019
SGD with shuffling: optimal rates without component convexity and large epoch requirements
K Ahn, C Yun, S Sra
Advances in Neural Information Processing Systems (NeurIPS) 33, 2020
Cited by 30* · 2020
Binary rating estimation with graph side information
K Ahn, K Lee, H Cha, C Suh
Advances in Neural Information Processing Systems (NeurIPS), 4272-4283, 2018
Cited by 24 · 2018
Optimal dimension dependence of the metropolis-adjusted langevin algorithm
S Chewi, C Lu, K Ahn, X Cheng, T Le Gouic, P Rigollet
Conference on Learning Theory (COLT), 1260-1300, 2021
Cited by 21 · 2021
Efficient constrained sampling via the mirror-Langevin algorithm
K Ahn, S Chewi
Advances in Neural Information Processing Systems (NeurIPS), 2021
Cited by 17 · 2021
Graph Matrices: Norm Bounds and Applications
K Ahn, D Medarametla, A Potechin
arXiv preprint 1604.03423, 2020
Cited by 17* · 2020
Understanding the unstable convergence of gradient descent
K Ahn, J Zhang, S Sra
Thirty-ninth International Conference on Machine Learning (ICML 2022) (arXiv …, 2022
Cited by 9 · 2022
Understanding Nesterov's Acceleration via Proximal Point Method
K Ahn, S Sra
Symposium on Simplicity in Algorithms (SOSA), 117-130, 2022
Cited by 8* · 2022
Riemannian Perspective on Matrix Factorization
K Ahn, F Suarez
arXiv preprint arXiv:2102.00937, 2021
Cited by 6 · 2021
Information-theoretic limits of subspace clustering
K Ahn, K Lee, C Suh
2017 IEEE International Symposium on Information Theory (ISIT), 2473-2477, 2017
Cited by 5 · 2017
Reproducibility in Optimization: Theoretical Framework and Limits
K Ahn, P Jain, Z Ji, S Kale, P Netrapalli, GI Shamir
NeurIPS 2022 (arXiv preprint arXiv:2202.04598), 2022
Cited by 2 · 2022
Computing the maximum matching width is NP-hard
K Ahn, J Jeong
arXiv preprint arXiv:1710.05117, 2017
Cited by 2 · 2017
Agnostic Learnability of Halfspaces via Logistic Loss
Z Ji, K Ahn, P Awasthi, S Kale, S Karp
Thirty-ninth International Conference on Machine Learning (ICML 2022) (arXiv …, 2022
Cited by 1 · 2022
A simpler strong refutation of random k-XOR
K Ahn
International Conference on Randomization and Computation (APPROX/RANDOM) 2020, 2020
Cited by 1 · 2020
One-Pass Learning via Bridging Orthogonal Gradient Descent and Recursive Least-Squares
Y Min, K Ahn, N Azizan
CDC 2022 (arXiv preprint arXiv:2207.13853), 2022
2022
Mirror Descent Maximizes Generalized Margin and Can Be Implemented Efficiently
H Sun, K Ahn, C Thrampoulidis, N Azizan
NeurIPS 2022 (arXiv preprint arXiv:2205.12808), 2022
2022
From Proximal Point Method to Accelerated Methods on Riemannian Manifolds
K Ahn
Massachusetts Institute of Technology (Master's Thesis), 2021
2021
Articles 1–19