Akhil Arunkumar
Title · Cited by · Year
MCM-GPU: Multi-Chip-Module GPUs for Continued Performance Scalability
A Arunkumar, E Bolotin, B Cho, U Milic, E Ebrahimi, O Villa, A Jaleel, ...
(ISCA) Proceedings of the 44th Annual International Symposium on Computer …, 2017
Cited by 228 · 2017
CAWA: Coordinated warp scheduling and cache prioritization for critical warp acceleration of GPGPU workloads
SY Lee, A Arunkumar, CJ Wu
ACM SIGARCH Computer Architecture News 43 (3S), 515-527, 2015
Cited by 113 · 2015
Beyond the socket: NUMA-aware GPUs
U Milic, O Villa, E Bolotin, A Arunkumar, E Ebrahimi, A Jaleel, A Ramirez, ...
Proceedings of the 50th Annual IEEE/ACM International Symposium on …, 2017
Cited by 86 · 2017
Understanding the future of energy efficiency in multi-module GPUs
A Arunkumar, E Bolotin, D Nellans, CJ Wu
2019 IEEE International Symposium on High Performance Computer Architecture …, 2019
Cited by 39 · 2019
Dora: Optimizing smartphone energy efficiency and web browser performance under interference
D Shingari, A Arunkumar, B Gaudette, S Vrudhula, CJ Wu
2018 IEEE International Symposium on Performance Analysis of Systems and …, 2018
Cited by 26 · 2018
Characterization and throttling-based mitigation of memory interference for heterogeneous smartphones
D Shingari, A Arunkumar, CJ Wu
2015 IEEE International Symposium on Workload Characterization, 22-33, 2015
Cited by 25 · 2015
Latte-CC: Latency tolerance aware adaptive cache compression management for energy efficient GPUs
A Arunkumar, SY Lee, V Soundararajan, CJ Wu
2018 IEEE International Symposium on High Performance Computer Architecture …, 2018
Cited by 22 · 2018
ID-Cache: Instruction and memory divergence based cache management for GPUs
A Arunkumar, SY Lee, CJ Wu
2016 IEEE International Symposium on Workload Characterization (IISWC), 1-10, 2016
Cited by 15 · 2016
E-ECC: Low Power Erasure and Error Correction Schemes for Increasing Reliability of Commodity DRAM Systems
HM Chen, A Arunkumar, CJ Wu, T Mudge, C Chakrabarti
Cited by 15 · 2015
Using low cost erasure and error correction schemes to improve reliability of commodity DRAM systems
HM Chen, S Jeloka, A Arunkumar, D Blaauw, CJ Wu, T Mudge, ...
IEEE Transactions on Computers 65 (12), 3766-3779, 2016
Cited by 11 · 2016
Investigation of ensemble features of self-supervised pretrained models for automatic speech recognition
A Arunkumar, VN Sukhadia, S Umesh
arXiv preprint arXiv:2206.05518, 2022
Cited by 8 · 2022
Joint Encoder-Decoder Self-Supervised Pre-training for ASR.
A Arunkumar, S Umesh
Interspeech, 3418-3422, 2022
Cited by 6 · 2022
ReMAP: Reuse and memory access cost aware eviction policy for last level cache management
A Arunkumar, CJ Wu
2014 IEEE 32nd International Conference on Computer Design (ICCD), 110-117, 2014
Cited by 6 · 2014
Estimating correlation for a real-time measure of connectivity
A Arunkumar, A Panday, B Joshi, A Ravindran, HP Zaveri
2012 Annual International Conference of the IEEE Engineering in Medicine and …, 2012
Cited by 5 · 2012
Snoop filter with stored replacement information, method for same, and system including victim exclusive cache and snoop filter shared replacement policies
EC Quinnell, KC Heuer, T Nakra, A Arunkumar
US Patent 10,360,158, 2019
Cited by 2 · 2019
Keyformer: KV Cache Reduction through Key Tokens Selection for Efficient Generative Inference
M Adnan, A Arunkumar, G Jain, PJ Nair, I Soloveychik, P Kamath
arXiv preprint arXiv:2403.09054, 2024
2024
Immersion cooling server system with ai accelerator apparatuses using in-memory compute chiplet devices for transformer workloads
J Balachandran, A Arunkumar, A Ankit, N Kurella, S Bhoja
US Patent App. 18/511,093, 2024
2024
Server system with ai accelerator apparatuses using in-memory compute chiplet devices for transformer workloads
J Balachandran, A Arunkumar, A Ankit, N Kurella, S Bhoja
US Patent App. 18/486,989, 2024
2024
Re-fetching data for L3 cache data evictions into a last-level cache
T Nakra, J Fleischman, GT Hazari, A Arunkumar, WL Walker, GH Loh, ...
US Patent 11,847,062, 2023
2023
Selective speculative prefetch requests for a last-level cache
T Nakra, A Arunkumar, P Moyer, J Fleischman
US Patent App. 17/564,141, 2023
2023