Weizhe Hua
Google DeepMind
Verified email at google.com
Title
Cited by
Year
Reverse engineering convolutional neural networks through side-channel information leaks
W Hua, Z Zhang, GE Suh
Proceedings of the 55th Annual Design Automation Conference, 1-6, 2018
Cited by 267 · 2018
Channel gating neural networks
W Hua, Y Zhou, CM De Sa, Z Zhang, GE Suh
Advances in Neural Information Processing Systems 32, 2019
Cited by 197 · 2019
Transformer quality in linear time
W Hua, Z Dai, H Liu, Q Le
International conference on machine learning, 9099-9117, 2022
Cited by 155 · 2022
Sinan: ML-based and QoS-aware resource management for cloud microservices
Y Zhang, W Hua, Z Zhou, GE Suh, C Delimitrou
Proceedings of the 26th ACM international conference on architectural …, 2021
Cited by 150 · 2021
Boosting the performance of CNN accelerators with dynamic fine-grained channel gating
W Hua, Y Zhou, C De Sa, Z Zhang, GE Suh
Proceedings of the 52nd Annual IEEE/ACM international symposium on …, 2019
Cited by 52 · 2019
GuardNN: Secure accelerator architecture for privacy-preserving deep learning
W Hua, M Umar, Z Zhang, GE Suh
Proceedings of the 59th ACM/IEEE Design Automation Conference, 349-354, 2022
Cited by 34 · 2022
Secure information flow verification with mutable dependent types
A Ferraiuolo, W Hua, AC Myers, GE Suh
Proceedings of the 54th Annual Design Automation Conference 2017, 1-6, 2017
Cited by 28 · 2017
Precision gating: Improving neural network efficiency with dynamic dual-precision activations
Y Zhang, R Zhao, W Hua, N Xu, GE Suh, Z Zhang
arXiv preprint arXiv:2002.07136, 2020
Cited by 27 · 2020
MGX: Near-zero overhead memory protection for data-intensive accelerators
W Hua, M Umar, Z Zhang, GE Suh
Proceedings of the 49th Annual International Symposium on Computer …, 2022
Cited by 15* · 2022
BulletTrain: Accelerating robust neural network training via boundary example mining
W Hua, Y Zhang, C Guo, Z Zhang, GE Suh
Advances in Neural Information Processing Systems 34, 18527-18538, 2021
Cited by 15 · 2021
Structured pruning is all you need for pruning CNNs at initialization
Y Cai, W Hua, H Chen, GE Suh, C De Sa, Z Zhang
arXiv preprint arXiv:2203.02549, 2022
Cited by 11 · 2022
Analysis and design of delay lines for dynamic voltage scaling applications
RN Tadros, W Hua, M Gibiluka, MT Moreira, NLV Calazans, PA Beerel
2016 22nd IEEE International Symposium on Asynchronous Circuits and Systems …, 2016
Cited by 11 · 2016
Low area, low power, robust, highly sensitive error detecting latch for resilient architectures
W Hua, RN Tadros, PA Beerel
Proceedings of the 2016 International Symposium on Low Power Electronics and …, 2016
Cited by 10 · 2016
A low-power low-area error-detecting latch for resilient architectures in 28-nm FDSOI
RN Tadros, W Hua, MT Moreira, NLV Calazans, PA Beerel
IEEE Transactions on Circuits and Systems II: Express Briefs 63 (9), 858-862, 2016
Cited by 9 · 2016
Reverse-engineering CNN models using side-channel attacks
W Hua, Z Zhang, GE Suh
IEEE Design & Test 39 (4), 15-22, 2022
Cited by 5 · 2022
2 ps resolution, fine‐grained delay element in 28 nm FDSOI
W Hua, RN Tadros, P Beerel
Electronics Letters 51 (23), 1848-1850, 2015
Cited by 5 · 2015
SoftVN: Efficient memory protection via software-provided version numbers
M Umar, W Hua, Z Zhang, GE Suh
Proceedings of the 49th Annual International Symposium on Computer …, 2022
Cited by 3 · 2022
Information Flow Control in Machine Learning through Modular Model Architecture
T Tiwari, S Gururangan, C Guo, W Hua, S Kariyappa, U Gupta, W Xiong, ...
arXiv preprint arXiv:2306.03235, 2023
Cited by 2 · 2023
Algorithm-Accelerator Co-design for High-Performance and Secure Deep Learning
W Hua
Cornell University, 2022
2022
Structured Pruning of CNNs at Initialization
Y Cai, W Hua, H Chen, GE Suh, C De Sa, Z Zhang
Articles 1–20