Á. López García-Arias*, Y. Okoshi*, J. Yu*, J. Suzuki, H. Otsuka, K. Kawamura, T. Van Chu, D. Fujiki, and M. Motomura, "WhiteDwarf: A Holistic Co-Design Approach to Ultra-Compact Neural Inference Acceleration," IEEE Access, 2025. (*: Equal Contribution.) [paper]
H. Otsuka*, D. Chijiwa*, Á. López García-Arias*, Y. Okoshi, K. Kawamura, T. Van Chu, D. Fujiki, S. Takeuchi, M. Motomura, "Partially Frozen Random Networks Contain Compact Strong Lottery Tickets," Transactions on Machine Learning Research (TMLR), 2025. (*: Equal Contribution.) [paper][arXiv]
H. Otsuka, Y. Okoshi, Á. López García-Arias, K. Kawamura, T. Van Chu, D. Fujiki, M. Motomura, "Restricted Random Pruning at Initialization for High Compression Range," Transactions on Machine Learning Research (TMLR), 2024. [paper]
J. Suzuki, J. Yu, M. Yasunaga, Á. López García-Arias, Y. Okoshi, S. Kumazawa, K. Ando, K. Kawamura, T. Van Chu, and M. Motomura, "Pianissimo: A Sub-mW Class DNN Accelerator With Progressively Adjustable Bit-Precision," IEEE Access, 2024. [paper]
Á. López García-Arias, Y. Okoshi, M. Hashimoto, M. Motomura, and J. Yu, "Recurrent Residual Networks Contain Stronger Lottery Tickets," IEEE Access, 2023. [paper]
Á. López García-Arias, Y. Okoshi, H. Otsuka, D. Chijiwa, Y. Fujiwara, S. Takeuchi, M. Motomura, "The Trichromatic Strong Lottery Ticket Hypothesis: Neural Compression With Three Primary Supermasks," Workshop on Machine Learning and Compression, Conference on Neural Information Processing Systems (NeurIPS), 2024. (Spotlight) [paper][poster]
H. Otsuka*, D. Chijiwa*, Á. López García-Arias*, Y. Okoshi, K. Kawamura, T. Van Chu, D. Fujiki, S. Takeuchi, M. Motomura, "Partially Frozen Random Networks Contain Compact Strong Lottery Tickets," Workshop on Machine Learning and Compression, Conference on Neural Information Processing Systems (NeurIPS), 2024. (*: Equal Contribution.) [paper][poster]
Y. Okoshi*, Á. López García-Arias*, J. Yu*, J. Suzuki, H. Otsuka, T. Van Chu, K. Kawamura, M. Motomura, "WhiteDwarf: 12.24 TFLOPS/W 40 nm Versatile Neural Inference Engine for Ultra-Compact Execution of CNNs and MLPs Through Triple Unstructured Sparsity Exploitation and Triple Model Compression," Asian Solid-State Circuits Conference (ASSCC), 2024. (*: Equal Contribution.) [paper]
A. Shioda, Á. López García-Arias, H. Otsuka, Y. Ichikawa, Y. Okoshi, K. Kawamura, T. Van Chu, D. Fujiki, and M. Motomura, "Exploiting N:M Sparsity in Quantized-Folded ResNets: Signed Multicoat Supermasks and Iterative Pruning-Quantization," International Symposium on Computing and Networking (CANDAR), 2024. [paper]
H. Otsuka, Y. Okoshi, Á. López García-Arias, K. Kawamura, T. Van Chu, M. Motomura, "Ramanujan Edge-Popup: Finding Strong Lottery Tickets with Ramanujan Graph Properties for Efficient DNN Inference Execution," Workshop on Synthesis And System Integration of Mixed Information Technologies (SASIMI), 2024.
J. Yan, H. Ito, Á. López García-Arias, Y. Okoshi, H. Otsuka, K. Kawamura, T. Van Chu, and M. Motomura, "Multicoated and Folded Graph Neural Networks with Strong Lottery Tickets," Learning on Graphs Conference (LoG), 2023. [paper][arXiv][code]
J. Suzuki, J. Yu, M. Yasunaga, Á. López García-Arias, Y. Okoshi, S. Kumazawa, K. Ando, K. Kawamura, T. Van Chu, and M. Motomura, "Pianissimo: A Sub-mW Class DNN Accelerator with Progressive Bit-by-Bit Datapath Architecture for Adaptive Inference at Edge," IEEE Symposium on VLSI Technology and Circuits (VLSI Technology and Circuits), 2023. [paper]
K. Kawamura, J. Yu, D. Okonogi, S. Jimbo, G. Inoue, A. Hyodo, Á. López García-Arias, K. Ando, B. H. Fukushima-Kimura, R. Yasudo, T. Van Chu, M. Motomura, "Amorphica: 4-Replica 512 Fully Connected Spin 336MHz Metamorphic Annealer with Programmable Optimization Strategy and Compressed-Spin-Transfer Multi-Chip Extension," International Solid-State Circuits Conference (ISSCC), 2023. [paper][press release]
K. Hirose, J. Yu, K. Ando, Y. Okoshi, Á. López García-Arias, J. Suzuki, T. Van Chu, K. Kawamura, and M. Motomura, "Hiddenite: 4K-PE Hidden Network Inference 4D-Tensor Engine Exploiting On-Chip Model Construction Achieving 34.8-to-16.0TOPS/W for CIFAR-100 and ImageNet," International Solid-State Circuits Conference (ISSCC), 2022. [paper][press release]
Y. Okoshi, Á. López García-Arias, K. Hirose, K. Ando, K. Kawamura, T. Van Chu, M. Motomura, and J. Yu, "Strong Lottery Ticket Exploration for Hidden Neural Network Accelerator," IEEE Symposium on Low-Power and High-Speed Chips and Systems (COOL CHIPS), 2022.
Y. Okoshi*, Á. López García-Arias*, K. Hirose, K. Ando, K. Kawamura, T. Van Chu, M. Motomura, and J. Yu*, "Multicoated Supermasks Enhance Hidden Networks," International Conference on Machine Learning (ICML), 2022. (*: Equal Contribution.) [paper][slides][code]
Á. López García-Arias, M. Hashimoto, M. Motomura, and J. Yu, "Hidden-Fold Networks: Random Recurrent Residuals Using Sparse Supermasks," British Machine Vision Conference (BMVC), 2021. [paper&video][arXiv][code]
Á. López García-Arias, J. Yu, and M. Hashimoto, "Low-Cost Reservoir Computing Using Cellular Automata and Random Forests," IEEE International Symposium on Circuits and Systems (ISCAS), 2020. [paper&video]
T. Alonso, M. Ruiz, Á. López García-Arias, G. Sutter, and J. E. López de Vergara, "Submicrosecond Latency Video Compression in a Low-End FPGA-Based System-on-Chip," International Conference on Field-Programmable Logic and Applications (FPL), 2018. [paper]
M. Hashimoto, Á. López García-Arias, J. Yu, "Bridging the Gap Between Reservoirs and Neural Networks," Photonic Neural Networks with Spatiotemporal Dynamics, Springer, 2023. [book][chapter]
J. Suzuki, M. Yasunaga, Á. López García-Arias, Y. Okoshi, S. Kumazawa, K. Ando, K. Kawamura, T. Van Chu, and M. Motomura, "Pianissimo: エッジでの適応的な推論を実現するサブmWクラスDNNアクセラレータ" ("Pianissimo: A Sub-mW Class DNN Accelerator Enabling Adaptive Inference at the Edge"), IEICE Technical Committee on Reconfigurable Systems (RECONF), 2023. (In Japanese.) [paper]
H. Otsuka, Y. Okoshi, Á. López García-Arias, K. Kawamura, T. Van Chu, J. Yu, and M. Motomura, "強い宝くじ仮説に基づく超軽量物体検出ニューラルネットワーク" ("An Ultra-Lightweight Object Detection Neural Network Based on the Strong Lottery Ticket Hypothesis"), IEICE PRMU / IPSJ-CVIM, 2023. (In Japanese.) [paper]
Ph.D. Dissertation: "Improving Deep Learning Efficiency Using Reservoir Computing Inspiration," (Supervisor: Prof. Masato Motomura), Tokyo Institute of Technology, 2024. [link]
Master's Thesis: "Closing the Gap Between Deep Learning and Reservoir Computing Using Recurrent Supermasks," (Supervisor: Prof. Masanori Hashimoto), Osaka University, 2021.
Bachelor's Thesis: "Algoritmo de compresión de baja latencia en FPGA usando SDSoC," ("Low-Latency Compression Algorithm on FPGA Using SDSoC". Supervisor: Prof. Gustavo Daniel Sutter Capristo), Universidad Autónoma de Madrid, 2017. [link]
Á. López García-Arias, "念願の日本上陸" ("A Long-Awaited Landing in Japan"), IEICE Communications Society Magazine, 2015. (In Japanese.) [link]