CIM-Based Accelerators

2022

[JSSC'22] Scalable and Programmable Neural Network Inference Accelerator Based on In-Memory Computing

[JSSC'22] DIANA: An End-to-End Hybrid Digital and Analog Neural Network SoC for the Edge

2023

[VLSI'23] A General-Purpose Compute-in-Memory Processor Combining CPU and Deep Learning with Elevated CPU Efficiency and Enhanced Data Locality

[JSSC'23] An Energy-Efficient Computing-in-Memory NN Processor With Set-Associate Block-wise Sparsity and Ping-Pong Weight Update

[ISSCC'23] 16.7 A 40-310TOPS/W SRAM-Based All-Digital Up to 4b In-Memory Computing Multi-Tiled NN Accelerator in FD-SOI 18nm for Deep-Learning Edge Applications

[ISSCC'23] 16.4 TensorCIM: A 28nm 3.7nJ/Gather and 8.3TFLOPS/W FP32 Digital-CIM Tensor Processor for MCM-CIM-Based Beyond-NN Acceleration

[ISSCC'23] 16.1 MulTCIM: A 28nm 2.24μJ/Token Attention-Token-Bit Hybrid Sparse Digital CIM-Based Accelerator for Multimodal Transformers

Wu Yongkun
PhD Student in Electronic and Computer Engineering