<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Literature Collections | Rocky's Website</title><link>https://rockywu.netlify.app/literatures/</link><atom:link href="https://rockywu.netlify.app/literatures/index.xml" rel="self" type="application/rss+xml"/><description>Literature Collections</description><generator>Wowchemy (https://wowchemy.com)</generator><language>en-us</language><image><url>https://rockywu.netlify.app/media/icon_hua95c1dcb9f9e1bda3f321fb3da62279a_162184_512x512_fill_lanczos_center_3.png</url><title>Literature Collections</title><link>https://rockywu.netlify.app/literatures/</link></image><item><title>CIM-Based Accelerator</title><link>https://rockywu.netlify.app/literatures/example2/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://rockywu.netlify.app/literatures/example2/</guid><description>&lt;h3 id="2022">2022&lt;/h3>
&lt;p>&lt;strong>[JSSC'22]&lt;/strong> Scalable and Programmable Neural Network Inference Accelerator Based on In-Memory Computing&lt;/p>
&lt;p>&lt;strong>[JSSC'22]&lt;/strong> DIANA: An End-to-End Hybrid Digital and Analog Neural Network SoC for the Edge&lt;/p>
&lt;h3 id="2023">2023&lt;/h3>
&lt;p>&lt;strong>[VLSI'23]&lt;/strong> A General-Purpose Compute-in-Memory Processor Combining CPU and Deep Learning with Elevated CPU Efficiency and Enhanced Data Locality&lt;/p>
&lt;p>&lt;strong>[JSSC'23]&lt;/strong> An Energy-Efficient Computing-in-Memory NN Processor With Set-Associate Block-wise Sparsity and Ping-Pong Weight Update&lt;/p>
&lt;p>&lt;strong>[ISSCC'23]&lt;/strong> 16.7 A 40-310TOPS/W SRAM-Based All-Digital Up to 4b In-Memory Computing Multi-Tiled NN Accelerator in FD-SOI 18nm for Deep-Learning Edge Applications&lt;/p>
&lt;p>&lt;strong>[ISSCC'23]&lt;/strong> 16.4 TensorCIM: A 28nm 3.7nJ/Gather and 8.3TFLOPS/W FP32 Digital-CIM Tensor Processor for MCM-CIM-Based Beyond-NN Acceleration&lt;/p>
&lt;p>&lt;strong>[ISSCC'23]&lt;/strong> 16.1 MulTCIM: A 28nm 2.24μJ/Token Attention-Token-Bit Hybrid Sparse Digital CIM-Based Accelerator for Multimodal Transformers&lt;/p></description></item><item><title>Design Space Exploration and Modeling for CIM</title><link>https://rockywu.netlify.app/literatures/example/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://rockywu.netlify.app/literatures/example/</guid><description>&lt;h3 id="2020">2020&lt;/h3>
&lt;p>&lt;strong>[ISPASS'20]&lt;/strong> An Architecture-Level Energy and Area Estimator for Processing-In-Memory Accelerator Designs&lt;/p>
&lt;h3 id="2021">2021&lt;/h3>
&lt;p>&lt;strong>[TC'21]&lt;/strong> Device-Circuit-Architecture Co-Exploration for Computing-in-Memory Neural Accelerators&lt;/p>
&lt;h3 id="2022">2022&lt;/h3>
&lt;p>&lt;strong>[ICCAD'22]&lt;/strong> Design Space and Memory Technology Co-Exploration for In-Memory Computing Based Machine Learning Accelerators&lt;/p>
&lt;h3 id="2023">2023&lt;/h3>
&lt;p>&lt;strong>[DAC'23]&lt;/strong> XPert: Peripheral Circuit &amp;amp; Neural Architecture Co-search for Area and Energy-efficient Xbar-based Computing&lt;/p>
&lt;p>&lt;strong>[arXiv'23]&lt;/strong> NicePIM: Design Space Exploration for Processing-In-Memory DNN Accelerators with 3D-Stacked-DRAM&lt;/p>
&lt;p>&lt;strong>[ICCAD'23]&lt;/strong> Benchmarking and Modeling of Analog and Digital SRAM In-Memory Computing Architectures&lt;/p>
&lt;p>&lt;strong>[TCAD'23]&lt;/strong> MNSIM 2.0: A Behavior-Level Modeling Tool for Processing-In-Memory Architectures&lt;/p>
&lt;p>&lt;strong>[TCAD'23]&lt;/strong> Efficient Processing of MLPerf Mobile Workloads Using Digital Compute-In-Memory Macros&lt;/p></description></item></channel></rss>