Hardware-Noise-Aware Training for Improved Accuracy of In-Memory-Computing-Based Deep Neural Networks

Background
Deep neural networks (DNNs) have achieved remarkable success in large-scale recognition tasks, but they impose heavy computation and memory demands. To address the memory bottleneck of digital DNN hardware accelerators, in-memory computing (IMC) designs have been proposed that perform analog DNN computations inside the memory array. Recent IMC designs demonstrate high energy efficiency, but this comes at the expense of noise margin, which can degrade DNN inference accuracy.
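To make the noise-margin issue concrete, the sketch below models one IMC crossbar column as an analog partial sum perturbed by noise and quantized by a low-precision ADC. This is an illustrative assumption, not the circuit model from the source; the function name, noise model (additive Gaussian), and parameter values are hypothetical.

```python
# Minimal sketch of a noisy IMC column MAC (assumed additive Gaussian noise model).
import numpy as np

def imc_column_mac(inputs, weights, rows=256, adc_bits=5, noise_sigma=0.05):
    """Analog sum over `rows` activated cells, perturbed by circuit noise,
    then quantized by a low-precision ADC (all values here are illustrative)."""
    ideal = np.dot(inputs[:rows], weights[:rows])               # ideal partial sum
    noisy = ideal + np.random.normal(0.0, noise_sigma * rows)   # noise scales with parallelism
    levels = 2 ** adc_bits
    step = 2.0 * rows / (levels - 1)                            # ADC covers [-rows, rows]
    return np.clip(np.round(noisy / step) * step, -rows, rows)

# Example: binary inputs and bipolar weights, 256 rows activated together
x = np.random.choice([0, 1], size=256)
w = np.random.choice([-1, 1], size=256)
print(imc_column_mac(x, w))
```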


Invention Description
Researchers at Arizona State University have developed a novel hardware-noise-aware DNN training scheme that largely recovers the accuracy loss of highly parallel (e.g., 256 rows activated in parallel) IMC hardware. Performance results were obtained using noise-aware training and inference with several DNNs, including ResNet-18, AlexNet, and VGG, with binary, 2-bit, and 4-bit activation/weight precision on the CIFAR-10 dataset. Furthermore, with noise data obtained from five different chips, the method's effectiveness was evaluated using individual chips' noise data as well as the ensemble noise from multiple chips. Across these DNNs and IMC chip measurements, the proposed hardware-noise-aware DNN training consistently improves DNN inference accuracy on actual IMC hardware, by up to 17% on the CIFAR-10 dataset. A minimal sketch of this style of noise injection during training follows below.
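The PyTorch sketch below illustrates the general idea of noise-aware training: injecting hardware-like noise into a layer's outputs during the training forward pass so the learned weights tolerate the hardware's noise distribution. The additive Gaussian model, the `NoisyLinear` class, and the sigma value are assumptions for illustration; the source does not specify the exact noise model or injection point.

```python
# Hedged sketch of noise-aware training: additive Gaussian noise on layer
# outputs during training, with sigma assumed to be fitted to per-chip or
# ensemble noise measurements (names and values are illustrative).
import torch
import torch.nn as nn

class NoisyLinear(nn.Linear):
    """Linear layer that perturbs its output with modeled IMC noise while training."""
    def __init__(self, in_features, out_features, noise_sigma=0.1):
        super().__init__(in_features, out_features)
        self.noise_sigma = noise_sigma  # assumed to come from chip noise characterization

    def forward(self, x):
        out = super().forward(x)
        if self.training and self.noise_sigma > 0:
            out = out + torch.randn_like(out) * self.noise_sigma
        return out

# Usage: substitute such noisy layers (or an analogous noisy conv) into a DNN
# such as ResNet-18, AlexNet, or VGG, train on CIFAR-10 as usual, then evaluate
# inference accuracy under the measured hardware noise.
layer = NoisyLinear(256, 128, noise_sigma=0.1)
y = layer(torch.randn(4, 256))
```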


Potential Applications
•    Deep neural networks
•    In-memory computing


Research Homepage of Professor Jae-sun Seo

Patent Information: