In the era of machine learning, hardware design, particularly compute-in-memory (CIM) systems, has been evolving rapidly to meet escalating demands for computational efficiency and energy conservation. Existing SRAM CIM approaches typically operate in either the fully digital or the analog domain, forcing a trade-off between accuracy and energy efficiency. This limits their use for floating-point (FP) operations, which are crucial in mission-critical applications such as autonomous driving and defense drones, where even minor inaccuracies can have significant consequences.
Researchers at George Washington University have developed a novel approach to these challenges in FP DNN acceleration. They proposed an efficient SRAM CIM macro that accelerates FP DNNs by harnessing the inherent efficiency of FP mantissa addition: mantissa multiplication is split into two components, an accuracy-oriented mantissa sub-ADD and an efficiency-oriented mantissa sub-MUL. This design preserves the accuracy that FP operations require in DNN models while substantially improving energy efficiency. Experimental results show an 8.7× to 9.3× (7.3× to 8.2×) improvement in energy efficiency during inference (training) compared with conventional digital FP baselines, offering a more efficient, accurate, and energy-conscious pathway for FP DNN acceleration.
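One plausible reading of the sub-ADD/sub-MUL split can be illustrated numerically. Writing each mantissa as 1 + f with 0 ≤ f < 1, the mantissa product expands as (1 + fa)(1 + fb) = 1 + (fa + fb) + fa·fb: the dominant (fa + fb) term is a cheap, exact addition (the accuracy-oriented sub-ADD), while the small cross term fa·fb can tolerate reduced precision (the efficiency-oriented sub-MUL). The sketch below is only an assumption-laden model of that idea, not the paper's actual circuit; the function names, the 8-bit quantization choice, and the positive-input restriction are all illustrative.

```python
import math

def split_mantissa(x):
    """Decompose a positive float as x = 2**e * (1 + f), with 0 <= f < 1."""
    m, e = math.frexp(x)        # x = m * 2**e, where 0.5 <= m < 1
    return e - 1, 2 * m - 1     # rescale so the mantissa is 1 + f

def fp_mul_split(a, b, sub_mul_bits=8):
    """Illustrative multiply via the decomposition
    (1 + fa)(1 + fb) = 1 + (fa + fb) + fa*fb.
    The sub-ADD term (fa + fb) is kept exact; the sub-MUL term fa*fb
    is quantized to sub_mul_bits fractional bits to model a cheaper
    low-precision multiplier. Positive inputs only, for simplicity."""
    ea, fa = split_mantissa(a)
    eb, fb = split_mantissa(b)
    sub_add = fa + fb                          # accuracy-oriented exact addition
    scale = 1 << sub_mul_bits
    sub_mul = round(fa * fb * scale) / scale   # efficiency-oriented low-precision product
    return (1 + sub_add + sub_mul) * 2.0 ** (ea + eb)
```

Because fa and fb are each below 1, the quantized fa·fb term contributes only a bounded error, e.g. `fp_mul_split(1.7, 2.3)` stays within a fraction of a percent of the exact product 3.91.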
Critical Circuit Design
Advantages:
Applications: