Artificial neural networks have countless applications owing to their capacity for adaptive learning, recognition, and classification. When implemented in software, neural network algorithms typically use a double-precision floating-point format, which translates into large area and power requirements when mapped to hardware. Other approaches can determine the best precision for different functional stages of a network, but these are usually static decisions that cannot change in response to a given operational situation. Furthermore, efficient hardware implementation of such algorithms is challenging due to their intensive computation and power consumption. There is therefore a need for systems that can dynamically change precision levels in order to relieve the computation and memory burden and reduce power consumption in artificial neural network systems.
Researchers at Arizona State University have created a method for dynamically adapting the precision level in on-chip learning applications, allowing the switch from low to high precision to be made in real time. The system operates at low precision and uses minimal power until an anomaly or abnormal behavior is observed, at which point high precision is activated. While learning itself may rely on high precision, many inference situations can operate at a lower precision level until an anomaly or unexpected behavior occurs and a higher level of precision is needed. By modifying hardware to use both static and dynamic precision adaptation, this innovation substantially reduces power consumption and increases the efficiency of learning and classification in artificial neural networks.
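The listing does not disclose implementation details, but the idea can be illustrated in software. The sketch below is a minimal, hypothetical example (the network, the float16/float64 precision pair, and the confidence-based anomaly proxy are all assumptions, not the inventors' method): inference runs in a low-precision format, and only when the output looks anomalous is the same input re-evaluated at high precision.

```python
import numpy as np

# Hypothetical weights for a tiny feed-forward classifier (illustration only).
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((16, 8)), np.zeros(8)
W2, b2 = rng.standard_normal((8, 4)), np.zeros(4)

def forward(x, dtype):
    """Run inference with weights and activations cast to the given dtype."""
    h = np.maximum(x.astype(dtype) @ W1.astype(dtype) + b1.astype(dtype), 0)
    logits = h @ W2.astype(dtype) + b2.astype(dtype)
    e = np.exp(logits - logits.max())
    return e / e.sum()

def classify(x, confidence_threshold=0.6):
    """Low-precision pass first; escalate to high precision on anomalous output.

    'Anomaly' is approximated here by low classification confidence; a real
    system could use any abnormal-behavior detector.
    """
    probs = forward(x, np.float16)           # cheap, low-precision inference
    if probs.max() < confidence_threshold:   # anomaly proxy: uncertain result
        probs = forward(x, np.float64)       # re-run the same input at high precision
    return int(np.argmax(probs))

print(classify(rng.standard_normal(16)))
```

In a hardware setting, the analogous design choice is to keep narrow datapaths and memories active by default and gate in the wider, high-precision datapath only for the rare anomalous inputs, which is where the power savings come from.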
Potential Applications
Benefits and Advantages
For more information about the inventor(s) and their research, please see Dr. Jae-sun Seo's directory webpage.