Researchers at GW have developed a novel, cost-effective solution for accelerating the training of large-scale neural networks in the field of Artificial Intelligence. The solution can be implemented across a variety of hardware platforms used to accelerate machine learning training. It includes a computer architecture that efficiently computes weight matrix updates for a neuromorphic network, reducing the time, area, energy, and memory needed to operate a neuromorphic hardware system.
The disclosed invention can be implemented as an apparatus, a device, a system, or a method. It can include the following aspects: (i) a receiving module configured to receive input vectors associated with backpropagation-based learning in a layer of a deep neural network; (ii) a computing module configured to compute the errors associated with each layer of the deep neural network with respect to the weight matrices applied to the input vectors. In an embodiment, the disclosed invention uses low-rank approximations of the stochastic gradient descent update to reduce the area and memory needed to operate a neuromorphic hardware system.
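To illustrate the low-rank idea in general terms (this is a minimal NumPy sketch, not the disclosed architecture): a backpropagation weight update for one layer is the outer product of the layer's error vectors and input vectors, so a minibatch update has rank at most the batch size. Keeping only the top singular components of that update stores two thin factors instead of a dense matrix. All names and shapes below are hypothetical.

```python
import numpy as np

def low_rank_update(delta, x, lr=0.1, rank=2):
    """Approximate a minibatch SGD weight update with a truncated SVD.

    delta : error vectors for the layer, shape (out_dim, batch)
    x     : input vectors for the layer,  shape (in_dim, batch)
    Returns thin factors A (out_dim, rank) and B (rank, in_dim)
    such that A @ B approximates the full update -lr * delta @ x.T.
    """
    full = -lr * delta @ x.T                        # exact minibatch update
    U, s, Vt = np.linalg.svd(full, full_matrices=False)
    # Keep only the top-`rank` singular components: two thin factors
    # replace a dense out_dim x in_dim update matrix.
    return U[:, :rank] * s[:rank], Vt[:rank, :]

rng = np.random.default_rng(0)
delta = rng.standard_normal((8, 4))   # hypothetical per-sample errors
x = rng.standard_normal((16, 4))      # hypothetical per-sample inputs
A, B = low_rank_update(delta, x, rank=2)
approx = A @ B                        # low-rank reconstruction of the update
```

Storing `A` and `B` requires `rank * (out_dim + in_dim)` values rather than `out_dim * in_dim`, which is the kind of memory and area saving the low-rank approach targets.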
Fig. 1 – Aspects of the disclosed invention
Applications:
Advantages: