This neuromorphic architecture model uses memristive nanofibers to simulate neural networks. Artificial neural networks are computer systems inspired by the biological neural networks of the brain, and they are among the most powerful tools in computer science and artificial intelligence, with common applications in speech recognition, object tracking, data analysis, fraud detection, and more. Most current neural networks are implemented in software running on conventional central processing units (CPUs) and graphics processors. While these networks can simulate millions of neurons, they are highly inefficient in terms of connectivity, power consumption, and neural density, and they are slow. Any significant improvement over current paradigms of neural networks and neuromorphic hardware would have immediate relevance to the global marketplace and may even aid in understanding the human brain.
The field of neuromorphic engineering aims to create dedicated hardware for artificial neural networks, with neurons that run in real time and independently of one another, free of the serial processing constraints of conventional computers. The synapses between neurons, however, are the most complicated component of neuromorphic hardware, since each synapse requires multiple transistors to implement. The invention of the memristor revolutionized the field. This two-terminal nanoscale device modifies its resistance as current passes through it in one direction or the other, enabling efficient modeling of adjustable synapses between neurons. Memristor-based designs still face several challenges, however, including poor scaling due to the large amount of space required for the wiring between neurons. In addition, these architectures typically connect every neuron to every other neuron, in contrast to the sparse connections of biological neural networks. Sparse connectivity in artificial neural networks could enable more efficient scaling while preserving computational capacity.
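The memristor behavior described above can be illustrated with a minimal simulation. The sketch below is a hypothetical, simplified model (the class name, drift rule, and parameter values are assumptions, not the device physics of the actual invention): conductance drifts up or down depending on the direction of the current passed through the device, within physical bounds.

```python
class MemristiveSynapse:
    """Toy model of a memristive synapse: conductance tracks charge flow."""

    def __init__(self, g_min=1e-6, g_max=1e-3, g_init=1e-4, rate=1e-2):
        self.g_min = g_min      # lower conductance bound (siemens)
        self.g_max = g_max      # upper conductance bound (siemens)
        self.g = g_init         # current conductance (1/resistance)
        self.rate = rate        # assumed linear drift coefficient

    def apply_voltage(self, v, dt):
        """Apply voltage v for dt seconds. Positive current strengthens
        (potentiates) the synapse; negative current weakens (depresses) it."""
        i = self.g * v                        # Ohm's law: instantaneous current
        self.g += self.rate * i * dt          # conductance follows charge flow
        self.g = min(max(self.g, self.g_min), self.g_max)  # clamp to bounds
        return i
```

In this toy model, repeated positive pulses raise the conductance toward `g_max` (a stronger synapse) and negative pulses lower it toward `g_min`, which is the sense in which a single two-terminal device can replace a multi-transistor synapse circuit.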
Researchers at the University of Florida have developed a sparse, scalable neural network using memristive nanofibers. Randomized memristive wiring between artificial neurons improves connection efficiency and scalability.
Scalable neuromorphic hardware architecture using core-shell memristive nanofibers for randomly connecting neural nodes
This artificial neural network architecture consists of a randomly aligned network of nanofibers, each with a conductive core and a memristive shell. The nanofibers serve as the connections, or artificial synapses, between artificial silicon neurons. Because the contacts between the nanofibers and the neural nodes are random, the network is sparsely connected, which increases performance and efficiency. Each nanofiber also includes one or more electrodes that serve as conductive attachment points between the memristive nanofiber and an input or output terminal of an artificial neuron. The conductive core of the nanofibers transmits signals between the neurons, while the memristive shell forms adjustable connections between the neurons and the fiber core. Controlling the amount and direction of current passing through the memristors regulates the strength of the synapses.
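The random fiber-to-neuron wiring can be sketched numerically. The fragment below is an illustrative assumption, not the patented layout: fibers are modeled as shared wires that each touch a given neuron with some small probability, and two neurons are effectively connected only if they contact a common fiber, yielding a sparse adjacency structure.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 64
n_fibers = 32
p_contact = 0.05  # assumed probability that a fiber touches a given neuron

# Random fiber-to-neuron contact matrix: entry (f, n) is True when
# fiber f makes an electrode contact with neuron n.
contacts = (rng.random((n_fibers, n_neurons)) < p_contact).astype(int)

# Two neurons are connected if they share at least one fiber; the
# memristive shell at each contact would set the synaptic strength.
adjacency = (contacts.T @ contacts) > 0
np.fill_diagonal(adjacency, False)   # ignore self-connections

# Fraction of possible neuron pairs actually wired together.
density = adjacency.sum() / (n_neurons * (n_neurons - 1))
```

With these assumed parameters the resulting connection density stays well below that of a fully connected network, which is the scaling advantage the sparse nanofiber wiring is meant to provide.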