Modern computing architectures remain many orders of magnitude away from the remarkable efficiency, speed, and intelligence of natural brains. Moreover, traditional computing systems are hitting hard physical limits, such as the memory wall and the end of Dennard scaling (i.e., performance-per-watt gains have slowed dramatically). For these reasons, scientists worldwide are investigating new computing architectures inspired by the brain. Among these, neuromorphic computing is one of the most promising approaches for building energy-efficient hardware for real-time signal processing, with the potential to enable a broad range of edge artificial intelligence tasks. In this approach, brain computation is mimicked at the circuit level, using event-driven, massively parallel spiking neural networks implemented directly in hardware.
In this talk, Federico presents the computing paradigm of spiking neural networks, illustrates practical training algorithms, and describes a family of computing architectures based on ultra-low-power, massively parallel implementations of neuron and synapse circuits. Finally, he showcases prototype devices that meet the strict energy and cost constraints of extreme-edge applications in the Internet of Things and biomedical signal processing.