Most Recent
Dynamic Neural Fields
Lava
Dynamic Neural Fields (DNF) are neural attractor networks that generate stabilized activity patterns in recurrently connected populations of neurons. These activity patterns form the basis of neural representations, decision making, working memory, and learning….
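The bump-forming dynamics described above can be sketched in a few lines of plain NumPy (an illustration of the concept, not Lava's API): a field obeying tau·du/dt = −u + h + stimulus + W·f(u) with a "Mexican hat" kernel W (local excitation, surround inhibition) settles a localized input into a stable activity peak.

```python
import numpy as np

N = 100
x = np.arange(N)
h = -5.0                                   # resting level, below threshold

d = np.abs(x[:, None] - x[None, :])
d = np.minimum(d, N - d)                   # circular distance on the field
W = 15.0 * np.exp(-d**2 / (2 * 3.0**2)) - 7.0 * np.exp(-d**2 / (2 * 8.0**2))

f = lambda u: 1.0 / (1.0 + np.exp(-u))     # sigmoid output nonlinearity

stimulus = 8.0 * np.exp(-(x - 50)**2 / (2 * 3.0**2))   # input bump at 50
u = np.full(N, h)
dt, tau = 0.1, 1.0
for _ in range(300):
    u = u + (dt / tau) * (-u + h + stimulus + W @ f(u))
# u now peaks at the stimulus location; far positions stay sub-threshold
```

All parameter values here are illustrative; the point is that the recurrent kernel, not the input alone, stabilizes the bump.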
VisEvent
Xiao Wang et al.
A large-scale benchmark dataset for reliable object tracking that fuses RGB and event cameras. The dataset consists of 820 video pairs captured under low-illumination, high-speed, and background-clutter scenarios, and is divided into training and testing subsets of 500 and 320 videos, respectively….
State Space Models for Event Cameras
uzh-rpg
Introduces state-space models (SSMs) with learnable timescale parameters to event-based vision. This design adapts to varying input frequencies without retraining the network at each frequency….
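The frequency-adaptation idea can be sketched in plain NumPy (a conceptual sketch, not the repository's code): a continuous-time linear state-space model x' = Ax + Bu, y = Cx is discretized with zero-order hold at whatever step size the event stream dictates. The learned parameters (A, B, C) stay fixed; only the discretization depends on dt, so the same model runs at different rates.

```python
import numpy as np

A = np.array([-1.0, -0.5])           # diagonal of a stable continuous A
B = np.array([1.0, 1.0])
C = np.array([1.0, -1.0])

def discretize(dt):
    Ad = np.exp(A * dt)              # exact exp(A*dt) for diagonal A
    Bd = (Ad - 1.0) / A * B          # zero-order-hold input matrix
    return Ad, Bd

def run(dt, T=4.0, u=1.0):
    Ad, Bd = discretize(dt)
    x = np.zeros(2)
    for _ in range(round(T / dt)):
        x = Ad * x + Bd * u          # elementwise: A is diagonal
    return float(C @ x)

# The same (A, B, C) stepped at 1 kHz and at 100 Hz gives the same
# response to a constant input.
y_fast, y_slow = run(0.001), run(0.01)
```

Zero-order hold is exact for piecewise-constant inputs, which is why the two step sizes agree here; the matrices and time constant are illustrative.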
NIR – Neuromorphic Intermediate Representation
Institute of Neuromorphic Engineering
NIR is a set of computational primitives, shared across different neuromorphic frameworks and technology stacks. The goal of NIR is to decouple the evolution of neuromorphic hardware and software, ultimately increasing the interoperability between platforms and improving accessibility to neuromorphic technologies….
NIRTorch
Institute of Neuromorphic Engineering
PyTorch helpers for the Neuromorphic Intermediate Representation (NIR). This is a no-frills Python package that enables torch-based libraries to translate to NIR….
YARP integration for event-cameras and other neuromorphic sensors
Robotology
Libraries that handle neuromorphic sensors installed on the iCub, such as the dynamic vision sensor, can be found here, along with algorithms to process the event-based data. Examples include optical flow, corner detection, and ball detection. Demo applications for the iCub robot, with tutorials for running them, include saccading and attention, gaze following a ball, and vergence control….
NatSGD
NatSGD: A Dataset with Speech, Gestures, and Demonstrations for Robot Learning in Natural Human-Robot Interaction
Recent advancements in multimodal Human-Robot Interaction (HRI) datasets have highlighted the fusion of speech and gesture, expanding robots’ capabilities to absorb explicit and implicit HRI insights. However, existing speech-gesture HRI datasets often focus on…
Tonic
Institute of Neuromorphic Engineering
A tool to facilitate the download, manipulation and loading of event-based/spike-based data. It's like PyTorch Vision but for neuromorphic data!…
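The kind of transform such a library provides can be sketched in plain NumPy (this is illustrative, not Tonic's actual API): binning a stream of (t, x, y, polarity) events into dense frames over fixed time windows so that standard vision pipelines can consume them.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
events = np.zeros(n, dtype=[("t", "i8"), ("x", "i4"), ("y", "i4"), ("p", "i1")])
events["t"] = np.sort(rng.integers(0, 100_000, n))   # microsecond timestamps
events["x"] = rng.integers(0, 32, n)
events["y"] = rng.integers(0, 32, n)
events["p"] = rng.integers(0, 2, n)                  # polarity: 0 = OFF, 1 = ON

def to_frames(ev, sensor_size=(32, 32, 2), time_window=10_000):
    t = ev["t"] - ev["t"][0]
    n_bins = int(t[-1] // time_window) + 1
    frames = np.zeros((n_bins, sensor_size[2], sensor_size[1], sensor_size[0]),
                      dtype=np.int32)
    # unbuffered scatter-add: repeated (bin, p, y, x) indices all count
    np.add.at(frames, (t // time_window, ev["p"], ev["y"], ev["x"]), 1)
    return frames

frames = to_frames(events)   # shape: (bins, polarity, height, width)
```

Every event lands in exactly one frame, so the frame tensor conserves the total event count.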
Sinabs
Synsense
A Python library for the development and implementation of Spiking Convolutional Neural Networks (SCNNs). The library implements several layers that are spiking equivalents of CNN layers. In addition, it supports conveniently importing CNN models implemented in torch to test their spiking equivalent implementations….
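The principle behind testing a CNN's "spiking equivalent" can be sketched in plain Python (a sketch of the conversion idea, not the sinabs API): a non-leaky integrate-and-fire neuron driven by a constant input fires at a rate proportional to ReLU(input), so spike counts over a time window approximate the ReLU activations of the original CNN layer.

```python
def iaf_rate(drive, n_steps=1000, threshold=1.0):
    """Firing rate of a non-leaky integrate-and-fire neuron under constant drive."""
    v, spikes = 0.0, 0
    for _ in range(n_steps):
        v += drive                  # integrate the input current
        if v >= threshold:          # fire, then subtract the threshold
            spikes += 1
            v -= threshold
    return spikes / n_steps

# For drives in [0, 1] the firing rate tracks ReLU(drive); a negative drive
# never reaches threshold, matching ReLU's zero branch. Above 1 the rate
# saturates at one spike per step.
```

This rate-code correspondence is what lets a converted network's spike counts be compared against the original activations.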
Retina : Low-Power Eye Tracking with Event Camera and Spiking Hardware
Pietro Bonazzi and others
A neuromorphic methodology for eye tracking, harnessing pure event data captured by a Dynamic Vision Sensor (DVS) camera. The framework integrates a directly trained Spiking Neural Network (SNN) regression model and leverages a state-of-the-art low-power edge neuromorphic processor, Speck, collectively aiming to advance the precision and efficiency of eye-tracking systems….
Python Tutorial for Spiking Neural Network
Shikhar Gupta
This is a Python implementation of a hardware-efficient spiking neural network. It includes modified learning and prediction rules that could be realised on hardware in an energy-efficient way. The aim is to develop a network that could be used for on-chip learning and prediction….
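A minimal pair-based STDP rule of the kind such tutorials implement can be written in a few lines (illustrative constants, not the tutorial's exact rule): potentiate when the presynaptic spike precedes the postsynaptic spike, depress otherwise, with magnitude decaying exponentially in the spike-time difference.

```python
import math

def stdp_dw(t_pre, t_post, a_plus=0.10, a_minus=0.12, tau=20.0):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre              # positive means pre fired first
    if dt > 0:
        return a_plus * math.exp(-dt / tau)    # causal pair: potentiation
    return -a_minus * math.exp(dt / tau)       # anti-causal: depression
```

Causal pairings strengthen the synapse, anti-causal pairings weaken it, and the effect fades as the spikes move further apart in time.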
Microsaccade-inspired Event Camera for Robotics
Botao He and others
A new event-based vision system that can acquire more environmental information than traditional event cameras. It maintains a highly informative output while preserving the advantages of event cameras, such as HDR and high temporal resolution….
Neuromorphic FPGA
TENNLab
A simple, minimalist, but highly scalable Field-Programmable Gate Array implementation of neuromorphic computing defined by University of Tennessee, Knoxville (UTK) TENNLab research….
Spike-driven Transformer V2
Man Yao et al.
A general Transformer-based SNN architecture with four goals: (1) low power: it supports the spike-driven paradigm, in which the network performs only sparse addition; (2) versatility: it handles various vision tasks; (3) high performance: it shows advantages over CNN-based SNNs; (4) meta-architecture: it aims to inspire next-generation Transformer-based neuromorphic chip designs….
Spiking Oculomotor Network for Robotic Head Control
Ioannis Polykretis and others
This package is the Python and ROS implementation of a spiking neural network on Intel's Loihi neuromorphic processor that mimics the oculomotor system to control a biomimetic robotic head….
Spiking Neural Network for Mapless Navigation
Guangzhi Tang and others
A PyTorch implementation of the Spiking Deep Deterministic Policy Gradient (SDDPG) framework. The hybrid framework trains a spiking neural network (SNN) for energy-efficient mapless navigation on Intel's Loihi neuromorphic processor….
pyNAVIS: an open-source cross-platform Neuromorphic Auditory VISualizer
Juan Pedro Dominguez-Morales
An open-source cross-platform Python module for analyzing and processing spiking information obtained from neuromorphic auditory sensors. It is primarily intended for use with a NAS, but can work with any other cochlea sensor….
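The basic analysis such a tool performs can be sketched in NumPy (a conceptual sketch, not the pyNAVIS API): auditory address-events arrive as (timestamp, channel) pairs, and histogramming them per cochlea channel and time bin yields a sonogram-like activity map.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
timestamps = np.sort(rng.integers(0, 1_000_000, n))   # microseconds
channels = rng.integers(0, 64, n)                     # 64 cochlea channels

def sonogram(timestamps, channels, n_channels=64, bin_us=100_000):
    n_bins = int(timestamps[-1] // bin_us) + 1
    grid = np.zeros((n_channels, n_bins), dtype=np.int64)
    # scatter-add each event into its (channel, time-bin) cell
    np.add.at(grid, (channels, timestamps // bin_us), 1)
    return grid

grid = sonogram(timestamps, channels)   # rows: channels, columns: time bins
```

Each spike falls into exactly one cell, so the grid conserves the total spike count.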
Xylo Audio HDK
Synsense
A low-power digital SNN inference chip, including a dedicated audio interface. The Xylo-Audio HDK includes an on-board microphone and direct analog audio input for audio processing applications….
Dynamic Audio Sensor
iniLabs
An asynchronous event-based silicon cochlea. The board takes stereo audio inputs; the custom chip asynchronously outputs a stream of address-events representing activity in different frequency ranges. As such it is a silicon model of the cochlea, the auditory inner ear….
Artificial Robot Skin
Institute for Cognitive Systems, Technical University of Munich
A multimodal tactile-sensing module for humanoid robots. By integrating various sensing technologies, this module enables robots to perceive complex tactile information such as force, pressure, temperature, and texture….
Event-Driven Visual-Tactile Sensing and Learning for Robots
Tasbolat Taunyazov
NeuTouch is a neuromorphic fingertip tactile sensor that scales well with the number of taxels. The proposed Visual-Tactile Spiking Neural Network (VT-SNN) also enables fast perception when coupled with event sensors. The proposed visual-tactile system (using the NeuTouch and Prophesee event camera) is evaluated on two robot tasks: container classification and rotational slip detection….
Skin-Inspired Flexible and Stretchable Electrospun Carbon Nanofiber Sensors for Neuromorphic Sensing
Debarun Sengupta and others
An approach entailing carbon nanofiber–polydimethylsiloxane composite-based piezoresistive sensors, coupled with spiking neural networks, to mimic skin-like sensing….
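One generic way an analog piezoresistive signal can be turned into spikes for an SNN is send-on-delta encoding (a sketch of the general concept; the paper's exact scheme may differ): emit an ON or OFF event whenever the signal moves a fixed delta away from its value at the last event, the same contrast-style encoding event cameras use for brightness.

```python
import numpy as np

def send_on_delta(signal, delta=0.1):
    """Convert an analog trace into (sample_index, polarity) events."""
    events, ref = [], signal[0]
    for i in range(1, len(signal)):
        while signal[i] - ref >= delta:
            ref += delta
            events.append((i, +1))   # ON: signal rose by delta
        while ref - signal[i] >= delta:
            ref -= delta
            events.append((i, -1))   # OFF: signal fell by delta
    return events

t = np.linspace(0.0, 1.0, 200)
pressure = np.sin(2 * np.pi * t)     # synthetic pressure trace
events = send_on_delta(pressure)
```

Because the trace returns to its starting value, ON and OFF counts nearly balance; a rising-only trace would emit only ON events.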
Speck smart vision sensor
Synsense
Speck is a neuromorphic vision system-on-chip, combining an event-based vision sensor with spiking CNN cores for inference on vision tasks. The Speck HDK includes an interchangeable lens, and full resources for developing neuromorphic vision applications….
Metavision and the IMX636ES Sensor
Sony and Prophesee
Metavision Sensing and Software offer all the resources needed to work with event-driven cameras. The IMX636ES is a new event-driven sensor created through a collaboration between Sony and Prophesee. The Metavision software offers 95 algorithms, 67 code samples, and 11 ready-to-use applications for this new generation of cameras….
SAMNA
Synsense
Samna is the developer interface to the SynSense toolchain and run-time environment for interacting with all SynSense devices….
ADDER-codec-rs
Andrew C. Freeman
A unified framework for event-based video: an encoder/transcoder/decoder for ADΔER (Address, Decimation, Δt Event Representation) video streams. Includes a transcoder for casting framed video into an ADΔER representation in a manner that preserves the temporal synchronicity of the source while enabling many-frame intensity averaging on a per-pixel basis and extremely high dynamic range….
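A much-simplified single-pixel sketch of the idea (my reading of the representation, not the codec's actual format): the pixel integrates incoming intensity and emits an event carrying (D, Δt) once 2^D intensity units have accumulated, so intensity can be recovered as 2^D / Δt, and integrating across many frames raises the effective dynamic range.

```python
def adder_events(samples, D=7, dt_per_sample=1):
    """Single-pixel integrate-and-fire transcode of framed intensity samples."""
    threshold = 2 ** D
    events, acc, last_t = [], 0.0, 0
    for t, s in enumerate(samples, start=1):
        acc += s * dt_per_sample         # integrate incoming intensity
        while acc >= threshold:          # emit once 2**D units accumulated
            acc -= threshold
            events.append((D, t - last_t))
            last_t = t
    return events

# Constant intensity 32 with D=7 fires every 4 samples, and each event
# reconstructs to 2**7 / 4 = 32.
evts = adder_events([32] * 40)
```

The real codec also handles per-pixel addresses and adaptive decimation; this only shows the integrate-and-fire core.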
NxTF: An API and Compiler for Deep Spiking Neural Networks on Intel Loihi
Bodo Rueckauer and others
An open-source software platform for compiling and running deep spiking neural networks (SNNs) on the Intel Loihi neuromorphic hardware platform. The paper introduces the NxTF API, which provides a simple interface for defining and training SNNs using common deep learning frameworks, and the NxTF compiler, which translates trained SNN models into executable code for the Loihi chip….