Event cameras asynchronously report per-pixel changes in illumination rather than producing frames at a fixed interval. For computer vision tasks, this data can be processed efficiently by spiking neural networks, which promise very low-power applications. To harness the potential of such models, they must be executed on specialised neuromorphic hardware. In this talk we look at the data, training and deployment stages involved in targeting SynSense’s Speck chip, and the challenges that arise in each of them.
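As a minimal illustration of the core idea (a sketch in plain Python, not tied to Speck or any SynSense library; all names and parameter values here are illustrative), a leaky integrate-and-fire neuron can consume asynchronous events one at a time, decaying its membrane potential between events instead of stepping through fixed frames:

```python
import math
from dataclasses import dataclass


@dataclass
class LIFNeuron:
    """Minimal leaky integrate-and-fire neuron driven by asynchronous events."""
    tau: float = 0.1        # membrane time constant in seconds (illustrative value)
    threshold: float = 1.0  # membrane potential at which the neuron spikes
    v: float = 0.0          # current membrane potential
    last_t: float = 0.0     # timestamp of the last processed event

    def on_event(self, t: float, weight: float) -> bool:
        """Decay the membrane since the last event, integrate the input,
        and return True if the neuron spikes (potential resets to 0)."""
        self.v *= math.exp(-(t - self.last_t) / self.tau)
        self.last_t = t
        self.v += weight
        if self.v >= self.threshold:
            self.v = 0.0
            return True
        return False


# A burst of closely spaced events accumulates faster than the leak
# can dissipate it, so the third event pushes the neuron over threshold.
neuron = LIFNeuron()
events = [(0.00, 0.4), (0.01, 0.4), (0.02, 0.4)]
spike_times = [t for t, w in events if neuron.on_event(t, w)]
```

Because computation happens only when an event arrives, sparse input translates directly into sparse work, which is the property neuromorphic chips such as Speck exploit for low power consumption.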