Abstract: Deep Neural Networks (DNNs) replace the brain’s spike-trains with instantaneous rates that are updated once every time-step. They have proven extremely powerful, successfully tackling tasks that were thought impossible just a decade ago. The current quest is to deploy them on devices powered by batteries (e.g., mobile phones) or energy harvesting (e.g., IoT end-points), both of which demand more energy-efficient computing. A promising approach, known as neuromorphic computing, maps DNNs’ rate-based abstraction back to the brain’s spike-based abstraction. Since updates are performed only when spikes occur, the computational load is reduced whenever fewer than one spike occurs per time-step (per signal). The resulting energy savings may be negated, however, by the overhead incurred in mapping DNNs’ rate abstraction to neuromorphic computing’s spike abstraction (e.g., the network’s size must be increased to preserve performance). I will argue that computing in analog while communicating digitally maximizes spike-based neuromorphic computing’s energy savings. I will present my group’s progress in designing these mixed-signal neuromorphic chips and in reducing the overhead of mapping DNNs onto them.
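
To make the operation-count argument concrete, the following is a minimal sketch comparing the per-time-step update count of a rate-based layer with that of an event-driven spiking layer. The layer width, number of time-steps, and mean spike rate are illustrative assumptions, not figures from the talk; the unit cost of an update is taken to be the same in both abstractions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_neurons = 1000   # assumed layer width (illustrative)
n_steps = 100      # simulated time-steps
spike_rate = 0.1   # assumed mean spikes per neuron per time-step (< 1)

# Rate-based (DNN) abstraction: every signal is updated every time-step,
# so the update count is fixed at n_neurons per step.
rate_updates = n_neurons * n_steps

# Spike-based (neuromorphic) abstraction: an update is performed only when
# a spike occurs, so the expected count scales with the mean spike rate.
spikes = rng.random((n_steps, n_neurons)) < spike_rate
spike_updates = int(spikes.sum())

print(f"rate-based updates:  {rate_updates}")
print(f"spike-based updates: {spike_updates}")
print(f"reduction factor:    {rate_updates / spike_updates:.1f}x")
```

With a mean rate below one spike per time-step per signal, the event-driven count falls below the rate-based count, which is exactly the condition stated above; any mapping overhead (e.g., enlarging the network to preserve performance) eats into this margin.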