Using AI to design energy-efficient AI accelerators for the edge: tinyML Talks – Weiwen Jiang

tinyML Talks – recorded December 8, 2020
“Using AI to design energy-efficient AI accelerators for the edge”
Dr. Weiwen Jiang – University of Notre Dame

In this talk, Dr. Jiang presents a novel machine-learning-driven hardware/software co-exploration framework that automates the design of energy-efficient hardware accelerators for neural networks. Unlike existing hardware-aware neural architecture search (NAS), which assumes a fixed hardware design and explores only the architecture search space, this framework simultaneously explores both the architecture search space and the hardware design space to identify neural-architecture/hardware pairs that maximize accuracy and hardware-efficiency metrics together. For machine learning on resource-constrained edge devices in particular, this practice greatly expands the design freedom. We will see how it can significantly push forward the Pareto frontier between hardware efficiency and model accuracy, yielding better design trade-offs and rapid time to market for flexible accelerators designed from the ground up.
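To make the co-exploration idea concrete, below is a minimal, illustrative Python sketch of a joint search over a neural-architecture space and a hardware-design space, keeping a Pareto frontier over accuracy, latency, and energy. This is not Dr. Jiang's framework: the search spaces, the placeholder cost models, and the random-search strategy are all assumptions chosen only to show how evaluating architecture/hardware pairs jointly (rather than fixing the hardware) shapes the trade-off frontier.

```python
import random

# Toy joint search spaces (illustrative only; a real framework's spaces are
# defined by the NAS backbone and the target accelerator template).
ARCH_SPACE = {"depth": [4, 8, 12], "width": [16, 32, 64], "kernel": [3, 5]}
HW_SPACE = {"pe_rows": [4, 8, 16], "pe_cols": [4, 8, 16], "buffer_kb": [32, 64, 128]}

def sample(space):
    """Draw one random point from a search space."""
    return {k: random.choice(v) for k, v in space.items()}

def evaluate(arch, hw):
    """Placeholder cost models returning (accuracy, latency, energy).

    A real co-exploration framework would train or predict accuracy and use
    analytical or measured hardware models for latency and energy.
    """
    acc = (0.70
           + 0.02 * ARCH_SPACE["depth"].index(arch["depth"])
           + 0.01 * ARCH_SPACE["width"].index(arch["width"]))
    macs = arch["depth"] * arch["width"] * arch["kernel"] ** 2
    throughput = hw["pe_rows"] * hw["pe_cols"]
    latency = macs / throughput
    energy = macs * 0.001 + hw["buffer_kb"] * 0.01
    return acc, latency, energy

def dominates(a, b):
    """True if a is at least as good as b on every objective and better on one."""
    better_or_equal = a[0] >= b[0] and a[1] <= b[1] and a[2] <= b[2]
    strictly_better = a[0] > b[0] or a[1] < b[1] or a[2] < b[2]
    return better_or_equal and strictly_better

pareto = []  # list of (objectives, arch, hw) tuples on the current frontier
for _ in range(500):
    arch, hw = sample(ARCH_SPACE), sample(HW_SPACE)
    obj = evaluate(arch, hw)
    # Keep the pair only if nothing on the frontier dominates it,
    # and drop any frontier points it dominates.
    if not any(dominates(p[0], obj) for p in pareto):
        pareto = [p for p in pareto if not dominates(obj, p[0])] + [(obj, arch, hw)]

for obj, arch, hw in sorted(pareto, key=lambda p: -p[0][0]):
    print(f"acc={obj[0]:.3f} latency={obj[1]:.1f} energy={obj[2]:.2f} arch={arch} hw={hw}")
```

Because the hardware parameters are sampled alongside the architecture, each candidate network is scored on hardware sized for it, which is what lets co-exploration reach accuracy/efficiency points a fixed-hardware NAS cannot.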

