Neural Architecture Search for Tiny Devices
Inference models based on Deep Neural Networks (DNNs) are expected to be widely deployed on edge platforms, which has spurred research into the automated design of tiny neural architectures through search. Since Neural Architecture Search (NAS) was proposed in 2016, research has largely focused on quickly finding DNN architectures that surpass human-designed ones. Beyond this primary goal of improving the search process itself, NAS is increasingly used to generate and customize DNN models for a given target hardware platform. This has become especially important for embedded DNNs, which must satisfy platform-specific constraints and objectives such as low latency, small memory footprint, and low power consumption. NAS can deliver customized models for the target platform that are both efficient and accurate. However, existing frameworks provide either (a) fast generation of accurate models or (b) slow generation of models that are both accurate and efficient, but not both at once. To address this, this tutorial explains the basic NAS process and the mathematical model behind the search, enabling TinyML engineers to tweak existing NAS frameworks in an informed manner.
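To make the basic NAS process concrete, the sketch below shows the simplest form of hardware-aware search: randomly sample candidate architectures, discard those that violate a platform constraint, and keep the most accurate survivor. This is a minimal illustration, not any specific framework's method; the search space, cost model, and accuracy proxy are all invented stand-ins for what a real NAS system would measure on the target device or estimate via training.

```python
import random

# Hypothetical toy search space: each candidate architecture is
# described by a layer count and a channel width. Purely illustrative.
SEARCH_SPACE = {
    "layers": [2, 4, 6, 8],
    "channels": [8, 16, 32, 64],
}

def sample_architecture(rng):
    """Randomly sample one candidate from the search space."""
    return {
        "layers": rng.choice(SEARCH_SPACE["layers"]),
        "channels": rng.choice(SEARCH_SPACE["channels"]),
    }

def estimate_cost(arch):
    """Stand-in hardware cost model (proxy for latency / memory /
    power). A real system would measure or predict this per platform."""
    return arch["layers"] * arch["channels"]

def estimate_accuracy(arch):
    """Stand-in accuracy proxy: larger models score higher with
    diminishing returns. Replaces actual training and validation."""
    return 1.0 - 1.0 / (1 + arch["layers"] * arch["channels"] / 64)

def random_search(budget, cost_limit, seed=0):
    """Keep the most accurate sampled architecture that satisfies
    the hardware constraint (cost_limit)."""
    rng = random.Random(seed)
    best, best_acc = None, -1.0
    for _ in range(budget):
        arch = sample_architecture(rng)
        if estimate_cost(arch) > cost_limit:
            continue  # violates the platform constraint; skip it
        acc = estimate_accuracy(arch)
        if acc > best_acc:
            best, best_acc = arch, acc
    return best, best_acc

best, acc = random_search(budget=50, cost_limit=128)
```

Real NAS frameworks replace random sampling with smarter strategies (reinforcement learning, evolutionary search, or differentiable relaxations) and replace the toy proxies with trained predictors or on-device measurements, but the constrain-then-rank loop above is the skeleton they all share.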