Enabling mixed-precision quantized neural networks in extreme-edge devices

N Bruschi, A Garofalo, F Conti, G Tagliavini… - Proceedings of the 17th ACM International Conference on Computing Frontiers, 2020 - dl.acm.org
The deployment of Quantized Neural Networks (QNNs) on advanced microcontrollers requires optimized software to exploit the digital signal processing (DSP) extensions of modern instruction set architectures (ISAs). To this end, recent research has proposed optimized libraries for QNNs (from 8-bit down to 2-bit), such as CMSIS-NN and PULP-NN. This work presents an extension to the PULP-NN library targeting the acceleration of mixed-precision Deep Neural Networks, an emerging paradigm able to significantly shrink the memory footprint of deep neural networks with negligible accuracy loss. The library, composed of 27 kernels, one for each permutation of input feature map, weight, and output feature map precision (8-bit, 4-bit, and 2-bit), enables efficient inference of QNNs on parallel ultra-low-power (PULP) clusters of RISC-V-based processors featuring the RV32IMCXpulpV2 ISA. The proposed solution, benchmarked on an 8-core GAP-8 PULP cluster, reaches a peak performance of 16 MACs/cycle on 8 cores, running 21× to 25× faster than an STM32H7 (powered by an ARM Cortex-M7 processor) with 15× to 21× better energy efficiency.
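As a rough illustration of what such a mixed-precision kernel must do, the portable C sketch below computes a dot product between 8-bit activations and 4-bit weights packed two per byte. All names here are illustrative, not PULP-NN's actual API; the real kernels replace this scalar loop with XpulpV2 SIMD dot-product instructions and hardware loops to reach the reported MACs/cycle figures.

```c
/* Minimal sketch of one mixed-precision combination:
 * 8-bit activations x 4-bit packed weights -> 32-bit accumulator.
 * Names and layout (low nibble first) are assumptions for
 * illustration only. */
#include <stdint.h>
#include <stdio.h>

/* Unpack two signed 4-bit weights from one byte.
 * Sign extension: shift left into the sign bit, then
 * arithmetic shift right. */
static inline void unpack_w4(uint8_t packed, int8_t w[2]) {
    w[0] = (int8_t)(packed << 4) >> 4; /* low nibble  */
    w[1] = (int8_t)packed >> 4;        /* high nibble */
}

/* Dot product of n 8-bit activations with n 4-bit weights
 * (n assumed even). */
static int32_t dot_a8_w4(const int8_t *act, const uint8_t *w_packed, int n) {
    int32_t acc = 0;
    for (int i = 0; i < n; i += 2) {
        int8_t w[2];
        unpack_w4(w_packed[i / 2], w);
        acc += act[i]     * w[0];
        acc += act[i + 1] * w[1];
    }
    return acc;
}

int main(void) {
    int8_t act[4] = { 10, -3, 7, 2 };
    /* weights {1, -2, 3, -4} packed as nibbles: (0xE<<4)|1, (0xC<<4)|3 */
    uint8_t w_packed[2] = { 0xE1, 0xC3 };
    /* 10*1 + (-3)*(-2) + 7*3 + 2*(-4) = 29 */
    printf("acc = %d\n", dot_a8_w4(act, w_packed, 4));
    return 0;
}
```

The unpack step is the extra cost that sub-byte precision introduces over a plain 8-bit kernel, which is why the library provides a dedicated kernel for each of the 27 precision permutations rather than one generic routine with runtime branching.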