Jan 7, 2022 · We propose a new framework called BottleFit, which, in addition to targeted DNN architecture modifications, includes a novel training strategy to achieve high ...
An alternative approach, called split computing, generates compressed representations within the model (called "bottlenecks"), to reduce bandwidth usage and ...
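To make the split-computing idea concrete, the sketch below inserts a narrow bottleneck between a device-side head and a server-side tail of a CNN. It is a minimal PyTorch illustration: the module names (`BottleneckEncoder`, `BottleneckDecoder`), layer widths, and channel counts are assumptions for exposition, not the exact BottleFit architecture.

```python
import torch
from torch import nn

class BottleneckEncoder(nn.Module):
    """Device-side head: compresses early-layer features into a narrow
    'bottleneck' tensor that is cheap to transmit (illustrative sizes)."""
    def __init__(self, in_ch=3, bottleneck_ch=12):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(in_ch, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, bottleneck_ch, kernel_size=3, stride=2, padding=1),
        )

    def forward(self, x):
        return self.layers(x)

class BottleneckDecoder(nn.Module):
    """Server-side module: expands the received bottleneck tensor back to
    the channel width expected by the rest of the backbone."""
    def __init__(self, bottleneck_ch=12, out_ch=64):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(bottleneck_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, z):
        return self.layers(z)

# Device side: run the head and transmit only the small bottleneck tensor.
x = torch.randn(1, 3, 224, 224)           # input image
encoder = BottleneckEncoder()
z = encoder(x)                             # 1 x 12 x 56 x 56 instead of raw pixels

# Server side: decode and finish inference with the remaining layers.
decoder = BottleneckDecoder()
tail = nn.Sequential(                      # stand-in for the rest of the backbone
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1000)
)
logits = tail(decoder(z))
```

The bandwidth saving comes from the bottleneck tensor being much smaller than the raw input (or than uncompressed intermediate features), at the cost of retraining the model so accuracy is preserved despite the narrow split point.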
We also compare BottleFit with state-of-the-art autoencoder-based approaches, and show that (i) BottleFit reduces power consumption and execution time ...
[IEEE WoWMoM 2022] "BottleFit: Learning Compressed Representations in Deep Neural Networks for Effective and Efficient Split Computing". License: MIT.
BottleFit [21] achieved 55-89% lower latency in split CNN computing vs. cloud/edge-only execution approaches with Jetson Nano and Raspberry Pi 4 devices. By ...
We show that BottleFit decreases power consumption and latency by up to 49% and 89%, respectively, with respect to local computing, and by 37% and 55% ...
BottleFit: Learning Compressed Representations in Deep Neural Networks for Effective and Efficient Split Computing. Venue: 2022 IEEE 23rd International Symposium on a World of Wireless, Mobile and Multimedia Networks (WoWMoM).
BottleFit: Learning Compressed Representations in Deep Neural Networks for Effective and Efficient Split Computing. Y Matsubara, D Callegaro, S ...
BottleFit: Learning Compressed Representations in Deep Neural Networks for Effective and Efficient Split Computing (2 code implementations, 7 Jan 2022) ...
We define supervised compression as learning compressed representations for supervised downstream tasks. It is very challenging to introduce bottlenecks ...
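As a rough illustration of supervised compression, the sketch below trains a bottlenecked head so that its reconstructed features match those of the original (uncompressed) head while the downstream task loss stays low. The modules `student_head`, `teacher_head`, and `shared_tail`, as well as the loss weighting `alpha`, are hypothetical placeholders; the paper's actual training strategy may differ in its stages and loss terms.

```python
import torch
from torch import nn
import torch.nn.functional as F

def train_step(student_head, teacher_head, shared_tail, images, labels,
               optimizer, alpha=0.5):
    """One optimization step of supervised compression (illustrative):
    match the teacher's features at the split point and keep the
    task loss low on the shared tail."""
    with torch.no_grad():
        target_feat = teacher_head(images)        # reference features to imitate

    student_feat = student_head(images)           # features reconstructed from the bottleneck
    distill_loss = F.mse_loss(student_feat, target_feat)

    logits = shared_tail(student_feat)            # finish the forward pass for the task loss
    task_loss = F.cross_entropy(logits, labels)

    loss = alpha * distill_loss + (1 - alpha) * task_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch, only the bottlenecked head would typically be optimized (the tail can stay frozen), which keeps the compressed representation compatible with the rest of the pretrained backbone.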