A Layer-Wise Extreme Network Compression for Super Resolution


Deep neural networks (DNNs) for single image super-resolution (SISR) tend to have large model sizes and high computational complexity in order to achieve promising restoration performance. Unlike image classification, model compression for SISR has rarely been studied. In this paper, we found that DNNs for image classification and SISR often have different characteristics in terms of layer importance. That is, contrary to DNNs for image classification, the performance of SISR networks hardly decreases even if a few layers are eliminated during inference. This is due to the fact that such networks typically consist of many hierarchical and complex residual connections.
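To make the intuition concrete, here is a minimal PyTorch sketch (not code from the paper) of why a residual layer can be dropped at inference: the identity path carries the signal even when the block's learned refinement is removed. The `ResidualBlock` class and the `skip` flag are illustrative constructs, not part of any released implementation.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """A standard residual block: output = x + F(x)."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x: torch.Tensor, skip: bool = False) -> torch.Tensor:
        # When the block is "eliminated", the identity path alone
        # carries the feature map through, so the rest of the
        # network still receives a usable signal.
        if skip:
            return x
        return x + self.body(x)

x = torch.randn(1, 64, 32, 32)
block = ResidualBlock(64)
print(torch.equal(block(x, skip=True), x))  # True: identity path preserved
```

By contrast, in a plain feed-forward classifier without such skip connections, removing a layer severs the only path from input to output, which is why the same trick degrades classification networks much more severely.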

Based on that key observation, we propose a layer-wise extreme network compression method for SISR. The proposed method consists of: i) a reinforcement-learning-based joint framework for layer-wise quantization and pruning, both of which are effectively incorporated into the search space; and ii) a progressive preserve-ratio scheduling that reflects the importance of each layer more effectively, yielding much higher compression efficiency. Our comprehensive experiments show that the proposed method can effectively be applied to existing SISR networks, reducing the model size by up to 97% (i.e., 1 bit per weight on average) with marginal performance degradation compared to the corresponding full-precision models.
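The two knobs the search operates on, a per-layer bit-width and a per-layer pruning ratio, can be illustrated with a toy NumPy sketch. The RL agent and the progressive preserve-ratio schedule from the paper are not reproduced here; `bit_alloc` and `prune_alloc` are hand-picked placeholders standing in for the per-layer actions such a policy would output.

```python
import numpy as np

def quantize(w: np.ndarray, bits: int) -> np.ndarray:
    """Quantize a weight tensor to the given bit-width."""
    if bits == 1:
        # Binary weights: sign times a per-layer scale (BWN-style).
        return np.sign(w) * np.abs(w).mean()
    levels = 2 ** (bits - 1) - 1          # symmetric uniform grid
    scale = np.abs(w).max() / levels
    return np.round(w / scale).clip(-levels, levels) * scale

def compress(layers, bit_alloc, prune_alloc):
    """Apply a per-layer (bit-width, pruning-ratio) allocation.

    bit_alloc / prune_alloc stand in for the per-layer actions an
    RL policy would emit; here they are fixed, hand-picked lists.
    """
    out = []
    for w, bits, ratio in zip(layers, bit_alloc, prune_alloc):
        thresh = np.quantile(np.abs(w), ratio)    # prune small weights first
        w = np.where(np.abs(w) >= thresh, w, 0.0)
        out.append(quantize(w, bits))             # then quantize survivors
    return out

rng = np.random.default_rng(0)
layers = [rng.standard_normal((64, 64)) for _ in range(4)]
bit_alloc = [2, 1, 1, 2]            # more bits for more important layers
prune_alloc = [0.3, 0.7, 0.7, 0.3]  # prune less important layers harder
compressed = compress(layers, bit_alloc, prune_alloc)
print(f"average bits per weight: {np.mean(bit_alloc):.1f}")  # 1.5
```

An average of 1 bit per weight, as reported in the paper, amounts to letting the search trade bits between layers: important layers keep higher precision while unimportant ones are quantized and pruned aggressively, rather than forcing every layer to the same extreme setting.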
