Spatial and Temporal Uncertainty-Based Context-Aware Adaptive Video Compression Using Deep Learning

B. Shravan Kumar, Dr. V. Usha Shree, Dr. P. Chandrasekhar Reddy

Abstract


Processor power is now growing faster than storage capacity and network bandwidth, which places increasing strain on telecommunications infrastructure. Rather than expanding storage and transmission capacity, it is therefore preferable to reduce the size of the data itself by exploiting the available processing power. An effective video compression system is thus needed, one that produces high-quality frames within a given bandwidth budget. The proposed method is an adaptive video compression technique based on deep learning. During the training phase, spatial and temporal visual uncertainty features are extracted from the input video frames. These features are fed to a one-dimensional convolutional neural network (1D CNN), which is trained to identify context-aware visual features. The resulting context information drives context-aware adaptive compression: DCT block-based compression is applied, with adaptive quantization of the DCT coefficients. While preserving the compression rate, this adaptive quantization improves the quality of the compressed videos.
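The abstract describes a pipeline of uncertainty feature extraction followed by a 1D CNN. As a rough illustration of those first two stages, the sketch below computes a simple per-block spatial uncertainty (local variance) and temporal uncertainty (mean absolute frame difference) and feeds the resulting feature sequence to a small 1D CNN that emits a per-block context score. The block size, the specific uncertainty measures, and the `ContextNet` architecture are illustrative assumptions, not the authors' exact design.

```python
import numpy as np
import torch
import torch.nn as nn

def uncertainty_features(prev_frame, frame, block=8):
    """Per-block spatial uncertainty (local variance) and temporal
    uncertainty (mean absolute frame difference) for a grayscale frame pair.
    Both measures are simple stand-ins for the paper's features."""
    h, w = frame.shape
    feats = []
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            cur = frame[y:y + block, x:x + block].astype(np.float32)
            prev = prev_frame[y:y + block, x:x + block].astype(np.float32)
            feats.append((cur.var(),                    # texture/edge activity
                          np.abs(cur - prev).mean()))   # motion activity
    return np.asarray(feats, dtype=np.float32)          # shape: (n_blocks, 2)

class ContextNet(nn.Module):
    """1D CNN over the block-feature sequence; outputs a context score
    in [0, 1] for every block (hypothetical architecture)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(16, 1, kernel_size=1), nn.Sigmoid(),
        )

    def forward(self, x):        # x: (batch, 2, n_blocks)
        return self.net(x)       # (batch, 1, n_blocks)

# Usage: context scores for one frame pair of a 64x64 video.
frames = np.random.randint(0, 256, (2, 64, 64), dtype=np.uint8)
feats = uncertainty_features(frames[0], frames[1])       # (64, 2)
x = torch.from_numpy(feats.T.copy()).unsqueeze(0)        # (1, 2, 64)
scores = ContextNet()(x).squeeze().detach().numpy()      # (64,)
```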

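The context scores can then steer the DCT block-based compression stage. The following sketch quantizes each 8x8 block's DCT coefficients with a quantization table scaled by the block's context score, so that salient blocks are quantized more finely. The base table (the standard JPEG luminance table) and the linear mapping from context score to scale factor are assumptions for illustration, not the paper's stated parameters.

```python
import numpy as np
from scipy.fft import dctn, idctn

# Standard JPEG luminance table, used here as the base quantization table.
BASE_Q = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
], dtype=np.float32)

def compress_block(blk, context):
    """Quantize one 8x8 block's DCT coefficients with a context-scaled step.
    context in [0, 1]: salient blocks (high context) get finer quantization."""
    scale = 1.5 - context                         # assumed mapping: 0.5x..1.5x base step
    q = np.maximum(np.round(BASE_Q * scale), 1.0)
    coeffs = dctn(blk.astype(np.float32) - 128.0, norm='ortho')
    return np.round(coeffs / q), q                # quantized coefficients + table used

def decompress_block(quantized, q):
    """Invert the quantization and the 2-D DCT."""
    return idctn(quantized * q, norm='ortho') + 128.0

# Usage: a salient block retains more detail than a background block.
blk = np.random.randint(0, 256, (8, 8)).astype(np.float32)
for c in (0.9, 0.1):
    quant, q = compress_block(blk, context=c)
    rec = decompress_block(quant, q)
    print(f"context={c:.1f}  MSE={np.mean((rec - blk) ** 2):.1f}")
```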





Copyright (c) 2021 B. Shravan Kumar, Dr. V. Usha Shree, Dr. P. Chandrasekhar Reddy

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
