Spatial and Temporal Uncertainty-Based Context-Aware Adaptive Video Compression Using Deep Learning

B. Shravan Kumar, Dr. V. Usha Shree

Abstract


Nowadays, processing power is growing faster than storage capacity and network bandwidth, which would otherwise demand major changes to telecommunications infrastructure. To overcome this problem, it is preferable to reduce the size of the data by exploiting processing power rather than by continually increasing storage and transmission capacity. It is therefore important to establish an effective video compression system that produces high-quality frames within a given bandwidth budget. The proposed method is an adaptive video compression technique based on deep learning. During the training phase, spatial and temporal visual uncertainty features are extracted from the input video frames. These features are fed to a one-dimensional convolutional neural network (1D CNN), which is trained to identify context-aware visual features. This information is then used to perform context-aware adaptive video compression: block-based DCT compression with adaptive quantization of the DCT coefficients. While preserving the compression rate, this adaptive quantization improves the quality of the compressed videos.
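As a rough illustration of the pipeline described above, the sketch below quantizes 8x8 DCT blocks with a step size that adapts to a per-block uncertainty score. It is a minimal Python/NumPy sketch, not the authors' implementation: the trained 1D CNN is replaced by simple proxies (local variance for spatial uncertainty, inter-frame difference for temporal uncertainty), and BLOCK, BASE_Q, and the step-size mapping are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

BLOCK = 8        # 8x8 DCT blocks (assumed)
BASE_Q = 16.0    # hypothetical base quantization step

def uncertainty(block, prev_block):
    """Cheap stand-ins for the paper's features: local variance as
    spatial uncertainty, inter-frame difference as temporal uncertainty."""
    return np.var(block) + np.mean(np.abs(block - prev_block))

def code_block(block, prev_block):
    """DCT, adaptive quantization, inverse DCT for one block."""
    # Assumed mapping: finer steps (smaller q) where uncertainty is high,
    # coarser steps where the block is flat and static.
    q = BASE_Q / (1.0 + np.log1p(uncertainty(block, prev_block)))
    coeffs = dctn(block, norm='ortho')
    return idctn(np.round(coeffs / q) * q, norm='ortho')

def code_frame(frame, prev_frame):
    """Apply block-wise adaptive DCT coding to a grayscale frame whose
    dimensions are multiples of BLOCK."""
    frame = frame.astype(np.float64)
    prev_frame = prev_frame.astype(np.float64)
    out = np.empty_like(frame)
    for y in range(0, frame.shape[0], BLOCK):
        for x in range(0, frame.shape[1], BLOCK):
            sl = (slice(y, y + BLOCK), slice(x, x + BLOCK))
            out[sl] = code_block(frame[sl], prev_frame[sl])
    return out
```

In this toy version, flat and static blocks are quantized more coarsely to save bits, while textured or moving blocks receive finer steps; in the proposed method, the context signal driving the step size comes from the 1D CNN rather than these hand-crafted proxies.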






Copyright (c) 2021 B. Shravan Kumar, Dr. V. Usha Shree

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

 
