Dense and Sparse Reconstruction Error Based Saliency Descriptor

Tejavath Sivaprasad, K. Anjaneyulu, Sk Subhan


In this paper, we propose a visual saliency detection algorithm from the perspective of reconstruction error. The image boundaries are first extracted via superpixels as likely cues for background templates, from which dense and sparse appearance models are constructed. First, we compute dense and sparse reconstruction errors on the background templates for each image region. Second, the reconstruction errors are propagated based on the contexts obtained from K-means clustering. Third, the pixel-level reconstruction error is computed by integrating multi-scale reconstruction errors. Both the pixel-level dense and sparse reconstruction errors are then weighted by image compactness, which more accurately identifies saliency. In addition, we present a novel Bayesian integration method to combine saliency maps, which is applied to integrate the two saliency measures based on dense and sparse reconstruction errors. Experimental results show that the proposed algorithm performs favorably against 24 state-of-the-art methods in terms of precision, recall, and F-measure on three public benchmark salient object detection databases.
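The two reconstruction errors at the core of the method can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes each image region is described by a feature vector, models the dense error as the residual of a PCA reconstruction from the background templates, and models the sparse error via a simple greedy orthogonal-matching-pursuit coding over those templates. All function and parameter names here are hypothetical.

```python
import numpy as np

def dense_reconstruction_error(X, B, k=4):
    """Residual of reconstructing each region from a PCA subspace
    fit to the background templates.
    X: (n_regions, d) region features; B: (n_templates, d) background templates."""
    mu = B.mean(axis=0)
    # Top-k principal directions of the centered background templates
    U = np.linalg.svd(B - mu, full_matrices=False)[2][:k].T  # (d, k)
    Xc = X - mu
    recon = Xc @ U @ U.T + mu
    return np.linalg.norm(X - recon, axis=1) ** 2

def sparse_reconstruction_error(X, B, n_nonzero=3):
    """Residual of coding each region with a few background templates,
    selected greedily (orthogonal matching pursuit)."""
    D = B.T / (np.linalg.norm(B, axis=1) + 1e-12)  # unit-norm dictionary columns
    errs = []
    for x in X:
        r, idx = x.copy(), []
        for _ in range(n_nonzero):
            idx.append(int(np.argmax(np.abs(D.T @ r))))  # most correlated atom
            A = D[:, sorted(set(idx))]
            coef, *_ = np.linalg.lstsq(A, x, rcond=None)  # refit on chosen atoms
            r = x - A @ coef                              # updated residual
        errs.append(float(r @ r))
    return np.array(errs)
```

A region well explained by the background templates gets a small error under both models and is thus deemed background, while a salient region yields a large residual; the two errors respond differently (subspace fit vs. few-atom fit), which is why the abstract combines them.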



Copyright (c) 2018 Edupedia Publications Pvt Ltd

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

