Opto-Electronic Engineering, Vol. 38, Issue 9, 142 (2011)
HU Zheng-ping* and PANG Jing-chao
Author Affiliations: [in Chinese]
DOI: 10.3969/j.issn.1003-501x.2011.09.027
Citation: HU Zheng-ping, PANG Jing-chao. Bayesian Framework for Visual Attention Model Based on Transition-sliding Window[J]. Opto-Electronic Engineering, 2011, 38(9): 142.

    Abstract

Based on the observation that the salient object in an image is typically conspicuous, compact, and complete, a Bayesian salient-object extraction model built on the spatial distribution and local complexity of a transition window is proposed. First, a luminance saliency map is obtained by computing the contrast between a local region and its multi-scale neighborhoods; a color saliency map is then computed from the conspicuousness, spatial distribution, and local uniformity of the color information, and an orientation saliency map is obtained from multi-scale Gabor filter responses. These saliency maps are fed into a single-scale Bayesian framework based on a transition-sliding window, in which the probability that a pixel is salient is computed by comparing the saliency values inside the window with those outside it. Finally, the saliency map of the input image is obtained by taking the pixel-wise maximum, and the salient object is located and extracted from this map. Experiments on a variety of images show that the algorithm is feasible and effective.
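
The abstract only sketches how the transition-sliding window is used; the paper gives the exact formulation. As a rough illustration of the inside-versus-outside comparison described above, the following Python sketch assumes grayscale NumPy feature maps, a half-window stride, a uniform prior, and histogram likelihoods (none of these choices are stated in the abstract), and combines the per-channel posteriors with a pixel-wise maximum.

import numpy as np

def window_saliency(feature_map, win_size=64, border=16, prior=0.5, bins=16):
    """Illustrative single-scale sliding-window Bayesian saliency estimate.

    For each window position, feature values inside the window are compared
    against those in the surrounding border ("transition") ring: histograms
    of the two regions serve as likelihoods in Bayes' rule, giving each
    inside pixel a posterior probability of being salient. The per-pixel
    saliency is the maximum posterior over all windows covering it.
    The paper's exact likelihoods and priors may differ from this sketch.
    """
    h, w = feature_map.shape
    lo, hi = feature_map.min(), feature_map.max()
    edges = np.linspace(lo, hi + 1e-9, bins + 1)
    saliency = np.zeros_like(feature_map, dtype=float)
    step = win_size // 2

    for y in range(0, max(h - win_size, 0) + 1, step):
        for x in range(0, max(w - win_size, 0) + 1, step):
            y0, y1 = max(y - border, 0), min(y + win_size + border, h)
            x0, x1 = max(x - border, 0), min(x + win_size + border, w)
            inner = feature_map[y:y + win_size, x:x + win_size]
            outer_patch = feature_map[y0:y1, x0:x1]
            # Transition ring = enlarged patch minus the inner window.
            mask = np.ones(outer_patch.shape, bool)
            mask[y - y0:y - y0 + win_size, x - x0:x - x0 + win_size] = False
            ring = outer_patch[mask]
            if ring.size == 0:
                continue
            p_in, _ = np.histogram(inner, edges, density=True)
            p_out, _ = np.histogram(ring, edges, density=True)
            idx = np.clip(np.digitize(inner, edges) - 1, 0, bins - 1)
            lik_in, lik_out = p_in[idx] + 1e-6, p_out[idx] + 1e-6
            # Bayes' rule: P(salient | f) for each pixel inside the window.
            post = lik_in * prior / (lik_in * prior + lik_out * (1 - prior))
            patch = saliency[y:y + win_size, x:x + win_size]
            np.maximum(patch, post, out=patch)
    return saliency

def combine_channels(luminance_map, color_map, orientation_map):
    """Final saliency map: pixel-wise maximum over the per-channel posteriors."""
    return np.maximum.reduce([
        window_saliency(luminance_map),
        window_saliency(color_map),
        window_saliency(orientation_map),
    ])

The names window_saliency, combine_channels, win_size, and border are hypothetical and introduced only for illustration; they do not come from the paper.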