Opto-Electronic Engineering, 2014, Vol. 41, Issue 3, 12

A New Computational Method of Visual Motion Attention

LIU Long*, ZHAO Jing, FAN Boyang

Author affiliations: [in Chinese]

DOI: 10.3969/j.issn.1003-501x.2014.03.003

Citation: LIU Long, ZHAO Jing, FAN Boyang. A New Computational Method of Visual Motion Attention[J]. Opto-Electronic Engineering, 2014, 41(3): 12.
    References

    [1] Marr D. Vision [M]. San Francisco: W.H. Freeman, 1982.

    [2] Koch C, Ullman S. Shifts in selective visual attention: towards the underlying neural circuitry [J]. Human Neurobiology(S0721-9075), 1985, 4(4): 219-227.

    [3] Treisman A M, Gelade G. A feature-integration theory of attention [J]. Cognitive Psychology(S0010-0285), 1980, 12(1): 97-136.

    [4] Treisman A. Features and objects in visual processing [J]. Scientific American(S0036-8733), 1986, 255(5): 114-125.

    [5] Itti L, Koch C, Niebur E. A model of saliency-based visual attention for rapid scene analysis [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence(S0162-8828), 1998, 20(11): 1254-1259.

    [6] Itti L, Koch C. Computational modelling of visual attention [J]. Nature Reviews Neuroscience(S1471-003X), 2001, 2(3): 194-203.

    [7] Chi Ming-Chieh, Yeh Chia-Hung, Chen Mei-Juan. Robust Region-of-Interest Determination Based on User Attention Model Through Visual Rhythm Analysis [J]. IEEE Transactions on Circuits and Systems for Video Technology(S1051-8215), 2009: 1025-1038.

    [8] YU Zhiwen, WONG Hausan. A Rule Based Technique for Extraction of Visual Attention Regions Based on Real-Time Clustering [J]. IEEE Transactions on Multimedia(S1520-9210), 2007, 9(4): 766-784.

    [9] GU Xiaodong, CHEN Zhibo, CHEN Quqing. Refinement of Extracted Visual Attention Areas in Video Sequences [C]//ICASSP, 2010: 966-969.

    [10] Wen-Fu Lee, Tai-Hsiang Huang, Su-Ling Yeh, et al. Learning-Based Prediction of Visual Attention for Video Signals [J]. IEEE Transactions on Image Processing(S1057-7149), 2011, 20(11): 3028-3038.

    [11] WANG Hui, LIU Gang, DANG Yuanyuan. The Target Quick Searching Strategy Based on Visual Attention [C]//International Conference on Computer Science and Electronics Engineering, 2012: 460-462.

    [12] YU Zhiwen, WONG Hausan. A Rule Based Technique for Extraction of Visual Attention Regions Based on Real-Time Clustering [J]. IEEE Transactions on Multimedia(S1520-9210), 2007, 9(4): 766-784.

    [13] Guironnet M, Guyader N, Pellerin D, et al. Spatio-temporal attention model for video content analysis [C]//ICIP, 2005, 3: III-1156-III-1159.

    [14] MA Yufei, HUA Xiansheng, LU Lie. A Generic Framework of User Attention Model and Its Application in Video Summarization [J]. IEEE Transactions on Multimedia(S1520-9210), 2005, 7(5): 907-919.

    [15] HAN Junwei. Object Segmentation from Consumer Video: A Unified Framework Based on Visual Attention [J]. IEEE Transactions on Consumer Electronics(S0098-3063), 2009, 55(3): 1597-1605.

    [16] CAO Wei, HOU Hui, TONG Jiarong, et al. A High-performance Reconfigurable VLSI Architecture for VBSME in H.264 [J]. IEEE Transactions on Consumer Electronics(S0098-3063), 2008, 54(3): 1338-1345.

    [17] Rhee I, Martin G R, Packwood R, Muthukrishnan S. Quadtree-Structured Variable-Size Block-Matching Motion Estimation with Minimal Error [J]. IEEE Transactions on Circuits and Systems for Video Technology(S1051-8215), 2000, 10(2): 42-50.
