• Laser & Optoelectronics Progress
  • Vol. 56, Issue 6, 061004 (2019)
Qiangqiang Miao*
Author Affiliations
  • College of Information Engineering, Wuhan University of Technology, Wuhan, Hubei 430070, China
DOI: 10.3788/LOP56.061004
Qiangqiang Miao. Key Frame Extraction of Hysteroscopy Videos Based on Image Quality and Attention[J]. Laser & Optoelectronics Progress, 2019, 56(6): 061004
    References

    [1] Gavião W, Scharcanski J, Frahm J M et al. Hysteroscopy video summarization and browsing by estimating the physician's attention on video segments[J]. Medical Image Analysis, 16, 160-176(2012). http://www.ncbi.nlm.nih.gov/pubmed/21920798

    [2] Lopes A P B, da Luz A et al. VSUMM: A mechanism designed to produce static video summaries and a novel evaluation method[J]. Pattern Recognition Letters, 32, 56-68(2011). http://dl.acm.org/citation.cfm?id=1872651

    [3] dos Santos Belo L, Caetano C A et al. Summarizing video sequence using a graph-based hierarchical approach[J]. Neurocomputing, 173, 1001-1016(2016). http://www.onacademic.com/detail/journal_1000038261539510_7a38.html

    [4] Chen J, Zou Y X, Wang Y. Wireless capsule endoscopy video summarization: A learning approach based on Siamese neural network and support vector machine. [C]∥International Conference on Pattern Recognition, December 4-8, 2016, Cancún Center, Cancún, México. New York: IEEE, 1303-1308(2016).

    [5] Meng J J, Wang H X, Yuan J S et al. From keyframes to key objects: video summarization by representative object proposal selection. [C]∥IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE, 1039-1048(2016). http://doi.ieeecomputersociety.org/10.1109/CVPR.2016.118

    [6] Li J T, Yao T, Ling Q et al. Detecting shot boundary with sparse coding for video summarization[J]. Neurocomputing, 266, 66-78(2017). http://www.sciencedirect.com/science/article/pii/S0925231217308408

    [7] Ma M Y, Mei S, Hou J et al. Nonlinear kernel sparse dictionary selection for video summarization. [C]∥IEEE International Conference on Multimedia and Expo, July 10-14, 2017, Hong Kong, China. New York: IEEE, 637-642(2017).

    [8] Ioannidis A, Chasanis V, Likas A. Weighted multi-view key-frame extraction[J]. Pattern Recognition Letters, 72, 52-61(2016). http://www.sciencedirect.com/science/article/pii/S0167865516000398

    [9] Chen L, Wang Y H. Automatic key frame extraction in continuous videos from construction monitoring by using color, texture, and gradient features[J]. Automation in Construction, 81, 355-368(2017). http://www.sciencedirect.com/science/article/pii/S0926580517303187

    [10] Hamza R, Muhammad K, Lü Z et al. Secure video summarization framework for personalized wireless capsule endoscopy[J]. Pervasive and Mobile Computing, 41, 436-450(2017). http://www.sciencedirect.com/science/article/pii/S1574119217301621

    [11] Ejaz N, Mehmood I, Baik S W. MRT letter: Visual attention driven framework for hysteroscopy video abstraction[J]. Microscopy Research and Technique, 76, 559-563(2013). http://onlinelibrary.wiley.com/doi/10.1002/jemt.22205/full

    [12] Muhammad K, Ahmad J, Sajjad M et al. Visual saliency models for summarization of diagnostic hysteroscopy videos in healthcare systems[J]. SpringerPlus, 5, 1495(2016). http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5013008/

    [13] Muhammad K, Sajjad M, Lee M Y et al. Efficient visual attention driven framework for key frames extraction from hysteroscopy videos[J]. Biomedical Signal Processing and Control, 33, 161-168(2017). http://www.sciencedirect.com/science/article/pii/S1746809416301999

    [14] Lucas B D, Kanade T. An iterative image registration technique with an application to stereo vision. [C]∥International Joint Conference on Artificial Intelligence, August 24-28, 1981, Vancouver, British Columbia. [S. l. : s. n.], 674-679(1981).

    [15] Bay H, Tuytelaars T, van Gool L. SURF: Speeded up robust features. [C]∥European Conference on Computer Vision. Berlin, Heidelberg: Springer, 404-417(2006).

    [16] Wang M, Li Z Y, Wang C et al. Key frame extraction algorithm of sign language based on compressed sensing and SURF features[J]. Laser & Optoelectronics Progress, 55, 051013(2018).

    [17] Han T Q, Zhao Y D, Liu S L et al. Spatially constrained SURF feature point matching for UAV images[J]. Journal of Image and Graphics, 18, 669-676(2013).

    [18] Ojala T, Pietikäinen M, Harwood D. A comparative study of texture measures with classification based on featured distributions[J]. Pattern Recognition, 29, 51-59(1996). http://www.sciencedirect.com/science/article/pii/0031320395000674

    [19] Yang H X, Chen Y, Zhang F et al. Face recognition based on improved gradient local binary pattern[J]. Laser & Optoelectronics Progress, 55, 061004(2018).

    [20] Surakarin W, Chongstitvatana P. Classification of clothing with weighted SURF and local binary patterns. [C]∥International Computer Science and Engineering Conference, November 23-26, 2015, Chiang Mai, Thailand. New York: IEEE, 1-4(2015).

    [21] Luo T J, Liu B H. Fast SURF key-points image registration algorithm by fusion features[J]. Journal of Image and Graphics, 20, 95-103(2015).

    [22] Geusebroek J M, van den Boomgaard R, Smeulders A W M et al. Color invariance[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23, 1338-1350(2001).

    [23] Heikkilä M, Pietikäinen M, Schmid C. Description of interest regions with center-symmetric local binary patterns[M]. Computer Vision, Graphics and Image Processing. Berlin, Heidelberg: Springer, 58-69(2006).

    [24] Fischler M A, Bolles R C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography[J]. Communications of the ACM, 24, 381-395(1981). http://dl.acm.org/citation.cfm?id=358692

    [25] Jin J J, Lu W L, Guo X T et al. Position registration method of simultaneous phase-shifting interferograms based on SURF and RANSAC algorithms[J]. Acta Optica Sinica, 37, 1012002(2017).

    [26] Kristan M, Perš J, Perše M et al. A Bayes-spectral-entropy-based measure of camera focus using a discrete cosine transform[J]. Pattern Recognition Letters, 27, 1431-1439(2006). http://dl.acm.org/citation.cfm?id=1161718

    [27] Wang Z M. Review of no-reference image quality assessment[J]. Acta Automatica Sinica, 41, 1062-1079(2015).
