• Optoelectronics Letters
  • Vol. 13, Issue 6, 448 (2017)
Jie WU1, Si-ya XIE1, Xin-bao SHI1, and Yao-wen CHEN2,*
Author Affiliations
  • 1College of Engineering, Shantou University, Shantou 515063, China
  • 2Key Laboratory of Digital Signal and Image Processing of Guangdong, Shantou University, Shantou 515063, China
    DOI: 10.1007/s11801-017-7185-4
    WU Jie, XIE Si-ya, SHI Xin-bao, CHEN Yao-wen. Global-local feature attention network with reranking strategy for image caption generation[J]. Optoelectronics Letters, 2017, 13(6): 448
    References

    [1] JIANG Ying-feng, ZHANG Hua, XUE Yan-bing, ZHOU Mian, XU Guang-ping and GAO Zan, Journal of Optoelectronics·Laser 27, 224 (2016). (in Chinese)

    [2] SUN Jun-ding, LI Hai-hua and JIN Jiao-lin, Journal of Optoelectronics·Laser 28, 441 (2017). (in Chinese)

    [3] A. Krizhevsky, I. Sutskever and G. E. Hinton, ImageNet Classification with Deep Convolutional Neural Networks, Advances in Neural Information Processing Systems 25, 1097 (2012).

    [4] J. Mao, W. Xu, Y. Yang, J. Wang, Z. Huang and A. Yuille, Deep Captioning with Multimodal Recurrent Neural Networks (m-RNN), arXiv:1412.6632, 2014.

    [5] K. Xu, J. Ba, R. Kiros, K. Cho, A. C. Courville, R. Salakhutdinov, R. S. Zemel and Y. Bengio, Show, Attend and Tell: Neural Image Caption Generation with Visual Attention, arXiv:1502.03044, 2015.

    [6] O. Vinyals, A. Toshev, S. Bengio and D. Erhan, Show and Tell: A Neural Image Caption Generator, arXiv:1411.4555, 2015.

    [7] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke and A. Rabinovich, Going Deeper with Convolutions, arXiv:1409.4842, 2014.

    [8] K. Simonyan and A. Zisserman, Very Deep Convolutional Networks for Large-Scale Image Recognition, arXiv:1409.1556, 2014.

    [9] J. Chung, C. Gulcehre, K. Cho and Y. Bengio, Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling, arXiv:1412.3555, 2014.

    [10] J. Devlin, S. Gupta, R. Girshick, M. Mitchell and C. L. Zitnick, Exploring Nearest Neighbor Approaches for Image Captioning, arXiv:1505.04467, 2015.

    [11] T. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollar and C. L. Zitnick, Microsoft COCO: Common Objects in Context, European Conference on Computer Vision, 740 (2014).

    [12] X. Chen, H. Fang, T. Y. Lin, R. Vedantam, S. Gupta, P. Dollar and C. L. Zitnick, Microsoft COCO Captions: Data Collection and Evaluation Server, arXiv:1504.00325, 2015.

    [13] C. Rashtchian, P. Young, M. Hodosh and J. Hockenmaier, Collecting Image Captions Using Amazon’s Mechanical Turk, NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon’s Mechanical Turk, 139 (2010).

    [14] K. Papineni, S. Roukos, T. Ward and W. J. Zhu, BLEU: A Method for Automatic Evaluation of Machine Translation, 40th Annual Meeting of the Association for Computational Linguistics, 311 (2002).

    [15] R. Vedantam, C. L. Zitnick and D. Parikh, CIDEr: Consensus-Based Image Description Evaluation, IEEE Conference on Computer Vision and Pattern Recognition, 4566 (2015).

    [16] C. Y. Lin, ROUGE: A Package for Automatic Evaluation of Summaries, Proceedings of the Workshop on Text Summarization Branches Out, 2004.

    [17] S. Banerjee and A. Lavie, METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments, ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, 65 (2005).

    [18] X. Jia, E. Gavves, B. Fernando and T. Tuytelaars, Guiding the Long-Short Term Memory Model for Image Caption Generation, IEEE International Conference on Computer Vision, 2407 (2015).
