• Optoelectronics Letters
  • Vol. 13, Issue 6, 448 (2017)
Jie WU1, Si-ya XIE1, Xin-bao SHI1, and Yao-wen CHEN2,*
Author Affiliations
  • 1College of Engineering, Shantou University, Shantou 515063, China
  • 2Key Laboratory of Digital Signal and Image Processing of Guangdong, Shantou University, Shantou 515063, China
    DOI: 10.1007/s11801-017-7185-4
    Citation: WU Jie, XIE Si-ya, SHI Xin-bao, CHEN Yao-wen. Global-local feature attention network with reranking strategy for image caption generation[J]. Optoelectronics Letters, 2017, 13(6): 448

    Abstract

    In this paper, a novel framework, named global-local feature attention network with reranking strategy (GLAN-RS), is presented for the image captioning task. Rather than relying only on unitary visual information as in classical models, GLAN-RS uses an attention mechanism to capture salient local convolutional feature maps. Furthermore, we adopt a reranking strategy to adjust the priority of the candidate captions and select the best one. The proposed model is evaluated on the Microsoft Common Objects in Context (MSCOCO) benchmark dataset across seven standard evaluation metrics. Experimental results show that GLAN-RS significantly outperforms state-of-the-art approaches such as the multimodal recurrent neural network (MRNN) and Google NIC, achieving an improvement of 20% in BLEU-4 score and 13 points in CIDEr score.
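    To make the two ideas in the abstract concrete, below is a minimal NumPy sketch of (i) soft attention over local convolutional feature maps fused with a global image feature, and (ii) reranking of candidate captions. The parameter shapes, the fusion by concatenation, and the length-normalized reranking score are illustrative assumptions, not the exact formulation used by GLAN-RS.

    import numpy as np

    def softmax(x, axis=-1):
        x = x - x.max(axis=axis, keepdims=True)
        e = np.exp(x)
        return e / e.sum(axis=axis, keepdims=True)

    def global_local_attention(local_feats, global_feat, hidden, W_l, W_g, W_h, v):
        """Soft attention over local CNN feature maps, fused with the global feature.

        local_feats: (K, D) -- K spatial regions of a conv layer, D-dim each
        global_feat: (D,)   -- pooled CNN feature of the whole image
        hidden:      (H,)   -- current decoder hidden state
        W_l, W_g, W_h, v    -- learned projections (hypothetical shapes)
        """
        # Attention energy for each region, conditioned on the decoder state.
        e = np.tanh(local_feats @ W_l + global_feat @ W_g + hidden @ W_h) @ v  # (K,)
        alpha = softmax(e)             # attention weights over the K regions
        context = alpha @ local_feats  # (D,) attended local context
        # Fuse global and attended local information (simple concatenation here).
        return np.concatenate([global_feat, context]), alpha

    def rerank(candidates):
        """Rerank beam-search captions by length-normalized log-probability
        (an illustrative criterion, not necessarily the one used by GLAN-RS)."""
        return sorted(candidates,
                      key=lambda c: c["logprob"] / max(len(c["tokens"]), 1),
                      reverse=True)

    if __name__ == "__main__":
        K, D, H = 49, 512, 256  # e.g. a 7x7 conv map with 512 channels
        rng = np.random.default_rng(0)
        local = rng.standard_normal((K, D))
        glob = rng.standard_normal(D)
        hid = rng.standard_normal(H)
        W_l = rng.standard_normal((D, 128))
        W_g = rng.standard_normal((D, 128))
        W_h = rng.standard_normal((H, 128))
        v = rng.standard_normal(128)
        fused, alpha = global_local_attention(local, glob, hid, W_l, W_g, W_h, v)
        print(fused.shape, round(alpha.sum(), 6))  # (1024,) 1.0

        beams = [{"tokens": ["a", "dog", "runs"], "logprob": -3.2},
                 {"tokens": ["a", "dog", "runs", "fast"], "logprob": -3.9}]
        print(rerank(beams)[0]["tokens"])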