• Journal of Applied Optics
  • Vol. 44, Issue 5, 1030 (2023)
Xiuman LIANG, Jinming AN, Xiaohua CAO, Kai ZENG*, Fubin WANG and Hefei LIU
Author Affiliations
  • College of Electrical Engineering, North China University of Science and Technology, Tangshan 063210, China
    DOI: 10.5768/JAO202344.0502003
    Xiuman LIANG, Jinming AN, Xiaohua CAO, Kai ZENG, Fubin WANG, Hefei LIU. Classification of combustion state of sintering flame based on CNN-Transformer dual-stream network[J]. Journal of Applied Optics, 2023, 44(5): 1030
    References

    [1] Fubin WANG, Hefei LIU, Jianghong HE et al. Analysis of sintering operation process parameters and construction of sintering behavior model. Sintering and Pelletizing, 45, 29-34(2020).

    [2] Fubin WANG, Hefei LIU, Rui WANG et al. Multi-core Boosting saliency detection of flame images of sintered sections. Journal of Computer-Aided Design & Computer Graphics, 33, 1466-1474(2021).

    [3] Jiangyun LI, Zhifang YANG, Junfeng ZHENG et al. Application of deep learning technology in iron and steel industry. Iron and Steel, 56, 43-49(2021).

    [4] A KRIZHEVSKY, I SUTSKEVER, G HINTON. ImageNet classification with deep convolutional neural networks. Communications of the ACM, 60, 84-90(2017).

    [5] K HE, X ZHANG, S REN et al. Deep residual learning for image recognition. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 770-778(2016).

    [6] A G HOWARD, M ZHU, B CHEN et al. MobileNets: efficient convolutional neural networks for mobile vision applications. https://arxiv.org/abs/1704.04861

    [7] A DOSOVITSKIY, L BEYER, A KOLESNIKOV et al. An image is worth 16x16 words: transformers for image recognition at scale. https://www.xueshufan.com/publication/3119786062

    [8] Qianchuang ZHANG, Chenxia GUO, Ruifeng YANG et al. Super-resolution reconstruction of optical fiber ring image based on lightweight network. Journal of Applied Optics, 43, 913-920(2022).

    [9] M MAAZ, A SHAKER, H CHOLAKKAL et al. EdgeNeXt: efficiently amalgamated CNN-Transformer architecture for mobile vision applications. https://arxiv.org/abs/2206.10589v3

    [10] A ZADEH, M CHEN, S PORIA et al. Tensor fusion network for multimodal sentiment analysis. https://arxiv.org/pdf/1707.07250.pdf

    [11] Z LIU, Y SHEN, V B LAKSHMINARASIMHAN et al. Efficient low-rank multimodal fusion with modality-specific factors. https://arxiv.org/abs/1806.00064

    [12] Z PENG, W HUANG, S GU et al. Conformer: local features coupling global representations for visual recognition. https://arxiv.org/abs/2105.03889

    [13] S WOO, J PARK, J Y LEE et al. CBAM: convolutional block attention module. https://arxiv.org/abs/1807.06521

    [14] C SZEGEDY, V VANHOUCKE, S IOFFE et al. Rethinking the Inception architecture for computer vision. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2818-2826(2016).

    [15] M SANDLER, A HOWARD, M ZHU et al. MobileNetV2: inverted residuals and linear bottlenecks. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 4510-4520(2018).

    [16] J GUO, K HAN, H WU et al. CMT: convolutional neural networks meet vision transformers. https://arxiv.org/abs/2107.06263
