• Laser & Optoelectronics Progress
  • Vol. 58, Issue 24, 2433002 (2021)
Jihui Huang, Rongfen Zhang, Yuhong Liu*, Zhixu Chen, and Zipeng Wang
Author Affiliations
  • College of Big Data and Information Engineering, Guizhou University, Guiyang, Guizhou 550025, China
    DOI: 10.3788/LOP202158.2433002
    Jihui Huang, Rongfen Zhang, Yuhong Liu, Zhixu Chen, Zipeng Wang. Optimized Deep Learning Stereo Matching Algorithm[J]. Laser & Optoelectronics Progress, 2021, 58(24): 2433002.

    Abstract

    Deep-learning-based stereo matching algorithms currently suffer from complex network structures and high computational cost. To address these problems, an end-to-end stereo matching network with only about half the parameters of the reference network PSMNet is proposed. In the feature extraction module, the general framework is retained while redundant convolutional layers are removed, and spatial and channel attention mechanisms are integrated to aggregate contextual information. In the cost calculation module, the input disparity dimension is reduced by increasing the disparity offset, which greatly reduces the parameter count and computational cost of disparity calculation. In the disparity calculation stage, multi-disparity prediction is performed on the output matching cost volume, and a cross-entropy loss is added to the L1 loss, preserving matching accuracy while reducing the model's computational cost. The proposed algorithm is evaluated on the KITTI and SceneFlow datasets. Experimental results show that, compared with the benchmark method, the proposed model reduces the parameter count by 58% while improving accuracy by 24%.
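
    The combined loss described in the abstract can be illustrated with a short sketch. The following PyTorch-style snippet is a minimal illustration, not the paper's implementation: the function name, the ce_weight balance factor, the soft-argmin regression, and the use of a hard (rounded) ground-truth disparity bin for the cross-entropy term are all assumptions introduced here for clarity.

import torch
import torch.nn.functional as F

def disparity_loss(cost_volume, gt_disp, max_disp=192, ce_weight=0.1):
    # cost_volume: (B, D, H, W) matching costs; gt_disp: (B, H, W) ground truth.
    # Convert costs to a probability distribution over disparities (soft argmin).
    prob = F.softmax(-cost_volume, dim=1)
    disp_values = torch.arange(max_disp, device=cost_volume.device,
                               dtype=prob.dtype).view(1, -1, 1, 1)
    pred_disp = torch.sum(prob * disp_values, dim=1)   # regressed disparity (B, H, W)

    # Only supervise pixels with valid ground-truth disparity.
    mask = (gt_disp > 0) & (gt_disp < max_disp)
    l1 = F.smooth_l1_loss(pred_disp[mask], gt_disp[mask])

    # Cross-entropy against the rounded ground-truth disparity bin
    # (a simplification; a soft target distribution could be used instead).
    gt_idx = gt_disp.clamp(0, max_disp - 1).long()
    log_prob = torch.log(prob.clamp_min(1e-8))
    ce = F.nll_loss(log_prob, gt_idx, reduction='none')[mask].mean()

    return l1 + ce_weight * ce

    The cross-entropy term supervises the shape of the disparity probability distribution directly, while the smooth-L1 term supervises the regressed disparity value; weighting the two is a design choice assumed here rather than taken from the paper.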