[1] H WEI, L J MENG. An accurate stereo matching method based on color segments and edges. Pattern Recognition, 133, 108996(2023).
[2] J K LI, P S WANG, P F XIONG et al. Practical stereo matching via cascaded recurrent network with adaptive correlation. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 16263-16272(2022).
[3] C WANG, X WANG, J W ZHANG et al. Uncertainty estimation for stereo matching based on evidential deep learning. Pattern Recognition, 124, 108498(2022).
[4] A KENDALL, H MARTIROSYAN, S DASGUPTA et al. End-to-end learning of geometry and context for deep stereo regression. Proceedings of the IEEE International Conference on Computer Vision, 66-75(2017).
[5] J R CHANG, Y S CHEN. Pyramid stereo matching network. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 5410-5418(2018).
[6] F H ZHANG, V PRISACARIU, R G YANG et al. GA-Net: guided aggregation net for end-to-end stereo matching. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 185-194(2019).
[7] G W XU, J D CHENG, P GUO et al. Attention concatenation volume for accurate and efficient stereo matching. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 12981-12990(2022).
[8] L C CHEN, G PAPANDREOU, I KOKKINOS et al. DeepLab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40, 834-848(2018).
[9] B XU, Y H XU, X L YANG et al. Bilateral grid learning for stereo matching networks. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 12497-12506(2021).
[10] M MENZE, A GEIGER. Object scene flow for autonomous vehicles. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 3061-3070(2015).
[11] D P KINGMA, J BA. Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980(2014).
[12] D SCHARSTEIN, H HIRSCHMÜLLER, Y KITAJIMA et al. High-resolution stereo datasets with subpixel-accurate ground truth. Proceedings of the German Conference on Pattern Recognition, 31-42(2014).
[13] S KHAMIS, S FANELLO, C RHEMANN et al. StereoNet: guided hierarchical refinement for real-time edge-aware depth prediction. Proceedings of the European Conference on Computer Vision(2018).
[14] J H PANG, W X SUN, J S REN et al. Cascade residual learning: a two-stage convolutional neural network for stereo matching. Proceedings of the IEEE International Conference on Computer Vision Workshops, 887-895(2017).
[15] G YANG, Y T LIAO. Algorithm of binocular stereo matching based on AEDNet. Journal of Huazhong University of Science and Technology (Natural Science Edition), 50, 24-28(2022). (in Chinese)