Author Affiliations
1. School of Automation, Southeast University, Nanjing, Jiangsu 210096, China
2. Key Laboratory of Measurement and Control of Complex Systems of Engineering, Ministry of Education, Southeast University, Nanjing, Jiangsu 210096, China
3. Shenzhen Research Institute, Southeast University, Shenzhen, Guangdong 518063, China
Fig. 1. Algorithm block diagram
Fig. 2. Structure of our proposed network
Fig. 3. Structure of spatial attention mechanism
Fig. 4. Structure of channel attention mechanism
Fig. 5. Results on KITTI2015; from top to bottom: left input image, predicted disparity map, ground-truth disparity map, error map
Fig. 6. Results on KITTI2012; from top to bottom: left input image, predicted disparity map, ground-truth disparity map, error map
Fig. 7. Results on SceneFlow; from top to bottom: left input image, ground-truth disparity map, predicted disparity map
Fig. 8. Comparison with other algorithms; from top to bottom: PSM-Net results, GWC-Net results, our results; the framed regions highlight our improvements
Table 1. Comparison with other algorithms. SceneFlow is reported as end-point error (EPE, in pixels); the 2/3/4 px columns are KITTI2012 error rates (%) over non-occluded (Out-Noc) and all (Out-All) pixels; the D1-all columns are KITTI2015 error rates (%) over all (All) and non-occluded (Noc) pixels.

| Method | SceneFlow EPE | 2 px Out-Noc | 2 px Out-All | 3 px Out-Noc | 3 px Out-All | 4 px Out-Noc | 4 px Out-All | D1-all (All) | D1-all (Noc) |
|---|---|---|---|---|---|---|---|---|---|
| MC-CNN (Žbontar et al., 2016) [2] | 3.79 | 3.90 | 5.45 | 2.43 | 3.63 | 1.90 | 2.85 | 3.88 | 3.33 |
| GC-Net (Cao et al., 2019) [3] | 2.51 | 2.71 | 3.46 | 1.77 | 2.30 | 1.36 | 1.77 | 2.67 | 2.45 |
| iResNet-i2 (Liang et al., 2018) [20] | 1.40 | 2.69 | 3.34 | 1.71 | 2.16 | 1.30 | 1.63 | 2.44 | 2.19 |
| PSM-Net (Chang et al., 2018) [5] | 1.09 | 2.44 | 3.01 | 1.49 | 1.89 | 1.12 | 1.42 | 2.32 | 2.14 |
| SegStereo (Yang et al., 2018) [14] | 1.45 | 2.66 | 3.19 | 1.68 | 2.03 | 1.25 | 1.52 | 2.25 | 2.08 |
| GA-Net (Zhang et al., 2019) [7] | 0.84 | 2.18 | 2.79 | 1.36 | 1.80 | 1.03 | 1.37 | 1.93 | 1.73 |
| Ours | 0.95 | 2.33 | 2.98 | 1.42 | 1.76 | 0.92 | 1.21 | 2.22 | 2.07 |
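The metrics in Table 1 can be made concrete with a minimal sketch (function names are ours, not from the paper): EPE is the mean absolute disparity error over valid ground-truth pixels, a k-pixel error rate (the Out-Noc/Out-All columns) is the fraction of valid pixels whose error exceeds k pixels, and KITTI2015's D1 metric counts a pixel as erroneous when its error exceeds both 3 px and 5% of the ground-truth disparity.

```python
import numpy as np

def epe(pred, gt, valid):
    """End-point error: mean |pred - gt| over valid ground-truth pixels."""
    return np.abs(pred - gt)[valid].mean()

def k_px_error_rate(pred, gt, valid, k=3):
    """Fraction of valid pixels with disparity error > k pixels, in %."""
    err = np.abs(pred - gt)[valid]
    return 100.0 * (err > k).mean()

def d1(pred, gt, valid):
    """KITTI2015 D1: error > 3 px AND > 5% of ground-truth disparity, in %."""
    err = np.abs(pred - gt)[valid]
    g = gt[valid]
    bad = (err > 3.0) & (err > 0.05 * g)
    return 100.0 * bad.mean()

# Toy example: 2x2 disparity maps; KITTI encodes missing ground truth as 0.
gt = np.array([[10.0, 20.0], [30.0, 0.0]])
pred = np.array([[11.0, 26.0], [30.5, 5.0]])
valid = gt > 0
print(epe(pred, gt, valid))               # mean of [1.0, 6.0, 0.5] -> 2.5
print(k_px_error_rate(pred, gt, valid))   # 1 of 3 valid pixels > 3 px
print(d1(pred, gt, valid))                # same pixel also exceeds 5% of gt
```

Evaluating over non-occluded pixels only (Out-Noc, D1-all Noc) versus all pixels with ground truth (Out-All, D1-all All) amounts to passing a different `valid` mask to the same functions.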