Zhao Shuanfeng, Huang Tao, Xu Qian, Geng Longlong. Unsupervised Monocular Depth Estimation for Autonomous Flight of Drones[J]. Laser & Optoelectronics Progress, 2020, 57(2): 21012


Fig. 1. Principle of binocular depth estimation
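For reference, the triangulation relation that underlies the binocular principle in Fig. 1: for a rectified stereo pair, the depth of a point follows from the disparity of its matched pixels, the focal length, and the camera baseline. The symbols below are the conventional ones, not necessarily the paper's notation.

```latex
% Depth from binocular disparity (rectified stereo, conventional notation):
% d = x_l - x_r is the horizontal disparity of a matched pixel pair,
% f the focal length, B the stereo baseline.
Z = \frac{f \, B}{d}
```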

Fig. 2. Structural diagram of unsupervised monocular depth estimation

Fig. 3. Model of image reconstruction
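The reconstruction model in Fig. 3 synthesizes one stereo view by sampling the other view at positions shifted by the predicted disparity. A minimal NumPy sketch of that warping step is given below; the function name and the single-channel simplification are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def reconstruct_left(right_img, disparity):
    """Reconstruct the left view by sampling the right image at
    positions shifted by the predicted left disparity.
    right_img:  H x W array (single channel for simplicity)
    disparity:  H x W array of horizontal disparities in pixels
    """
    h, w = right_img.shape
    xs = np.arange(w, dtype=np.float32)
    recon = np.zeros_like(right_img, dtype=np.float32)
    for row in range(h):
        # Sampling coordinates in the right image: x - d(x)
        sample_x = np.clip(xs - disparity[row], 0, w - 1)
        # Linear interpolation along the row (bilinear sampling collapses to 1-D here)
        x0 = np.floor(sample_x).astype(int)
        x1 = np.clip(x0 + 1, 0, w - 1)
        frac = sample_x - x0
        recon[row] = (1 - frac) * right_img[row, x0] + frac * right_img[row, x1]
    return recon
```

Training then penalizes the difference between this reconstruction and the original left image, which is the appearance loss plotted in Fig. 4.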

Fig. 4. Loss function of each part of training process. (a) Structural similarity loss of reconstructed image and original image; (b) absolute value loss of difference between reconstructed image and original image; (c) total image reconstruction loss; (d) loss of disparity smoothness; (e) loss of consistency in left and right disparity maps; (f) total loss of our model
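The loss curves in Fig. 4 correspond to the terms used in left-right-consistent unsupervised depth estimation in the style of Godard et al.: an appearance loss combining SSIM and L1 reconstruction error, an edge-aware disparity smoothness loss, and a left-right disparity consistency loss. A plausible form of each term is sketched below; the weights are placeholders, not the paper's values.

```latex
% Appearance matching: SSIM + L1 between input I and its reconstruction \tilde{I}
C_{ap} = \frac{1}{N}\sum_{i,j}\Big[\alpha\,\frac{1-\mathrm{SSIM}(I_{ij},\tilde{I}_{ij})}{2}
        + (1-\alpha)\,\big\lVert I_{ij}-\tilde{I}_{ij}\big\rVert_1\Big]

% Edge-aware disparity smoothness
C_{ds} = \frac{1}{N}\sum_{i,j}\big|\partial_x d_{ij}\big|\,e^{-\lVert\partial_x I_{ij}\rVert}
        + \big|\partial_y d_{ij}\big|\,e^{-\lVert\partial_y I_{ij}\rVert}

% Left-right disparity consistency
C_{lr} = \frac{1}{N}\sum_{i,j}\big|d^{l}_{ij} - d^{r}_{i,\,j+d^{l}_{ij}}\big|

% Weighted total loss (weights \lambda are placeholders)
C = \lambda_{ap} C_{ap} + \lambda_{ds} C_{ds} + \lambda_{lr} C_{lr}
```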

Fig. 5. Platform of drone experiment. (a) Drone; (b) connection of NVIDIA Jetson TX2 and Pixhawk
Fig. 6. Examples of depth map predicted on KITTI dataset. (a) Input image; (b) ground truth depth map; (c) depth map predicted by Ref. [15]; (d) depth map predicted by Ref. [20]; (e) depth map predicted by our model based on VGG-16; (f) depth map predicted by our model based on ResNet-50

Fig. 7. Examples of depth map predicted in real outdoor scenes. (a) Input images; (b) ground truth depth maps
Table 1. Comparison of experimental results on KITTI dataset
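Monocular depth estimation on KITTI is usually compared with the standard error and accuracy metrics (Abs Rel, Sq Rel, RMSE, log RMSE, and the δ < 1.25^k accuracies). Assuming Table 1 reports these, a reference implementation is sketched below; the function name is illustrative.

```python
import numpy as np

def depth_metrics(gt, pred):
    """Standard KITTI-style depth evaluation metrics.
    gt, pred: 1-D arrays of valid ground-truth and predicted depths (metres).
    """
    # Accuracy under threshold: fraction of pixels with max(gt/pred, pred/gt) < 1.25^k
    thresh = np.maximum(gt / pred, pred / gt)
    a1 = (thresh < 1.25).mean()
    a2 = (thresh < 1.25 ** 2).mean()
    a3 = (thresh < 1.25 ** 3).mean()

    # Error metrics
    abs_rel = np.mean(np.abs(gt - pred) / gt)
    sq_rel = np.mean(((gt - pred) ** 2) / gt)
    rmse = np.sqrt(np.mean((gt - pred) ** 2))
    rmse_log = np.sqrt(np.mean((np.log(gt) - np.log(pred)) ** 2))

    return dict(abs_rel=abs_rel, sq_rel=sq_rel, rmse=rmse,
                rmse_log=rmse_log, a1=a1, a2=a2, a3=a3)
```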
Table 2. Comparison of experimental results on Make3D dataset
