Fig. 1. Flow diagram
Fig. 2. The architecture of U-Net
Fig. 3. Segmentation results of the U-Net model for different kinds of images
Fig. 4. The outlines of buildings (blue) and their minimum enclosing rectangles (red) from the segmentation result
Fig. 5. Finding the pairs of matched points
Fig. 6. The depth image
Fig. 7. The first comparative experiment on optical image segmentation: traditional segmentation methods versus the deep learning method
Fig. 8. The second comparative experiment on optical image segmentation: traditional segmentation methods versus the deep learning method
Fig. 9. The first group of comparative experiments on point cloud segmentation: traditional segmentation methods versus the deep learning method
Fig. 10. The second group of comparative experiments on point cloud segmentation: traditional segmentation methods versus the deep learning method
Fig. 11. The first group of test data
Fig. 12. The second group of test data
Fig. 13. The third group of test data
Fig. 14. Point matching result
Fig. 15. Registration result
Fig. 16. Intermediate process diagram of Method II
| Data | Segmentation method | | | |
|---|---|---|---|---|
| The first optical image | U-Net | 94.93% | 11.22% | 84.76% |
| | Meanshift clustering segmentation | 85.06% | 14.19% | 74.58% |
| | Maximum entropy threshold segmentation | 85.56% | 8.53% | 79.24% |
| The second optical image | U-Net | 99.65% | 11.82% | 87.91% |
| | Meanshift clustering segmentation | 78.71% | 34.17% | 55.88% |
| | Maximum entropy threshold segmentation | 80.46% | 6.08% | 76.48% |
| The first point cloud | U-Net | 95.67% | 10.74% | 86.80% |
| | Meanshift clustering segmentation | 25.19% | 49.53% | 20.20% |
| | Maximum entropy threshold segmentation | 22.29% | 56.72% | 17.25% |
| The second point cloud | U-Net | 99.60% | 14.62% | 85.08% |
| | Meanshift clustering segmentation | 20.28% | 3.91% | 20.11% |
| | Maximum entropy threshold segmentation | 22.29% | 56.72% | 17.25% |

Table 1. Segmentation evaluation metrics
| Data | Point | Min error | Max error | Average error |
|---|---|---|---|---|
| The first group of test data | a | 1.41 | 4.47 | 2.85 |
| | b | 2.23 | 5.38 | 3.05 |
| | c | 2.23 | 5.83 | 3.51 |
| | d | 1.00 | 3.61 | 2.14 |
| | e | 1.00 | 3.00 | 2.27 |
| The second group of test data | a | 2.23 | 6.06 | 4.05 |
| | b | 4.47 | 5.00 | 4.73 |
| | c | 2.00 | 3.60 | 2.88 |
| | d | 1.07 | 4.24 | 2.76 |
| | e | 1.41 | 5.65 | 3.51 |
| The third group of test data | a | 1.24 | 4.00 | 2.71 |
| | b | 2.00 | 2.82 | 2.47 |
| | c | 2.00 | 5.09 | 3.27 |
| | d | 2.23 | 6.08 | 3.98 |
| | e | 1.00 | 4.00 | 2.39 |

Table 2. Registration accuracy (unit: pixel)
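The per-point errors in Table 2 appear to be Euclidean pixel distances between each registered point and its ground-truth location (e.g. 1.41 ≈ √2, 2.23 ≈ √5). A minimal sketch under that assumption, with invented point coordinates for illustration:

```python
import math

def pixel_errors(pred, gt):
    """Euclidean pixel error between each registered point and its ground truth,
    returned as (min, max, average) over all matched pairs."""
    d = [math.hypot(px - gx, py - gy) for (px, py), (gx, gy) in zip(pred, gt)]
    return min(d), max(d), sum(d) / len(d)

# Hypothetical matched points (not from the paper's data):
pred = [(10, 10), (50, 42), (30, 21)]
gt = [(11, 11), (52, 41), (33, 25)]
lo, hi, avg = pixel_errors(pred, gt)
print(round(lo, 2), round(hi, 2), round(avg, 2))  # → 1.41 5.0 2.88
```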
| Data | Method I: E | Method II: match points | Method II: correct matches | Method II: E | Method III: match points | Method III: correct matches | Method III: E | Our method: match points | Our method: correct matches | Our method: E |
|---|---|---|---|---|---|---|---|---|---|---|
| a-1 | 9.83 | 9 | 2 | / | 4 | 0 | / | 5 | 5 | 3.16 |
| a-2 | 10.21 | 18 | 2 | / | 7 | 4 | 4.08 | 4 | 4 | 3.93 |
| a-3 | 8.14 | 20 | 2 | / | 5 | 0 | / | 3 | 3 | 3.75 |
| a-4 | 10.18 | 25 | 1 | / | 5 | 1 | / | 5 | 5 | 2.78 |
| a-5 | 15.07 | 18 | 3 | 2.65 | 5 | 2 | / | 4 | 4 | 2.38 |
| b-1 | 74.79 | 19 | 0 | / | 6 | 0 | / | 4 | 4 | 4.55 |
| b-2 | 191.55 | 17 | 0 | / | 5 | 2 | / | 5 | 5 | 4.74 |
| b-3 | 191.28 | 13 | 0 | / | 6 | 0 | / | 5 | 5 | 2.95 |
| b-4 | 88.64 | 13 | 0 | / | 6 | 1 | / | 5 | 5 | 3.34 |
| b-5 | 138.46 | 12 | 0 | / | 4 | 1 | / | 5 | 5 | 3.51 |
| c-1 | 115.18 | 14 | 1 | / | 4 | 1 | / | 4 | 4 | 2.94 |
| c-2 | 154.13 | 26 | 0 | / | 6 | 2 | / | 4 | 4 | 2.49 |
| c-3 | 124.37 | 17 | 0 | / | 6 | 0 | / | 3 | 3 | 3.42 |
| c-4 | 107.55 | 9 | 0 | / | 7 | 4 | 4.41 | 4 | 4 | 4.30 |
| c-5 | 149.23 | 13 | 1 | / | 6 | 0 | / | 4 | 4 | 2.70 |

Table 3. Feature point matching of Method II, Method III, and our method, and the root mean square error (E) of the four registration methods
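The error metric E in Table 3 is stated to be a root mean square error over the registered points. A minimal sketch of how such an RMSE could be computed over matched point pairs (the coordinates below are invented for illustration, not the paper's data):

```python
import math

def rmse(pred, gt):
    """Root mean square pixel error over matched point pairs."""
    sq = [(px - gx) ** 2 + (py - gy) ** 2 for (px, py), (gx, gy) in zip(pred, gt)]
    return math.sqrt(sum(sq) / len(sq))

# Hypothetical matched pairs: squared distances are 25 and 1,
# so RMSE = sqrt((25 + 1) / 2) = sqrt(13) ≈ 3.61
pred = [(0, 0), (10, 5)]
gt = [(3, 4), (10, 4)]
print(round(rmse(pred, gt), 2))  # → 3.61
```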