Fig. 1. General framework of the learnable binary quantization network model for point clouds
Fig. 2. Learnable binary quantization network model based on PointNet
Fig. 3. Genetic-algorithm-based recovery of the binary quantization scale factor
Fig. 4. Optimal search for the scale factor based on genetic-algorithm optimization
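The scale-factor recovery illustrated in Figs. 3 and 4 can be sketched as a simple genetic search over a scalar. Everything below (the population size, the negative-MSE fitness against `alpha * sign(w)`, the averaging crossover and Gaussian mutation) is an illustrative assumption, not the paper's exact configuration:

```python
import numpy as np

def fitness(alpha, w):
    """Negative quantization error of approximating w by alpha * sign(w)."""
    return -np.mean((w - alpha * np.sign(w)) ** 2)

def ga_search_alpha(w, pop_size=20, generations=40, mut_std=0.05, seed=0):
    """Genetic search for the binary-quantization scale factor alpha."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(0.0, 2.0, size=pop_size)            # initial candidates
    for _ in range(generations):
        scores = np.array([fitness(a, w) for a in pop])
        elite = pop[np.argsort(scores)[-pop_size // 2:]]  # keep the best half
        # Crossover: average two randomly chosen elite parents.
        parents = rng.choice(elite, size=(pop_size, 2))
        children = parents.mean(axis=1)
        # Mutation: small Gaussian perturbation.
        pop = children + rng.normal(0.0, mut_std, size=pop_size)
    scores = np.array([fitness(a, w) for a in pop])
    return pop[np.argmax(scores)]

w = np.random.default_rng(1).normal(size=1024)   # stand-in weight tensor
alpha = ga_search_alpha(w)
# For this MSE fitness the closed-form optimum is mean(|w|),
# so the search should converge near that value.
```

For a quadratic fitness like this a closed-form solution exists; a genetic search only pays off for the non-differentiable or multi-modal objectives the paper targets.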
Fig. 5. Curves of feature entropy before and after pooling. (a) n=5; (b) n=20; (c) n=50; (d) n=100
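The entropy collapse plotted in Fig. 5 can be reproduced analytically for 1-bit features. Modeling pre-pooling activations as i.i.d. Bernoulli variables (an assumption made purely for illustration), max pooling over n values drives the distribution toward constant 1, so the information entropy falls toward 0 as n grows:

```python
import numpy as np

def binary_entropy(p):
    """Shannon entropy (bits) of a Bernoulli(p) variable, with H(0)=H(1)=0."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def entropy_after_max_pool(p, n):
    """Entropy of the max of n i.i.d. Bernoulli(p) features: P(1) = 1-(1-p)^n."""
    return binary_entropy(1.0 - (1.0 - p) ** n)

p = 0.5                      # before pooling: 1 bit of entropy
for n in (5, 20, 50, 100):   # the pooling sizes shown in Fig. 5
    print(n, entropy_after_max_pool(p, n))
```

Already at n = 5 the pooled entropy drops to roughly 0.2 bit, which is the motivation for the pooling optimization in the following figures.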
Fig. 6. Minimization of the statistically self-adaptive pooling loss. (a) Self-adjustment of the quantization network; (b) statistical knowledge-transfer adjustment from the full-precision network
Fig. 7. Comparison of adjusted feature probability distributions. (a) Feature distribution comparison 1; (b) feature distribution comparison 2; (c) feature distribution comparison 3
Fig. 8. Learnable training process
Fig. 9. Comparison of optimized pooling. (a) Quantization network self-adjustment; (b) full-precision network transfer adjustment
Fig. 10. Training performance comparison. (a) Comparison result 1; (b) comparison result 2; (c) comparison result 3
Fig. 11. Scale-factor search based on the genetic optimization algorithm. (a) Iterative search process; (b) feature maps produced by the binary convolution layer
Fig. 12. Comparison of feature maps from different channels of different binary convolution layers (sub-figures from left to right show three corresponding channels in sequence). (a) Feature maps of different channels of the first binary convolution layer; (b) feature maps of different channels of the second binary convolution layer; (c) feature maps of different channels of the third binary convolution layer
Fig. 13. Comparison of feature maps of binary convolution layers at different locations (sub-figures from left to right show three convolution layers in sequence). (a) Feature map at location 1; (b) feature map at location 2; (c) feature map at location 3
Fig. 14. Feature maps of pooling. (a) Activation features before pooling; (b) pooling features of non-optimized binary network; (c) pooling features of binary network with pooling optimization; (d) pooling features of binary network with scaling and pooling optimization
Fig. 15. Selected results of part segmentation. (a) Knife; (b) motorbike; (c) lamp
Fig. 16. Selected results of semantic segmentation. (a) Area 1_Conference Room 2; (b) Area 1_Office Room 2; (c) Area 1_Hallway 1
Fig. 17. Overall performance comparisons. (a) Performance comparison 1; (b) performance comparison 2
Fig. 18. Inference time comparisons
Method | Bit width Nw /bit | Bit width Na /bit | Scaling/shifting factor | Floating-point calculation | Bitwise calculation
---|---|---|---|---|---
BNN | 1 | 1 | — | 0 | O1×O2
XNOR-Net | 1 | 1 | Scaling | O1 | O1×O2
IRNet | 1 | 1 | Shifting | 0 | O1×O2+O1
BiPointNet | 1 | 1 | Scaling | O1 | O1×O2
BiPointNet | 1 | 1 | Pooling shifting | S | 0
Proposed model | 1 | 1 | Scaling | O1 | O1×O2
Proposed model | 1 | 1 | Pooling shifting | S | 0
Proposed model | 1 | 1 | Pooling scaling | S | 0
Table 1. Comparison of binary quantization algorithms
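The cost columns of Table 1 reflect how a binary layer is evaluated: the O1×O2 term is pure bitwise work (XNOR plus popcount), while a scaling factor adds one floating-point multiply per output. A minimal sketch of that arithmetic, using a single dot product instead of a full convolution (the packing scheme and names are illustrative assumptions):

```python
import numpy as np

def binarize(x):
    """Encode real values as {+1, -1}, stored as bits {1, 0}."""
    return (x >= 0).astype(np.uint8)

def xnor_popcount_dot(a_bits, w_bits):
    """Dot product of two {-1, +1} vectors from their bit encodings:
    matches = popcount(XNOR(a, w)); result = 2 * matches - n."""
    n = a_bits.size
    matches = np.count_nonzero(~(a_bits ^ w_bits) & 1)
    return 2 * matches - n

rng = np.random.default_rng(0)
a, w = rng.normal(size=64), rng.normal(size=64)

alpha = np.abs(w).mean()   # XNOR-Net-style per-filter scaling factor
# Bitwise path: one XNOR-popcount dot product + one float multiply (O1).
approx = alpha * xnor_popcount_dot(binarize(a), binarize(w))
# Reference path: the same product computed on the {-1, +1} values directly.
exact = alpha * float(np.sign(a) @ np.sign(w))
```

The two paths agree exactly; the saving comes from replacing n multiply-accumulates with one XNOR and one popcount over packed bits.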
Method | Pooling type | Bit width Nw /bit | Bit width Na /bit | Precision Pc /%
---|---|---|---|---
Full precision | MAX | 32 | 32 | 88.2
BNN | MAX | 1 | 1 | 26.8
IRNet | MAX | 1 | 1 | 18.5
XNOR-Net | MAX | 1 | 1 | 71.8
BiPointNet | MAX | 1 | 1 | 4.1
Proposed method (Hist) | MAX | 1 | 1 | 79.9
Proposed method (KDE) | MAX | 1 | 1 | 80.2
Proposed method (KNN) | MAX | 1 | 1 | 81.7
Table 2. Comparison of binary quantization methods without optimized pooling
Method | Pooling type | Bit width Nw /bit | Bit width Na /bit | Precision Pc /%
---|---|---|---|---
Full precision | MAX | 32 | 32 | 88.2
BNN | MAX | 1 | 1 | 26.8
BNN | APSS1* | 1 | 1 | 80.2
BNN | APSS2* | 1 | 1 | 78.1
IRNet | MAX | 1 | 1 | 18.5
IRNet | APSS1* | 1 | 1 | 82.3
IRNet | APSS2* | 1 | 1 | 80.7
XNOR-Net | MAX | 1 | 1 | 71.8
XNOR-Net | APSS1* | 1 | 1 | 86.0
XNOR-Net | APSS2* | 1 | 1 | 85.6
BiPointNet | MAX | 1 | 1 | 4.1
BiPointNet | APSS1* | 1 | 1 | 81.3
BiPointNet | APSS2* | 1 | 1 | 82.7
Table 3. Comparison of binary quantization methods with optimized pooling
Method | Pooling type | Bit width Nw /bit | Bit width Na /bit | Precision Pc /%
---|---|---|---|---
Full precision | MAX | 32 | 32 | 88.2
BNN | MAX | 1 | 1 | 26.8
XNOR-Net | MAX | 1 | 1 | 71.8
IRNet | MAX | 1 | 1 | 18.5
BiPointNet | EMA | 1 | 1 | 86.1
Proposed method (Hist) | APSS1* | 1 | 1 | 86.5
Proposed method (Hist) | APSS2* | 1 | 1 | 85.3
Proposed method (KDE) | APSS1* | 1 | 1 | 87.5
Proposed method (KDE) | APSS2* | 1 | 1 | 87.3
Proposed method (KNN) | APSS1* | 1 | 1 | 86.6
Proposed method (KNN) | APSS2* | 1 | 1 | 87.4
Table 4. Comparison of typical binary quantization methods
Method | Aero | Bag | Cap | Car | Chair | Earphone | Guitar | Knife | Lamp | Laptop | Motorbike | Mug | Pistol | Rocket | Skateboard | Table
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
Full precision | 83.1 | 89.0 | 95.2 | 78.3 | 90.4 | 78.1 | 93.3 | 92.9 | 81.9 | 97.9 | 70.7 | 95.9 | 81.6 | 57.4 | 74.8 | 81.5
BiPointNet | 79.6 | 69.6 | 86.3 | 67.5 | 88.6 | 69.8 | 87.5 | 83.3 | 75.0 | 95.3 | 45.1 | 91.6 | 76.8 | 47.9 | 57.5 | 79.6
Proposed model | 80.2 | 64.8 | 87.0 | 66.8 | 87.0 | 77.6 | 89.7 | 84.3 | 76.3 | 96.7 | 50.2 | 92.3 | 79.6 | 50.1 | 66.2 | 80.1
Table 5. Precision of part segmentation (%)
Method | Overall mIoU | Overall acc | mIoU/acc of area 1 | mIoU/acc of area 2 | mIoU/acc of area 3 | mIoU/acc of area 4 | mIoU/acc of area 5 | mIoU/acc of area 6
---|---|---|---|---|---|---|---|---
Full precision | 51.9 | 82.0 | 59.7/85.1 | 34.7/73.6 | 60.9/87.2 | 43.6/80.9 | 43.1/82.0 | 66.2/88.1
BiPointNet | 43.4 | 76.3 | 50.1/77.9 | 29.7/69.8 | 53.3/81.6 | 36.2/73.3 | 36.5/77.0 | 57.8/82.4
Proposed model | 43.9 | 77.5 | 51.8/78.9 | 27.1/68.3 | 55.1/83.2 | 37.5/75.1 | 36.9/78.8 | 59.1/84.0

Method | IoU of ceiling | IoU of floor | IoU of wall | IoU of beam | IoU of column | IoU of window | IoU of door | IoU of table | IoU of chair | IoU of sofa | IoU of bookcase | IoU of board | IoU of clutter
---|---|---|---|---|---|---|---|---|---|---|---|---|---
Full precision | 89.7 | 93.7 | 71.0 | 50.2 | 34.0 | 52.9 | 53.4 | 56.7 | 46.6 | 9.5 | 38.5 | 36.4 | 41.3
BiPointNet | 84.2 | 85.6 | 62.0 | 32.8 | 22.9 | 41.7 | 47.3 | 45.2 | 39.5 | 9.1 | 35.3 | 25.8 | 33.2
Proposed model | 85.0 | 86.3 | 60.3 | 33.7 | 24.2 | 43.4 | 46.5 | 46.6 | 41.1 | 8.7 | 34.2 | 26.5 | 34.7
Table 6. Semantic segmentation experiment results (%)
Model | Method | Bit width Nw /bit | Bit width Na /bit | Precision Pc /%
---|---|---|---|---
PointNet | Full precision | 32 | 32 | 88.2
PointNet | BiPointNet | 1 | 1 | 86.1
PointNet | Proposed method | 1 | 1 | 87.2
PointNet++ | Full precision | 32 | 32 | 90.7
PointNet++ | BiPointNet | 1 | 1 | 88.5
PointNet++ | Proposed method | 1 | 1 | 89.0
PointCNN | Full precision | 32 | 32 | 89.7
PointCNN | BiPointNet | 1 | 1 | 81.5
PointCNN | Proposed method | 1 | 1 | 82.8
DGCNN | Full precision | 32 | 32 | 90.9
DGCNN | BiPointNet | 1 | 1 | 75.0
DGCNN | Proposed method | 1 | 1 | 83.6
Table 7. Comparative experiment results for typical network models
Method | Pooling type | FLOPs per sample /M | Speedup ratio Sr | Parameters Pa /Mbit | Compression ratio Cr
---|---|---|---|---|---
Full precision | MAX | 443.38 | 1 | 3.48 | 1
BNN | MAX | 8.35 | 53 | 0.15 | 23
BNN | APSS1* | 10.45 | 42 | 0.15 | 23
BNN | APSS2* | 12.56 | 35 | 0.15 | 23
IRNet | MAX | 8.94 | 50 | 0.16 | 22
IRNet | APSS1* | 11.05 | 40 | 0.16 | 22
IRNet | APSS2* | 13.15 | 34 | 0.16 | 22
XNOR-Net | MAX | 9.89 | 45 | 0.62 | 6
XNOR-Net | APSS1* | 11.99 | 37 | 0.62 | 6
XNOR-Net | APSS2* | 14.09 | 31 | 0.62 | 6
BiPointNet | EMA | 10.56 | 42 | 0.15 | 23
BiPointNet | APSS1* | 10.56 | 42 | 0.15 | 23
BiPointNet | APSS2* | 12.66 | 35 | 0.15 | 23
Proposed model | MAX | 8.46 | 52 | 0.15 | 23
Proposed model | APSS1* | 10.56 | 42 | 0.15 | 23
Proposed model | APSS2* | 12.66 | 35 | 0.15 | 23
Table 8. Complexity comparison results