Author Affiliations
1 Hubei Key Laboratory of Broadband Wireless Communication and Sensor Networks, School of Information Engineering, Wuhan University of Technology, Wuhan, Hubei 430070, China
2 School of Electrical and Information Engineering, Hunan Institute of Technology, Hengyang, Hunan 421002, China
3 School of Artificial Intelligence, Xidian University, Xi'an, Shaanxi 710071, China
Fig. 1. Flow chart of proposed ship classification model
Fig. 2. Structural diagram of typical CNN
Fig. 3. Network structure of 3D CNN
Fig. 4. CAD data and point cloud ship images. (a)(b) cabin; (c)(d) rowing; (e)(f) sailing; (g)(h) cruise; (i)(j) cargo
Fig. 5. Point cloud data samples in Sydney urban object dataset
| Layer | Input size | Filter size | Stride | Output size | Number of parameters |
|---|---|---|---|---|---|
| Conv1 | 32×32×32×1 | 5×5×5×32 | 2 | 14×14×14×32 | 4032 |
| Conv2 | 14×14×14×32 | 3×3×3×32 | 1 | 12×12×12×32 | 27680 |
| Conv3 | 12×12×12×32 | 3×3×3×64 | 1 | 10×10×10×64 | 55360 |
| Max pooling 1 | 10×10×10×64 | 2×2×2 | 2 | 5×5×5×64 | 0 |
| FC1 | 5×5×5×64 | - | - | 512 | 4096001 |
| FC2 | 512 | - | - | 128 | 65537 |
| FC3-Softmax | 128 | - | - | 5 | 641 |

Table 1. Detailed parameters of the 3D CNN
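The spatial dimensions in Table 1 can be cross-checked with the standard valid-convolution size formula. The sketch below is an illustration, not the authors' code; the parameter-count helper assumes one bias per output filter, which matches the convolutional rows of the table (the fully connected rows appear to follow a different bias convention).

```python
# Sanity check of the 3D CNN layer dimensions and conv parameter
# counts listed in Table 1 (illustrative sketch, not the paper's code).

def conv3d_out(n, k, s):
    """Output side length of a valid (no-padding) 3D convolution."""
    return (n - k) // s + 1

# Conv1: 32^3 input, 5^3 kernel, stride 2 -> 14^3
assert conv3d_out(32, 5, 2) == 14
# Conv2: 3^3 kernel, stride 1 -> 12^3
assert conv3d_out(14, 3, 1) == 12
# Conv3 -> 10^3, then 2x2x2 max pooling with stride 2 -> 5^3
assert conv3d_out(12, 3, 1) == 10
assert 10 // 2 == 5

def conv3d_params(k, c_in, c_out):
    """Kernel weights plus one bias per output filter (assumed convention)."""
    return k ** 3 * c_in * c_out + c_out

print(conv3d_params(5, 1, 32))   # Conv1 -> 4032
print(conv3d_params(3, 32, 32))  # Conv2 -> 27680
print(conv3d_params(3, 32, 64))  # Conv3 -> 55360
```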
| No. | Class | Number of samples in training set | Number of samples in testing set |
|---|---|---|---|
| 1 | Cabin | 231 | 57 |
| 2 | Rowing | 231 | 57 |
| 3 | Sailing | 231 | 57 |
| 4 | Cruise | 231 | 57 |
| 5 | Cargo | 231 | 57 |
| | Total | 1155 | 285 |

Table 2. Numbers of training and testing samples in the self-built point cloud image ship dataset
| No. | Class | Number of samples in training set | Number of samples in testing set |
|---|---|---|---|
| 1 | Cabin | 116 | 28 |
| 2 | Rowing | 116 | 28 |
| 3 | Sailing | 116 | 28 |
| 4 | Cruise | 116 | 28 |
| 5 | Cargo | 116 | 28 |
| | Total | 580 | 140 |

Table 3. Numbers of training and testing samples in the noise-free point cloud image ship dataset
| Size | 32×32×32 | 48×48×48 |
|---|---|---|
| Accuracy /% | 97.14 | 95.71 |

Table 4. Classification accuracy of the proposed 3D CNN model for each voxel grid size
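Table 4 compares two voxel grid resolutions, which presupposes a point-cloud-to-voxel conversion step. A minimal occupancy-grid sketch of that step is shown below; the uniform min-max normalization and binary occupancy encoding are assumptions for illustration, not necessarily the paper's exact scheme.

```python
# Minimal binary occupancy-grid voxelization sketch (plain Python).
# Normalization scheme is an assumption made for this illustration.

def voxelize(points, grid=32):
    """Map a list of (x, y, z) points into a grid^3 binary occupancy volume."""
    xs, ys, zs = zip(*points)
    mins = (min(xs), min(ys), min(zs))
    # One uniform scale so the whole cloud fits the cube and aspect ratio is kept
    span = max(max(xs) - mins[0], max(ys) - mins[1], max(zs) - mins[2]) or 1.0
    vol = [[[0] * grid for _ in range(grid)] for _ in range(grid)]
    for p in points:
        i, j, k = (min(int((p[d] - mins[d]) / span * (grid - 1)), grid - 1)
                   for d in range(3))
        vol[i][j][k] = 1
    return vol

# Example: opposite corners of a unit cube land in opposite grid corners
v = voxelize([(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)], grid=32)
print(v[0][0][0], v[31][31][31])  # -> 1 1
```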
| Method | Accuracy /% | F1-score | Training time |
|---|---|---|---|
| PFH+BoW+SVM | 96.43 | 0.9669 | 2.95 h |
| 3D ShapeNets | 90.71 | 0.9056 | 32.63 s |
| VoxNet | 95.00 | 0.9500 | 13.44 s |
| Method in Ref. [17] | 95.71 | 0.9566 | 62.25 s |
| Proposed method | 97.14 | 0.9714 | 15.88 s |

Table 5. Classification accuracy, F1-score, and training time of each method on the noise-free point cloud image ship dataset
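Tables 5–7 report a single F1-score per method for a five-class problem, which implies an averaged per-class F1. The sketch below computes a macro-averaged F1 (averaging is an assumption here; the class labels are illustrative toy data, not results from the paper).

```python
# Macro-averaged F1 sketch for a multi-class classifier
# (macro averaging is an assumed convention; data below is a toy example).

def macro_f1(y_true, y_pred, classes):
    f1s = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

# Toy example: one cabin<->cargo confusion in each direction
y_true = ["cabin", "cabin", "cargo", "cargo", "sailing"]
y_pred = ["cabin", "cargo", "cargo", "cabin", "sailing"]
print(round(macro_f1(y_true, y_pred, ["cabin", "cargo", "sailing"]), 4))  # -> 0.6667
```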
| Method | Accuracy /% | F1-score | Training time /s |
|---|---|---|---|
| 3D ShapeNets | 90.17 | 0.9006 | 64.91 |
| VoxNet | 93.68 | 0.9358 | 28.01 |
| Method in Ref. [17] | 94.73 | 0.9471 | 144.47 |
| Proposed method | 96.14 | 0.9613 | 32.91 |

Table 6. Classification accuracy, F1-score, and training time of each method on the self-built point cloud image ship dataset
| Method | Accuracy /% | F1-score | Training time /s |
|---|---|---|---|
| GFH+SVM [12] | 73.58 | - | - |
| VoxNet | 89.51 | 0.8939 | 77.11 |
| Method in Ref. [15] | 84.00 | - | - |
| Method in Ref. [17] | 87.37 | 0.8661 | 445.05 |
| Proposed method | 91.58 | 0.9153 | 90.45 |

Table 7. Classification accuracy, F1-score, and training time of each method on the Sydney urban object dataset