• Laser & Optoelectronics Progress
  • Vol. 59, Issue 10, 1015011 (2022)
Hyperspectral Image Classification Combined Dynamic Convolution with Triplet Attention Mechanism
Aili Wang1, Meihong Liu1, Dong Xue1, Haibin Wu1,*, Lanfei Zhao1, and Iwahori Yuji2
Author Affiliations
  • 1Heilongjiang Province Key Laboratory of Laser Spectroscopy Technology and Application, College of Measurement and Control Technology and Communication Engineering, Harbin University of Science and Technology, Harbin 150080, Heilongjiang, China
  • 2Department of Computer Science, Chubu University, Aichi 487-8501, Japan
    DOI: 10.3788/LOP202259.1015011
    Aili Wang, Meihong Liu, Dong Xue, Haibin Wu, Lanfei Zhao, Iwahori Yuji. Hyperspectral Image Classification Combined Dynamic Convolution with Triplet Attention Mechanism[J]. Laser & Optoelectronics Progress, 2022, 59(10): 1015011
    Fig. 1. Flow chart of the hyperspectral image classification algorithm combining dynamic convolution with the triplet attention mechanism
    Fig. 2. Residual unit structure diagram of ResNet
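    Fig. 2 depicts the basic residual unit used as the backbone building block: two 3×3 convolutions with batch normalization, plus an identity shortcut added to the output before the final ReLU. Below is a minimal PyTorch sketch of such a unit, assuming equal input/output channels and an identity (non-projection) shortcut; it illustrates the structure in the figure rather than the paper's exact implementation.

```python
# Minimal sketch of a basic ResNet residual unit (Fig. 2).
# Assumption: input and output channel counts are equal, so the shortcut
# is a plain identity connection with no projection.
import torch
import torch.nn as nn

class ResidualUnit(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        identity = x                          # shortcut branch
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + identity)      # element-wise addition, then ReLU
```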
    Fig. 3. Dynamic perceptron
    Fig. 4. Dynamic convolution
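    Figs. 3 and 4 describe dynamic convolution: a small attention branch (the dynamic perceptron) squeezes the input with global average pooling and produces softmax weights over K parallel convolution kernels, which are aggregated per sample before the convolution is applied. The sketch below is a hedged PyTorch illustration of that idea; the kernel count K, reduction ratio, softmax temperature, and the grouped-convolution trick used for batching are assumptions, not the authors' exact settings.

```python
# Hedged sketch of dynamic convolution (Figs. 3-4): softmax attention over
# K candidate kernels, aggregated per sample. K, reduction and temperature
# are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, K=4, reduction=4, temperature=30.0):
        super().__init__()
        self.temperature = temperature
        self.padding = kernel_size // 2
        # K parallel convolution kernels
        self.weight = nn.Parameter(torch.randn(K, out_ch, in_ch, kernel_size, kernel_size))
        # attention branch (dynamic perceptron): GAP -> FC -> ReLU -> FC -> softmax over K
        hidden = max(in_ch // reduction, 1)
        self.fc1 = nn.Linear(in_ch, hidden)
        self.fc2 = nn.Linear(hidden, K)

    def forward(self, x):
        b, c, h, w = x.shape
        attn = F.adaptive_avg_pool2d(x, 1).flatten(1)                       # (B, C)
        attn = F.softmax(self.fc2(F.relu(self.fc1(attn))) / self.temperature, dim=1)  # (B, K)
        K, out_ch, in_ch, kh, kw = self.weight.shape
        # aggregate the K kernels per sample, then apply them as one grouped conv
        weight = torch.einsum('bk,koihw->boihw', attn, self.weight).reshape(b * out_ch, in_ch, kh, kw)
        out = F.conv2d(x.reshape(1, b * c, h, w), weight, padding=self.padding, groups=b)
        return out.reshape(b, out_ch, out.shape[-2], out.shape[-1])
```

    A layer like this can stand in wherever a standard nn.Conv2d would be used, at the cost of storing K sets of kernel weights and computing the per-sample attention.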
    Fig. 5. TA schematic diagram
    Fig. 6. Architecture diagram of TA
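    Figs. 5 and 6 show the triplet attention (TA) module, which captures cross-dimension interactions along three branches: (C, W), (C, H), and (H, W). Each branch rotates the tensor, applies Z-pool (concatenated max- and average-pooling along the first dimension), a 7×7 convolution, and a sigmoid gate; the three gated outputs are averaged. The PyTorch sketch below follows the original triplet-attention design; the 7×7 kernel size and simple averaging of the branches are assumptions here rather than details confirmed by this paper.

```python
# Hedged sketch of triplet attention (Figs. 5-6): three rotated branches,
# each with Z-pool -> 7x7 conv -> BN -> sigmoid gate, averaged at the end.
import torch
import torch.nn as nn

class ZPool(nn.Module):
    def forward(self, x):
        # concatenate max- and average-pooled maps along the first (channel-like) dim
        return torch.cat([x.max(dim=1, keepdim=True).values,
                          x.mean(dim=1, keepdim=True)], dim=1)

class AttentionGate(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.pool = ZPool()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)
        self.bn = nn.BatchNorm2d(1)

    def forward(self, x):
        return x * torch.sigmoid(self.bn(self.conv(self.pool(x))))

class TripletAttention(nn.Module):
    def __init__(self):
        super().__init__()
        self.cw = AttentionGate()   # channel-width interaction
        self.ch = AttentionGate()   # channel-height interaction
        self.hw = AttentionGate()   # spatial (height-width) interaction

    def forward(self, x):                                            # x: (B, C, H, W)
        x_cw = self.cw(x.permute(0, 2, 1, 3)).permute(0, 2, 1, 3)    # rotate H <-> C
        x_ch = self.ch(x.permute(0, 3, 2, 1)).permute(0, 3, 2, 1)    # rotate W <-> C
        x_hw = self.hw(x)
        return (x_cw + x_ch + x_hw) / 3.0
```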
    Fig. 7. False color map and ground truth map of each dataset. (a) Pavia University; (b) Kennedy Space Center; (c) Salinas
    Fig. 8. Comparative analysis of overall classification accuracy of different classification algorithms
    Fig. 9. Classification results of Pavia University dataset. (a) Ground truth; (b) RBF-SVM; (c) EMP-SVM; (d) DCNN; (e) GAN; (f) ResNet; (g) PyResNet; (h) DTAResNet
    Fig. 10. Classification results of Kennedy Space Center dataset. (a) Ground truth; (b) RBF-SVM; (c) EMP-SVM; (d) DCNN; (e) GAN; (f) ResNet; (g) PyResNet; (h) DTAResNet
    Fig. 11. Classification results of Salinas dataset. (a) Ground truth; (b) RBF-SVM; (c) EMP-SVM; (d) DCNN; (e) GAN; (f) ResNet; (g) PyResNet; (h) DTAResNet
    Parameter             | Pavia University | Kennedy Space Center | Salinas
    Sensor                | ROSIS            | AVIRIS               | AVIRIS
    Size /(pixel×pixel)   | 610×340          | 512×614              | 512×217
    Resolution /m         | 1.3              | 18                   | 3.7
    Spectral band         | 103              | 176                  | 204
    Land-cover            | 9                | 13                   | 16
    Total sample pixel    | 42776            | 5211                 | 54129
    Table 1. Experimental dataset parameters
    Layer name | Output size | ResNet-34 parameter setting
    Conv1      | 5×5         | 7×7, 64, stride 2
    Conv2_x    | 3×3         | 3×3 max pool, stride 2; [3×3, 64; 3×3, 64] × 3
    Conv3_x    | 2×2         | [3×3, 128; 3×3, 128] × 4
    Conv4_x    | 1×1         | [3×3, 256; 3×3, 256] × 6
    Conv5_x    | 1×1         | [3×3, 512; 3×3, 512] × 3
    Table 2. Parameter settings of ResNet-34
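    Table 2 lists the ResNet-34 stage configuration: a 7×7, 64-channel stem with stride 2, a 3×3 max pool, and then {3, 4, 6, 3} two-layer residual units with 64, 128, 256, and 512 channels. As a rough illustration of how that table maps to code, the sketch below stacks the ResidualUnit from the Fig. 2 sketch; the input channel count, the plain strided convolution used for downsampling between stages, and the omission of projection shortcuts are simplifying assumptions, not the paper's implementation.

```python
# Illustrative assembly of the ResNet-34 stages in Table 2.
# ResidualUnit is the basic block sketched after Fig. 2; the stem input
# channel count (3) is a placeholder for the actual number of input bands.
import torch.nn as nn

def make_stage(in_ch, out_ch, num_units, stride):
    # simplified downsampling: one strided conv at the stage entry,
    # followed by identity-shortcut residual units
    layers = [nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False),
              nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True)]
    layers += [ResidualUnit(out_ch) for _ in range(num_units)]
    return nn.Sequential(*layers)

backbone = nn.Sequential(
    nn.Conv2d(3, 64, 7, stride=2, padding=3, bias=False),   # Conv1: 7x7, 64, stride 2
    nn.BatchNorm2d(64), nn.ReLU(inplace=True),
    nn.MaxPool2d(3, stride=2, padding=1),                    # Conv2_x: 3x3 max pool, stride 2
    make_stage(64, 64, 3, stride=1),     # Conv2_x: [3x3, 64;  3x3, 64 ] x 3
    make_stage(64, 128, 4, stride=2),    # Conv3_x: [3x3, 128; 3x3, 128] x 4
    make_stage(128, 256, 6, stride=2),   # Conv4_x: [3x3, 256; 3x3, 256] x 6
    make_stage(256, 512, 3, stride=2),   # Conv5_x: [3x3, 512; 3x3, 512] x 3
)
```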
    Class     | RBF-SVM    | EMP-SVM    | DCNN       | GAN        | ResNet     | PyResNet   | DTAResNet
    Asphalt   | 88.51±2.44 | 89.06±1.33 | 90.59±0.62 | 94.97±2.09 | 95.59±3.43 | 95.55±0.25 | 96.35±0.64
    Meadows   | 88.86±3.52 | 88.12±0.23 | 89.26±1.05 | 97.62±1.96 | 97.10±2.06 | 98.91±0.71 | 99.16±0.32
    Gravel    | 78.51±2.93 | 78.65±3.05 | 78.89±2.65 | 89.53±3.62 | 87.53±5.27 | 93.81±4.60 | 91.09±1.07
    Trees     | 86.17±1.28 | 88.95±0.53 | 89.05±1.45 | 96.72±0.85 | 99.03±0.33 | 99.35±0.28 | 99.57±0.18
    Metal     | 94.11±2.36 | 93.23±1.29 | 94.55±0.67 | 97.94±0.99 | 98.56±1.36 | 99.67±0.17 | 99.81±0.13
    Bare Soil | 90.11±0.62 | 90.13±0.54 | 90.23±1.23 | 96.34±1.43 | 98.35±1.06 | 98.39±0.18 | 99.05±0.62
    Bitumen   | 82.13±2.52 | 81.66±3.31 | 83.69±2.82 | 98.73±1.12 | 99.29±0.51 | 92.31±0.88 | 99.41±0.39
    Bricks    | 82.38±2.57 | 83.05±1.61 | 83.57±2.91 | 95.01±1.05 | 94.61±0.50 | 89.44±4.66 | 89.96±3.07
    Shadows   | 93.29±0.55 | 95.26±0.56 | 94.68±0.46 | 97.39±0.97 | 99.39±0.58 | 98.96±0.56 | 98.42±1.45
    OA /%     | 90.65±1.32 | 91.07±0.85 | 91.32±1.22 | 93.88±2.28 | 96.49±1.78 | 97.05±0.45 | 97.49±0.24
    AA /%     | 87.11±2.09 | 87.57±1.38 | 88.28±1.54 | 95.82±1.56 | 96.61±1.68 | 96.27±1.36 | 96.98±1.57
    100K      | 88.92±1.46 | 88.72±1.44 | 89.43±1.09 | 92.93±1.94 | 95.31±2.41 | 96.22±0.11 | 96.67±0.32
    Table 3. Classification results of Pavia University dataset
    Class     | RBF-SVM    | EMP-SVM    | DCNN       | GAN        | ResNet     | PyResNet   | DTAResNet
    Scrub     | 92.11±2.32 | 93.42±1.38 | 92.48±2.21 | 95.29±0.45 | 96.00±1.75 | 96.24±0.43 | 96.63±2.65
    Willow    | 82.25±1.52 | 83.66±2.65 | 83.96±2.82 | 80.83±2.42 | 92.82±2.80 | 93.24±0.54 | 97.68±0.49
    Palm      | 85.37±3.64 | 85.69±3.21 | 73.52±4.98 | 89.43±1.96 | 88.67±2.91 | 87.62±1.72 | 89.98±0.76
    Pine      | 60.08±4.33 | 62.43±5.63 | 60.22±8.52 | 83.02±2.43 | 76.80±3.46 | 86.01±1.91 | 86.02±0.64
    Broadleaf | 62.56±5.98 | 64.42±4.65 | 61.09±5.90 | 96.15±1.53 | 78.50±1.63 | 77.55±2.67 | 76.15±1.53
    Hardwood  | 65.38±3.45 | 67.65±5.10 | 64.24±5.24 | 95.82±1.91 | 79.43±2.93 | 86.15±1.24 | 96.80±0.30
    Swamp     | 63.52±5.66 | 65.56±6.36 | 65.95±7.24 | 95.16±0.85 | 75.88±2.90 | 84.70±1.68 | 82.68±1.85
    Graminoid | 71.52±3.84 | 73.28±2.98 | 73.60±3.22 | 75.78±2.10 | 96.10±1.75 | 96.17±0.39 | 96.52±0.13
    Spartina  | 81.56±4.55 | 85.32±3.55 | 86.94±2.84 | 95.23±2.24 | 93.93±3.30 | 94.28±0.54 | 96.82±2.24
    Cattail   | 90.78±1.84 | 93.25±1.22 | 93.52±0.98 | 98.98±0.38 | 96.77±1.34 | 99.30±0.05 | 99.48±0.38
    Salt      | 93.65±1.46 | 95.38±2.01 | 95.91±0.55 | 96.06±0.93 | 99.51±0.48 | 99.54±0.06 | 99.98±0.01
    Mud       | 90.35±2.19 | 91.01±2.58 | 89.39±1.20 | 96.37±1.32 | 97.09±0.95 | 96.30±0.47 | 92.44±1.91
    Water     | 99.26±0.24 | 99.31±0.32 | 99.84±0.04 | 99.09±0.14 | 99.65±0.05 | 99.28±0.18 | 99.85±0.33
    OA /%     | 80.65±3.08 | 81.97±3.27 | 81.04±2.90 | 93.56±0.98 | 89.93±2.54 | 94.01±1.08 | 94.93±1.83
    AA /%     | 79.87±3.16 | 81.57±3.20 | 80.05±3.52 | 92.09±1.44 | 90.11±2.02 | 92.03±0.91 | 93.16±1.00
    100K      | 79.93±3.45 | 80.78±2.96 | 79.67±3.78 | 92.85±1.56 | 88.86±4.96 | 93.16±2.04 | 94.35±1.98
    Table 4. Classification results of Kennedy Space Center dataset
    Class                     | RBF-SVM    | EMP-SVM    | DCNN       | GAN        | ResNet     | PyResNet   | DTAResNet
    Brocoli_green_weeds_1     | 83.42±1.56 | 96.16±0.28 | 97.75±0.22 | 85.71±1.23 | 99.25±0.05 | 99.90±0.06 | 99.95±0.02
    Brocoli_green_weeds_2     | 92.19±0.85 | 99.27±0.02 | 97.19±0.14 | 87.50±1.05 | 99.35±0.20 | 99.95±0.03 | 99.99±0.01
    Fallow                    | 90.66±1.94 | 80.45±1.78 | 78.39±0.12 | 80.46±2.13 | 99.35±0.48 | 99.03±0.03 | 99.97±0.02
    Fallow_rough_plow         | 95.33±1.92 | 98.34±0.35 | 99.07±0.08 | 99.90±0.01 | 97.49±1.80 | 99.24±0.05 | 98.93±1.38
    Fallow_smooth             | 86.52±2.64 | 94.56±1.32 | 98.84±0.23 | 90.35±1.06 | 99.32±0.43 | 99.61±0.01 | 99.71±0.23
    Stubble                   | 97.62±0.25 | 99.45±0.09 | 99.86±0.55 | 99.34±0.33 | 99.46±0.02 | 99.76±0.06 | 99.98±0.01
    Celery                    | 95.22±0.74 | 97.36±0.65 | 99.09±0.29 | 99.90±0.03 | 99.38±0.01 | 99.96±0.01 | 99.99±0.01
    Grapes_untrained          | 84.35±2.61 | 84.38±0.46 | 94.96±1.23 | 90.95±1.08 | 96.39±0.47 | 95.45±1.28 | 96.61±1.23
    Soil_vinyard_develop      | 98.45±0.45 | 98.74±1.06 | 99.69±0.02 | 99.92±0.03 | 99.47±0.02 | 99.88±0.02 | 99.97±0.02
    Corn_senesced_green_weeds | 80.55±4.69 | 91.98±0.91 | 99.25±0.15 | 98.23±0.65 | 99.70±0.05 | 99.83±0.01 | 99.72±0.15
    Lettuce_romaine_4wk       | 85.55±3.13 | 90.73±3.22 | 92.19±1.68 | 97.56±0.84 | 98.71±1.25 | 99.15±0.09 | 99.23±1.08
    Lettuce_romaine_5wk       | 96.04±0.69 | 99.79±0.10 | 99.89±0.19 | 99.04±0.42 | 99.52±0.24 | 99.80±0.01 | 99.90±0.07
    Lettuce_romaine_6wk       | 98.54±0.43 | 98.22±0.65 | 94.78±2.09 | 80.62±2.56 | 99.68±0.15 | 99.06±0.32 | 99.63±0.29
    Lettuce_romaine_7wk       | 86.33±1.88 | 96.23±0.24 | 95.85±0.67 | 80.21±2.91 | 99.45±0.54 | 99.70±0.24 | 99.33±0.37
    Vinyard_untrained         | 66.78±6.66 | 63.89±6.32 | 93.85±1.89 | 79.99±0.59 | 94.04±1.52 | 94.79±2.17 | 95.58±1.89
    Vinyard_vertical_trellis  | 83.72±4.84 | 79.78±5.12 | 99.95±0.02 | 91.08±0.72 | 99.20±0.68 | 98.68±0.33 | 99.92±0.02
    OA /%                     | 87.79±2.65 | 89.74±1.43 | 96.38±0.76 | 95.00±1.45 | 98.25±0.28 | 98.18±0.38 | 98.65±0.13
    AA /%                     | 88.83±2.21 | 91.83±1.41 | 96.29±0.60 | 91.30±0.98 | 98.73±0.48 | 98.99±0.29 | 99.30±0.40
    100K                      | 88.93±1.97 | 88.96±2.05 | 95.66±0.93 | 93.31±1.36 | 98.05±0.31 | 97.97±0.04 | 98.17±0.23
    Table 5. Classification results of Salinas dataset
    Algorithm | RBF-SVM | EMP-SVM | DCNN  | GAN   | ResNet | PyResNet | DTAResNet
    Time      | 5.63    | 9.01    | 12.85 | 10.39 | 17.37  | 17.65    | 18.57
    Table 6. Training time of different classification algorithms