• Chinese Journal of Lasers
  • Vol. 48, Issue 9, 0910001 (2021)
Jiamin Liu*, Chao Zheng, Limei Zhang, and Zehua Zou
Author Affiliations
  • Key Laboratory of Optoelectronic Technique System of the Ministry of Education, Chongqing University, Chongqing 400044, China
    DOI: 10.3788/CJL202148.0910001
    Jiamin Liu, Chao Zheng, Limei Zhang, Zehua Zou. Hyperspectral Image Classification Method Based on Image Reconstruction Feature Fusion[J]. Chinese Journal of Lasers, 2021, 48(9): 0910001

    Abstract

    Objective Hyperspectral remote-sensing images contain abundant information and provide a large amount of data. For this reason, hyperspectral remote-sensing imaging is widely used in environmental detection, target recognition, and other fields. This paper focuses on feature extraction and classification methods for hyperspectral images. Traditional classification methods do not fully utilize the spatial information in hyperspectral datasets and tend to ignore the effect of background points on classification. This paper proposes a classification method based on feature fusion via hyperspectral image reconstruction. The fused features fully incorporate the spatial information of the image, and the method accurately classifies the Indian Pines and Pavia University datasets. Our basic strategy and findings are anticipated to assist the design of new hyperspectral image classification methods.

    Methods The proposed method fuses features extracted through image reconstruction. First, it extracts the local binary pattern (LBP) of each pixel to obtain the LBP feature value. Second, it extracts the spatial neighborhood block of each pixel and removes the redundant background pixels in each block based on the known label information of the image, yielding a new spatial neighborhood block. Each pixel in the block is then weighted by spectral distance, and the reconstructed characteristic value is calculated. The LBP eigenvalue of each pixel and its reconstructed eigenvalue are stacked into a fused reconstruction feature. Finally, the pixels are classified with a K-nearest-neighbor (KNN) classifier, where the class of each test sample is determined by the Euclidean distance between the test sample and the training samples. The classification performance of the method is evaluated experimentally on two hyperspectral datasets, Indian Pines and Pavia University.
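
    As a rough illustration of the pipeline described above (a sketch, not the authors' implementation), the following Python code assumes per-band LBP with 8 neighbors and radius 1, a 7×7 neighborhood window, Gaussian spectral-distance weights, and background pixels labeled 0; all function and variable names are hypothetical.

```python
# Minimal sketch of the reconstruction-based feature fusion pipeline (assumed parameters).
import numpy as np
from skimage.feature import local_binary_pattern   # LBP texture codes
from sklearn.neighbors import KNeighborsClassifier  # Euclidean-distance KNN

def lbp_features(cube, points=8, radius=1):
    """Per-pixel LBP codes, computed band by band on a hyperspectral cube of shape (H, W, B)."""
    return np.stack([local_binary_pattern(cube[:, :, b], points, radius)
                     for b in range(cube.shape[2])], axis=-1)

def reconstruct_pixel(cube, labels, row, col, half=3, sigma=1.0):
    """Spectral-distance-weighted reconstruction of one pixel from its neighborhood,
    after discarding background pixels (label 0) inside the window."""
    h, w, _ = cube.shape
    r0, r1 = max(0, row - half), min(h, row + half + 1)
    c0, c1 = max(0, col - half), min(w, col + half + 1)
    block = cube[r0:r1, c0:c1].reshape(-1, cube.shape[2])
    mask = labels[r0:r1, c0:c1].reshape(-1) > 0          # drop background points
    block = block[mask] if mask.any() else block
    dist = np.linalg.norm(block - cube[row, col], axis=1)  # spectral distances
    weights = np.exp(-dist**2 / (2 * sigma**2))            # assumed Gaussian weighting
    weights /= weights.sum()
    return weights @ block                                 # reconstructed spectral feature

def fused_features(cube, labels):
    """Stack LBP codes and reconstructed spectra into one fused feature per pixel."""
    lbp = lbp_features(cube)
    h, w, _ = cube.shape
    recon = np.array([[reconstruct_pixel(cube, labels, i, j) for j in range(w)]
                      for i in range(h)])
    return np.concatenate([lbp, recon], axis=-1).reshape(h * w, -1)

# Toy usage with random data standing in for Indian Pines / Pavia University.
rng = np.random.default_rng(0)
cube = rng.random((20, 20, 10))                 # (rows, cols, bands)
labels = rng.integers(0, 4, size=(20, 20))      # 0 = background
features = fused_features(cube, labels)
labeled = labels.reshape(-1) > 0
knn = KNeighborsClassifier(n_neighbors=3, metric="euclidean")
knn.fit(features[labeled], labels.reshape(-1)[labeled])
predictions = knn.predict(features)             # class map for every pixel
```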

    Results and Discussions The classification performances of our method and several existing methods are evaluated using the Kappa coefficient, overall accuracy, and average accuracy. To obtain robust results, 10 experiments are conducted under the same experimental conditions, and the results are averaged to give the final result. The proposed reconstruction feature fusion method (RSFM) outperforms the related classification algorithms. Among the competing methods, the KNN, spectral angle mapper (SAM), and support vector machine (SVM) methods use only the spectral information in the image data; SVM with a composite kernel and edge-preserving filtering (EPF) combine spectral and spatial information; the class-dependent sparse representation classifier and the correlation coefficient and joint sparse representation method fuse multifeature information; and LBP-SVM and LBP-SAM use LBP features. Relative to these algorithms, our method improves the classification accuracies on the Indian Pines and Pavia University datasets by about 2.12–30.45 percentage points (Table 2) and 0.82–16.12 percentage points (Table 3), respectively. The proposed method not only considers the LBP texture characteristics of each pixel but also optimizes the reconstruction of the spatial domain of the data. When using spatial-domain information, it removes interfering background points, thus reducing the number of pixels to be measured. The misclassification probability is therefore reduced, and the classification effect is significantly improved over those of the other methods.
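
    For reference, the three evaluation metrics used above can be computed from a confusion matrix as in the short sketch below; the label vectors and function name are illustrative only.

```python
# Overall accuracy (OA), average accuracy (AA), and Kappa coefficient from a confusion matrix.
import numpy as np

def evaluation_metrics(y_true, y_pred):
    classes = np.unique(y_true)
    k = len(classes)
    cm = np.zeros((k, k), dtype=np.int64)           # confusion matrix (true rows, predicted columns)
    for t, p in zip(y_true, y_pred):
        cm[np.searchsorted(classes, t), np.searchsorted(classes, p)] += 1
    n = cm.sum()
    oa = np.trace(cm) / n                           # overall accuracy
    aa = np.mean(np.diag(cm) / cm.sum(axis=1))      # mean of per-class accuracies
    pe = (cm.sum(axis=0) @ cm.sum(axis=1)) / n**2   # chance agreement
    kappa = (oa - pe) / (1 - pe)                    # Kappa coefficient
    return oa, aa, kappa

# Toy example (made-up labels, not dataset results).
y_true = np.array([0, 0, 1, 1, 2, 2, 2, 1])
y_pred = np.array([0, 1, 1, 1, 2, 2, 0, 1])
print(evaluation_metrics(y_true, y_pred))
```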

    Conclusions The proposed hyperspectral classification method effectively improves the classification accuracy of hyperspectral images by extracting the LBP feature of each pixel (thus obtaining the LBP feature value) and removing the interference of spatial background points, which eliminates redundant information in the image. Consequently, the pixel misclassification probability is reduced and the discrimination ability is enhanced. Experiments on two widely used hyperspectral datasets confirm the superior performance of the proposed RSFM method over other relevant classification algorithms: the classification accuracy is improved by approximately 2.12–30.45 percentage points on the Indian Pines dataset and 0.82–16.12 percentage points on the Pavia University dataset. Therefore, the method is both effective and feasible.
