• Infrared and Laser Engineering
  • Vol. 54, Issue 5, 20250131 (2025)
Kai QIN, Yuxi HAO, Yingjun ZHAO*, Xin CUI, Yuechao YANG, Ling ZHU and Qinglin TIAN
Author Affiliations
  • National Key Laboratory of Uranium Resource Exploration-Mining and Nuclear Remote Sensing, Beijing Research Institute of Uranium Geology, Beijing 100029, China
    DOI: 10.3788/IRLA20250131
    Kai QIN, Yuxi HAO, Yingjun ZHAO, Xin CUI, Yuechao YANG, Ling ZHU, Qinglin TIAN. A survey on hyperspectral remote sensing unmixing techniques based on autoencoders (inner cover paper·invited)[J]. Infrared and Laser Engineering, 2025, 54(5): 20250131
    Fig. 1. Autoencoder network architecture diagram
    Fig. 2. Structure diagram of the hyperspectral unmixing autoencoder network
    Fig. 3. Flowchart of the DETN method for unmixing[26]
    Fig. 4. Abundance maps for the run with the best mSAD score for all methods on the Urban dataset[8]
    Fig. 5. The architecture of HapkeCNN[38]
    | Model | Activation function | Regularization | Initialization | Denoising | Outlier handling | Number of layers |
    |---|---|---|---|---|---|---|
    | mDAU | Parameterized sigmoid | - | - | mDA | - | 5 |
    | Autoencoder cascade | Augmented logistic | - | - | - | - | Multi-layer |
    | Neural network autoencoder with spectral information divergence objective | Soft-thresholded ReLU | - | - | Indirect denoising via batch normalization; adaptive thresholding for noise suppression | - | 7 layers: the encoder contains an input layer, a linear hidden layer, a BN layer, an adaptive-threshold ReLU layer, an ASC normalization layer, and a linear output layer; the decoder is a single linear layer |
    | EndNet | ReLU | L1 (enhances sparsity) and L2 (smooths weights) | VCA or DMax | Denoising autoencoder | - | Multi-layer: an encoder-decoder architecture in which the encoder contains multiple sub-layers |
    | Hyperspectral unmixing using a neural network autoencoder | Hidden layer: ReLU, leaky ReLU, or sigmoid; sparse layer: dynamic soft-threshold ReLU; output layer: linear | Gaussian dropout | - | - | SAD objective based on spectral angle differences reduces the effect of outliers | Multi-layer: an encoder-decoder architecture in which the encoder contains multiple sub-layers |
    | SNSA | Logistic | - | VCA | MinVol and sparse coding | Autoencoder-based detection | Not explicitly stated; the first set of autoencoders detects outliers and the last autoencoder performs the unmixing |
    | uDAS | Hidden layer: ReLU; output layer: linear | L21-norm regularization | Decoder weights by VCA, hidden layer by FCLS, encoder by the pseudo-inverse of the hidden layer and the input data | End-to-end denoising | - | 4 layers: the encoder contains an input layer, a hidden layer, and an output layer (3 layers); the decoder is a single linear layer |
    | MTAEU | Hidden layer: LeakyReLU; output layer: linear | Gaussian dropout | - | Indirect noise suppression via batch normalization | SAD objective based on spectral angle differences reduces the effect of outliers | 7 layers: the encoder contains an input layer, a shared hidden layer, a branch hidden layer, a BN layer, a softmax layer, and a Gaussian dropout layer (6 layers); the decoder is a single linear layer |
    | DAEN | Sigmoid for the stacked autoencoders (SAEs), ReLU for the variational autoencoder (VAE) | Dropout, MinVol | VCA | Stacked autoencoders (SAEs) | - | Multi-layer deep autoencoder; the number of neurons in the hidden layer equals the number of endmembers |
    Table 1. Encoder designs of mainstream autoencoder unmixing networks
    Note: ReLU (rectified linear unit) is a widely used activation function; VCA (vertex component analysis) is an algorithm commonly used for endmember extraction from hyperspectral imagery; SAD (spectral angle distance) measures the similarity between two spectral vectors; FCLS (fully constrained least squares) is commonly used for problems such as data overfitting; BN (batch normalization) is a deep-learning technique that accelerates and stabilizes training and improves model performance.
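Several of the designs in Table 1 rely on an SAD-based objective. As a minimal sketch using only the standard definition, the spectral angle distance is the angle between two spectra viewed as vectors; because it is insensitive to overall scaling, it reduces the influence of outlier pixels with anomalous brightness:

```python
import math

def sad(x, y):
    # Spectral angle distance: arccos of the normalized dot product.
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    # Clamp for numerical safety before acos.
    c = max(-1.0, min(1.0, dot / (nx * ny)))
    return math.acos(c)

# A spectrum and a brighter copy of itself have angle ~0 (scale-invariant);
# orthogonal spectra have angle pi/2.
print(sad([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]))  # ≈ 0.0
print(sad([1.0, 0.0], [0.0, 1.0]))            # ≈ pi/2
```

Methods such as EndNet and MTAEU use this angle (or quantities derived from it) as the reconstruction loss instead of a plain Euclidean error, which is why the table lists SAD under outlier handling.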