Method | Activation function | Regularisation | Initialisation | Denoising | Outlier handling | Network depth / architecture
--- | --- | --- | --- | --- | --- | ---
mDAU | Parameterised sigmoid activation function | - | - | mDA | - | 5 layers
Autoencoder cascade | Augmented logistic activation function | - | - | - | - | Multi-layer |
Neural network autoencoder with spectral information divergence objective | Soft-thresholded ReLU activation function | - | - | Batch normalisation (indirect denoising); adaptive thresholding for noise suppression | - | 7 layers: the encoder comprises an input layer, a linear hidden layer, a BN layer, an adaptive-thresholding ReLU layer, an ASC normalisation layer, and a linear output layer; the decoder is a single linear layer
EndNet | ReLU activation function | L1 regularisation (promotes sparsity) and L2 regularisation (smooths the weights) | VCA or DMax | Denoising autoencoder | - | Multi-layer: an encoder-decoder architecture in which the encoder contains multiple sub-layers
Hyperspectral unmixing using a neural network autoencoder | Hidden layer: choice of ReLU, leaky ReLU, or sigmoid; sparse layer: dynamic soft-threshold ReLU; output layer: linear | Gaussian dropout | - | - | The SAD objective is based on spectral angle differences, reducing the effect of outliers (see the sketch after the table) | Multi-layer: an encoder-decoder architecture in which the encoder contains multiple sub-layers
SNSA | Logistic activation function | - | VCA | MinVol and sparse coding | Outlier detection via autoencoders | Not explicitly stated, but the first set of autoencoders is intended to detect outliers and the last autoencoder is used for unmixing
uDAS | ReLU activation function for the hidden layer; linear activation function for the output layer | ℓ2,1-norm regularisation (illustrated after the table) | Decoder weights initialised by VCA; hidden layer initialised by FCLS; encoder initialised by the pseudo-inverse of the hidden layer and the input data | End-to-end denoising | - | 4 layers: the encoder contains an input layer, a hidden layer, and an output layer (3 layers); the decoder is a single linear layer
MTAEU | LeakyReLU activation function for the hidden layers; linear activation function for the output layer | Gaussian dropout | - | Batch normalisation (indirect noise suppression) | The SAD objective is based on spectral angle differences, reducing the effect of outliers | 7 layers: the encoder contains an input layer, a shared hidden layer, a branch hidden layer, a BN layer, a softmax layer, and a Gaussian dropout layer (6 layers); the decoder is a single linear layer
DAEN | Sigmoid activation function for the stacked autoencoders (SAEs); ReLU activation function for the variational autoencoder (VAE) | Dropout, MinVol | VCA | Stacked autoencoders (SAEs) | - | Multi-layer: a deep autoencoder structure in which the number of neurons in the hidden layer equals the number of endmembers
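
Several rows above share the same skeleton: a shallow encoder with a (soft-)thresholded ReLU, batch normalisation, and ASC normalisation; a single linear decoder whose weight matrix plays the role of the endmember matrix; and a spectral angle (SAD) objective. The PyTorch sketch below is a generic illustration of that pattern, not the implementation of any one method in the table; the class name `UnmixingAE`, the learnable threshold initialised at 0.1, and the `sad_loss` helper are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UnmixingAE(nn.Module):
    """Generic autoencoder-unmixing sketch (illustrative, not any one paper):
    the encoder maps a pixel spectrum to abundances; a single linear decoder
    reconstructs it, so the decoder weights act as the endmember matrix."""
    def __init__(self, n_bands: int, n_endmembers: int):
        super().__init__()
        self.hidden = nn.Linear(n_bands, n_endmembers)
        self.bn = nn.BatchNorm1d(n_endmembers)             # indirect noise suppression
        self.threshold = nn.Parameter(torch.tensor(0.1))   # learnable soft threshold (assumed init)
        self.decoder = nn.Linear(n_endmembers, n_bands, bias=False)

    def forward(self, x):
        h = self.bn(self.hidden(x))
        h = F.relu(h - self.threshold)                     # soft-thresholded ReLU: promotes sparsity
        a = h / (h.sum(dim=1, keepdim=True) + 1e-8)        # ASC: abundances sum to one
        return self.decoder(a), a

def sad_loss(x_hat: torch.Tensor, x: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    """Spectral angle distance: depends on the angle between spectra rather
    than their magnitude, which is why the table credits it with reducing
    the effect of outliers."""
    cos = F.cosine_similarity(x_hat, x, dim=1).clamp(-1 + eps, 1 - eps)
    return torch.acos(cos).mean()
```

After training, the columns of `model.decoder.weight` serve as the estimated endmember spectra and the encoder output gives the per-pixel abundances; because SAD depends only on the angle between spectra, per-pixel scaling (e.g. illumination changes) does not inflate the loss.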
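
The uDAS row names an ℓ2,1-norm regulariser. As a point of reference only (which weight matrix uDAS penalises, and with what coefficient, follows the original paper rather than this sketch), the norm itself is the sum of the ℓ2 norms of a matrix's columns, so penalising it drives entire columns toward zero and yields structured (group) sparsity:

```python
def l21_norm(W: torch.Tensor) -> torch.Tensor:
    # Sum of column-wise l2 norms; an assumed, generic form of the l2,1
    # penalty, added to the training loss with some chosen coefficient.
    return W.norm(p=2, dim=0).sum()
```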