• Chinese Optics Letters
  • Vol. 15, Issue 2, 022801 (2017)
Jin Li1,2,3, Fengdeng Liu1,2,3, and Zilong Liu1,2,3,4,*
Author Affiliations
  • 1Department of Precision Instrument, Tsinghua University, Beijing 100084, China
  • 2State Key Laboratory of Precision Measurement Technology and Instruments, Beijing 100084, China
  • 3Collaborative Innovation Center for Micro/Nano Fabrication, Device and System, Beijing 100084, China
  • 4Optic Division, National Institute of Metrology, Beijing 100029, China
    DOI: 10.3788/COL201715.022801
    Jin Li, Fengdeng Liu, Zilong Liu. Efficient multi-bands image compression method for remote cameras[J]. Chinese Optics Letters, 2017, 15(2): 022801

    Abstract

    In this Letter, we propose an efficient compression algorithm for multi-spectral images having a few bands. First, we propose a low-complexity approach for removing spectral redundancy to improve the compression performance. Then, a bit-plane encoding approach is applied to each band to complete the compression. Finally, experiments are performed on multi-spectral images. The experimental results show that the proposed compression algorithm has good compression performance. Compared with traditional approaches, the proposed method increases the average peak signal-to-noise ratio by 0.36 dB at 0.5 bpp. The processing speed reaches 23.81 MPixels/s at a working frequency of 88 MHz, which is higher than that of traditional methods. The proposed method satisfies the requirements of the engineering application.

    A class of multi-spectral CCD cameras is now heading toward high spatial resolution and multiple spectral bands. These cameras have a few bands, up to about ten. They produce larger amounts of data than panchromatic cameras. Therefore, it is necessary to compress the multi-spectral images of these CCD cameras with a higher-performance compressor. However, these cameras currently use a so-called "mono-spectral" compressor, which compresses each band independently as if it were a panchromatic image. Since the redundancy between bands is not considered, the compression performance is low, and this approach is not well suited to multi-spectral cameras having a few bands. In this Letter, we provide an efficient compression algorithm for these cameras.

    Considering the application on a satellite[1,2], the complexity of multi-spectral compression approaches must not be too high. For multi-spectral images, compression approaches usually use prediction, transform, or vector quantization. The prediction-based methods use the previously encoded band to predict the current band. The prediction error is then encoded by an entropy coding algorithm, such as adaptive binary arithmetic coding. The prediction-based approaches are widely used for 3D image (e.g., multi-spectral and hyper-spectral image) compression. To date, covering 1D, 2D, and 3D coefficients, prediction algorithms include hundreds of predictors. For on-board applications, the main prediction methods are differential pulse-code modulation (DPCM), adaptive DPCM, Consultative Committee for Space Data Systems-Lossless Data Compression (CCSDS-LDC), CCSDS Multispectral and Hyperspectral Data Compression (CCSDS-MHDC), Joint Photographic Experts Group-Lossless Standard (JPEG-LS), and the lookup table (LUT). These achieve good lossless compression performance. The prediction-based approaches are very simple and easily realized in hardware. However, they have much poorer error resilience and much lower lossy compression performance.
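As a minimal illustration of the prediction-based family described above (a sketch only, not the CCSDS-specified predictor; the function name is our own), inter-band DPCM predicts the current band from the previously encoded band and keeps just the residual:

```python
import numpy as np

def interband_dpcm(prev_band, cur_band):
    """Predict the current band by the previously encoded band and keep only
    the residual; an entropy coder would then encode this residual."""
    return np.asarray(cur_band, dtype=np.int32) - np.asarray(prev_band, dtype=np.int32)
```

When adjacent bands are strongly correlated, the residual has much lower entropy than the raw band, which is exactly why this family performs well losslessly.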

    The transform-based multi-spectral compression methods mainly use a 3D transform, of which there are two types: (1) an ordinary 3D transform, such as a 3D discrete wavelet transform (3D-DWT) or a 3D discrete cosine transform (3D-DCT); (2) a 2D transform in combination with another transform. In the first type, the ordinary 3D transform is applied to obtain the transform coefficients, which are then encoded by a 2D embedded zerotree wavelet (2D-EZW)[3], 3D embedded block coding with optimized truncation (3D-EBCOT)[4], 3D set partitioning in hierarchical trees (3D-SPIHT), a 3D set partitioning embedded block (3D-SPECK)[5], and so on. The 3D transform-based methods can remove the spatial, spectral, and sign redundancy of multi-spectral images and therefore achieve much better compression performance. However, they suffer not only from complex storage management but also from high compressor hardware complexity. In addition, these methods are only suitable for cameras having many bands. In the second type, the 2D transform is a 2D-DWT, a 2D-DCT, a fast Fourier transform (FFT), a Walsh–Hadamard transform[6], and so on, while the other transform is the Karhunen–Loève transform (KLT) or principal component analysis (PCA)[7,8]. The KLT removes the spectral redundancy, and the 2D transform removes the spatial redundancy. These approaches also achieve much better compression performance and are also suitable for multi-spectral images that have a few bands.

    Considering the spectral redundancy of few-band multi-spectral images, we propose a low-complexity compression algorithm based on removing spectral redundancy combined with bit-plane encoding (RSRA-BPE). The proposed method has potential applications in an on-orbit remote sensing off-axis three-mirror camera[9,10].

    A multi-spectral CCD is composed of several CCD arrays in parallel and produces several bands simultaneously. Figure 1 shows the process of multi-spectral CCD imaging. The light reflected and radiated by the ground target converges through the optical system onto the optical thin film on the CCD surface. Each band's CCD array captures the optical energy to obtain the corresponding spectral band image. Each band image contains 1D spatial information of the ground objects. At this point, a 1D spectral and 1D spatial image is obtained by the multi-spectral CCD camera. As the camera moves along the push-broom direction, the other 1D of spatial information of the ground objects is obtained. Therefore, the multi-spectral CCD camera produces 3D images. Because several bands covering the same ground objects are obtained simultaneously by the same multi-spectral CCD, the 3D images have both spatial and spectral redundancy. For two image blocks A and B at the same spatial location in two adjacent bands, the spectral correlation is defined as

    \rho(A,B) = \frac{\sum_{i=1}^{m}\left[(a_i - E[a])(b_i - E[b])\right]}{\sqrt{\sum_{i=1}^{m}(a_i - E[a])^{2}}\,\sqrt{\sum_{i=1}^{m}(b_i - E[b])^{2}}},  (1)

    where a_i and b_i are the pixels of A and B, E[a] and E[b] are the mean values of A and B, and m is the total number of pixels in one image block. According to Eq. (1), we test the spectral correlation of multi-spectral images having a few bands. We use four-band multi-spectral images taken by the SPOT satellite and the ground standard resolution figure obtained by testing the multi-spectral time delay and integration CCD (TDICCD) camera in the calibration laboratory. The tested spectral correlation coefficients are shown in Fig. 2: the correlation coefficient ρ between adjacent bands is greater than 0.7. Therefore, although they have fewer bands than hyper-spectral images, few-band multi-spectral images still have a strong spectral correlation, which should be exploited during compression.
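Equation (1) can be transcribed directly; the following is a sketch assuming NumPy, not the authors' code:

```python
import numpy as np

def spectral_correlation(block_a, block_b):
    """Correlation coefficient between two co-located blocks of adjacent bands, Eq. (1)."""
    a = np.asarray(block_a, dtype=np.float64).ravel()
    b = np.asarray(block_b, dtype=np.float64).ravel()
    da, db = a - a.mean(), b - b.mean()
    return (da * db).sum() / np.sqrt((da ** 2).sum() * (db ** 2).sum())
```

Two blocks related by a positive affine map give ρ = 1, which is the kind of strong inter-band correlation the SPOT test above reports.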

    Figure 1. Process of multi-spectral CCD imaging.

    Figure 2. Spectral correlation testing of multi-spectral images.

    To balance computational complexity and compression performance, we propose an efficient low-complexity removing spectral redundancy (LRSR) algorithm for multi-band CCD images.

    We consider every two bands as one group. The total number of bands of the multi-spectral CCD camera is denoted as P, so all bands are divided into P/2 groups. We use a Pearson-based approach to group the spectral bands. The correlation coefficient of two bands (denoted as X and Y) can be expressed as[11]

    \rho_{X,Y} = \frac{\mathrm{Cov}(X,Y)}{\mathrm{SD}(X)\,\mathrm{SD}(Y)} = \frac{E[(X-E(X))(Y-E(Y))]}{\sqrt{E\big((X-E(X))^{2}\big)\,E\big((Y-E(Y))^{2}\big)}},  (2)

    where ρ_{X,Y} ∈ [−1, 1]. If ρ_{X,Y} > 0, X and Y are positively correlated; if ρ_{X,Y} = 0, X and Y are uncorrelated; if ρ_{X,Y} < 0, X and Y are negatively correlated. The two bands having the maximum value of |ρ_{X,Y}| are placed in one group.
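One way to realize this grouping is a greedy pairing by maximum |ρ|. The Letter only states that the two bands with the maximal |ρ_{X,Y}| form a group; the greedy loop and the function name below are our own reading:

```python
import numpy as np
from itertools import combinations

def group_bands(bands):
    """Greedily pair spectral bands by maximum absolute Pearson correlation."""
    flat = [np.asarray(b, dtype=np.float64).ravel() for b in bands]
    remaining = list(range(len(bands)))
    groups = []
    while len(remaining) >= 2:
        # pick the remaining pair with the largest |rho| (Eq. (2))
        best = max(combinations(remaining, 2),
                   key=lambda p: abs(np.corrcoef(flat[p[0]], flat[p[1]])[0, 1]))
        groups.append(best)
        remaining = [i for i in remaining if i not in best]
    return groups
```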

    Each group undergoes removing spectral redundancy (RSR) to produce one main band and one sparse band. The energy of the group is concentrated into the main band, and the correlation within the group is removed. After all groups are processed, the next-level RSR is performed: the main bands are regrouped in the same way as in the first level, i.e., every two main bands are placed in one group, and each new group is processed by RSR. Each subsequent level proceeds in the same way. The number of levels, denoted as L, is equal to log2(P). When level l = L has been processed, all bands have been processed by RSR. The total number of groups processed by RSR, denoted as G, can be expressed as

    G = \frac{P}{2} + \frac{P}{4} + \cdots + \frac{P}{2^{L}}.  (3)

    Figure 3 shows the RSR process when the number of bands is 4; the number of levels is 2. In the first level, there are two groups. Group 1 is processed by RSR to produce the main band G1 and the sparse band G1′, and Group 2 is processed by RSR to produce the main band G2 and the sparse band G2′. In the second level, G1 and G2 form Group 3, which is processed by RSR into the main band G3 and the sparse band G3′. Finally, all bands are reduced to one main band G3 and three sparse bands G3′, G2′, and G1′.
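The level and group bookkeeping above can be checked with a few lines. This helper is our own (assuming P is a power of two, as in the four-band example):

```python
def rsr_schedule(P):
    """Number of RSR levels and groups per level for P bands (P a power of two).

    Returns (L, [P/2, P/4, ..., P/2**L], G) per Eq. (3)."""
    levels = P.bit_length() - 1                               # L = log2(P)
    groups_per_level = [P >> (l + 1) for l in range(levels)]  # P/2, P/4, ...
    return levels, groups_per_level, sum(groups_per_level)
```

For P = 4 this reproduces Fig. 3: two levels with two groups then one group, three RSR groups in total.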

    Figure 3. Process of RSR for four spectral bands.

    The RSR is used to remove the correlation between the two spectral bands in each group. Consider each band of the multi-spectral image in a group as a matrix; the ith band in a group is defined as

    H_i = \begin{bmatrix} h_{1,1,i} & h_{2,1,i} & h_{3,1,i} & \cdots & h_{L,1,i} \\ h_{1,2,i} & h_{2,2,i} & h_{3,2,i} & \cdots & h_{L,2,i} \\ \vdots & \vdots & \vdots & & \vdots \\ h_{1,M,i} & h_{2,M,i} & h_{3,M,i} & \cdots & h_{L,M,i} \end{bmatrix}_{L\times M}, \quad i = 1, 2,  (4)

    where L is the line number of the band and M is the column number of the band. H_i is composed of M line vectors, each with L elements. Each matrix is reshaped into a matrix having only one line vector by stacking its line vectors. Then H_1 and H_2 are merged into a new matrix H, which can be expressed as

    H = \begin{bmatrix} H_1' \\ H_2' \end{bmatrix} = \begin{bmatrix} h_{1,1,1} & h_{2,1,1} & h_{3,1,1} & \cdots & h_{L,M,1} \\ h_{1,1,2} & h_{2,1,2} & h_{3,1,2} & \cdots & h_{L,M,2} \end{bmatrix},  (5)

    where H_1' and H_2' are the row vectors stacked from the rows of H_1 and H_2, respectively. The mean value of each band is denoted as B_m, which can be expressed as

    B_m = [\mathrm{mean}_1, \mathrm{mean}_2],  (6)

    where mean_1 and mean_2 are the means of H_1' and H_2', respectively. Subtracting the corresponding mean from each band gives the mean-removed matrix

    H = \begin{bmatrix} h_{1,1,1}-\mathrm{mean}_1 & h_{2,1,1}-\mathrm{mean}_1 & \cdots & h_{L,M,1}-\mathrm{mean}_1 \\ h_{1,1,2}-\mathrm{mean}_2 & h_{2,1,2}-\mathrm{mean}_2 & \cdots & h_{L,M,2}-\mathrm{mean}_2 \end{bmatrix}.  (7)

    The covariance matrix of H is denoted as Cov(H), which can be expressed as

    \mathrm{Cov}(H) = \frac{1}{4} H H^{T} = \begin{bmatrix} \mathrm{cov}_{11} & \mathrm{cov}_{12} \\ \mathrm{cov}_{21} & \mathrm{cov}_{22} \end{bmatrix}.  (8)

    The eigenvector matrix of Cov(H) is defined as

    V = \begin{bmatrix} v_{11} & v_{12} \\ v_{21} & v_{22} \end{bmatrix}.  (9)

    The eigenvectors can be computed from the covariance matrix as

    v_{11} = v_{22} = \sqrt{\frac{1}{2} + \frac{\mathrm{cov}_{11}-\mathrm{cov}_{22}}{2\eta}} = \sqrt{1 - v_{21}^{2}},  (10)

    v_{21} = -v_{12} = \frac{\mathrm{cov}_{12}}{|\mathrm{cov}_{12}|}\sqrt{\frac{1}{2} - \frac{\mathrm{cov}_{11}-\mathrm{cov}_{22}}{2\eta}}, \quad \eta = \sqrt{(\mathrm{cov}_{11}-\mathrm{cov}_{22})^{2} + 4\,\mathrm{cov}_{12}\,\mathrm{cov}_{21}}.  (11)

    The diagonal matrix λ of Cov(H) is defined as

    \lambda = \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix},  (12)

    whose entries follow from the characteristic equation of Cov(H) as

    \lambda_1 = \frac{\mathrm{cov}_{11}+\mathrm{cov}_{22}+\eta}{2}, \quad \lambda_2 = \frac{\mathrm{cov}_{11}+\mathrm{cov}_{22}-\eta}{2}.  (13)

    Since Cov(H)V = Vλ, we have Cov(H) = VλV^{-1} and

    V^{T}\,\mathrm{Cov}(H)\,V = V^{T} V \lambda V^{-1} V = V^{T} V \lambda.  (14)

    In addition,

    V^{T}V = \begin{bmatrix} v_{11}^{2}+v_{21}^{2} & v_{11}v_{12}+v_{21}v_{22} \\ v_{11}v_{12}+v_{21}v_{22} & v_{12}^{2}+v_{22}^{2} \end{bmatrix}.  (15)

    Combined with Eqs. (10) and (11), the off-diagonal terms vanish, so

    V^{T}V = \begin{bmatrix} v_{11}^{2}+v_{21}^{2} & 0 \\ 0 & v_{12}^{2}+v_{22}^{2} \end{bmatrix},  (16)

    and Eq. (14) becomes

    V^{T}\,\mathrm{Cov}(H)\,V = \begin{bmatrix} \lambda_1 (v_{11}^{2}+v_{21}^{2}) & 0 \\ 0 & \lambda_2 (v_{12}^{2}+v_{22}^{2}) \end{bmatrix} = \Lambda,  (17)

    where Λ is a diagonal matrix. In addition, there is

    V^{T}\,\mathrm{Cov}(H)\,V = \frac{1}{4} V^{T} H H^{T} V.  (18)

    Consider that G = V^{T}H, so

    V^{T}\,\mathrm{Cov}(H)\,V = \frac{1}{4} (V^{T}H)(V^{T}H)^{T} = \frac{1}{4} G G^{T} = \mathrm{Cov}(G).  (19)

    According to Eqs. (17) and (19), Cov(G) = Λ. Because Cov(G) is a diagonal matrix, its off-diagonal elements are zero, so the components of G are uncorrelated. Therefore, the decorrelation equation for multi-spectral images can be expressed as

    G = V^{T} H.  (20)

    H is obtained from the multi-spectral images, and according to Eq. (20) the spectral redundancy can be removed. In essence, our RSR is the same idea as the KLT; however, our algorithm uses only two bands per computation, so it has low complexity.
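The two-band derivation can be sketched numerically. This sketch uses NumPy's symmetric eigendecomposition in place of the closed-form 2×2 expressions (the eigenvectors agree up to sign, and the constant 1/4 scale does not affect them); the function names and the 1/N normalization are our own:

```python
import numpy as np

def rsr_pair(band1, band2):
    """Decorrelate two bands via the 2x2 KLT: stack the mean-removed bands as
    rows of H, eigendecompose Cov(H), return G = V^T H (row 0 = main band,
    row 1 = sparse band) plus the side information needed to invert."""
    h1 = np.asarray(band1, dtype=np.float64).ravel()
    h2 = np.asarray(band2, dtype=np.float64).ravel()
    means = np.array([h1.mean(), h2.mean()])
    H = np.vstack([h1 - means[0], h2 - means[1]])  # 2 x (L*M)
    cov = H @ H.T / H.shape[1]                     # 2x2 covariance (scale-free choice)
    _, V = np.linalg.eigh(cov)                     # eigenvalues ascending
    V = V[:, ::-1]                                 # largest eigenvalue (main band) first
    return V.T @ H, V, means

def rsr_pair_inverse(G, V, means, shape):
    """Invert Eq. (20): H = V G, then add the band means back."""
    H = V @ G
    return (H[0] + means[0]).reshape(shape), (H[1] + means[1]).reshape(shape)
```

The rows of G are uncorrelated (Cov(G) is diagonal), and the transform is exactly invertible, which is what makes RSR usable inside a lossy or lossless coder.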

    In general, the pixel number of a multi-spectral CCD is relatively large, e.g., 4096 pixels per line of each CCD, which makes Eq. (20) computationally expensive. We therefore divide each group into several sub-blocks (see Fig. 4), and each sub-block is processed by RSR. In a group, each band is divided into several sub-blocks, and the sub-blocks at the same spatial location in the two bands are regrouped into a new 3D block that is processed by RSR.

    Figure 4. Spatial blocking.
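The blocking step can be sketched as a generator over co-located K2×K1 tiles of the two bands in a group (our own helper; for sizes that do not divide evenly, the border simply yields smaller tiles):

```python
import numpy as np

def block_pairs(band1, band2, k1, k2):
    """Yield co-located (k2 rows x k1 cols) sub-block pairs of two bands;
    each pair can be fed to an RSR step independently."""
    b1 = np.asarray(band1)
    b2 = np.asarray(band2)
    rows, cols = b1.shape
    for r in range(0, rows, k2):
        for c in range(0, cols, k1):
            yield b1[r:r + k2, c:c + k1], b2[r:r + k2, c:c + k1]
```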

    Note that a smaller sub-block size K2×K1 can impact the compression performance. We test four multi-spectral images; each group is 512×512 and has four bands.

    From Fig. 5, the compression performance begins to decrease when K1=K2=64. We used other few-band multi-spectral images to analyze the relationship between the peak signal-to-noise ratio (PSNR) and the sub-block size, and the same result was obtained. Weighing the computational complexity against the compression performance, and considering that the CCD outputs line by line, we set K2=64 and K1=N, where N is the number of pixels in each line of a band.

    Figure 5. Relation between compression performance and block size, where (a) is the tested multi-spectral image, and (b) is the test results.

    Based on the LRSR algorithm, Fig. 6 shows the overall structure of our compression algorithm for multi-spectral images. The compression algorithm contains two parts: (1) the LRSR unit and (2) the removing spatial correlation (RSC) unit. The LRSR unit removes spectral redundancy, and the RSC unit removes spatial redundancy. The LRSR unit has five stages: (1) grouping, (2) blocking, (3) 1-level RSR, (4) grouping, and (5) 2-level RSR. In each RSR level, Eqs. (4)–(20) are computed to remove the spectral redundancy. The RSC unit has two stages: (1) spatial sparsification and (2) bit-plane coding. In the spatial sparsification stage, a 2D-DWT is applied to each band. The bit-plane encoder (BPE) of the CCSDS Image Data Compression (IDC) standard is used to encode the wavelet coefficients[12].
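CCSDS-IDC specifies a 9/7 wavelet for the spatial stage; as a much simpler stand-in for illustration only, a one-level 2D Haar transform shows how the per-band DWT concentrates energy into the low-pass quadrant before bit-plane coding:

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar DWT (averages/differences), a toy stand-in for the
    9/7 float DWT that CCSDS-IDC actually specifies."""
    a = np.asarray(img, dtype=np.float64)
    # transform along rows
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
    row = np.hstack([lo, hi])
    # transform along columns
    lo = (row[0::2, :] + row[1::2, :]) / 2.0
    hi = (row[0::2, :] - row[1::2, :]) / 2.0
    return np.vstack([lo, hi])
```

For a smooth (here constant) band, all detail subbands are zero, so the BPE stage would encode them with very few bits.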

    Figure 6. Overall structure of the compression algorithm for multi-spectral images.

    In order to evaluate the compression performance of the proposed RSRA-BPE algorithm, we use a self-developed testing device. Figure 7 shows the testing experiment scheme. The testing platform includes an image simulation source, a multi-spectral compression system, a ground camera test device, a compression server, and a display system. The compression server produces the simulated multi-spectral images, which are transmitted to the image simulation source unit. The image simulation source unit adjusts the output line frequency, image size, and output timing to simulate the multi-spectral CCD output. The multi-spectral compression system compresses the received simulated multi-spectral images. The compression system uses a Xilinx Virtex-Pro FPGA with a 32-bit MicroBlaze processor. The compressed streams are received and decoded by the ground camera test device. The reconstructed image is transmitted to the compression server and the display system.

    Figure 7. Testing experiment scheme.

    The compression server injects the SPOT multi-spectral images into the image simulation source to test the compression performance of the proposed approach. Each group of multi-spectral images is 512 pixels × 512 pixels × 4 bands, and the pixel depth is 8 bits/pixel (bpp). We compare our algorithm with the CCSDS-IDC mono-spectral compressor. Figure 8 shows the tested PSNR of the two methods at 0.5–3 bpp. From Fig. 8, the PSNR of our algorithm is 0.36 dB higher than that of the CCSDS-IDC mono-spectral compressor at 0.5 bpp. Because we use the multi-level RSR technique to remove the spectral redundancy, our method outperforms the CCSDS-IDC mono-spectral compression method.

    Figure 8. Compression performance comparison with CCSDS-IDC.

    We use multiple QuickBird satellite images to further compare the performance of our method with those of the CCSDS-IDC method and the Hadamard post-transform (H-PT) method. The compression server injects the QuickBird testing multi-spectral images with four bands, and PSNR analysis is performed on the reconstructed images. We use different images from the testing image database to measure the corresponding PSNRs, and the average PSNR is taken as the PSNR of each method. The calculated PSNRs of the different methods are shown in Table 1. We also perform another image quality assessment using the mean measure of structural similarity (MSSIM), which is based on the hypothesis that the human visual system (HVS) is highly adapted to extracting structural information. The MSSIM values at different compression ratios are shown in Fig. 9. Because we use several key techniques, such as the multi-level RSR to remove the spectral redundancy and the BPE method, our method outperforms the traditional on-orbit compression methods.

    Figure 9. MSSIM values at different compression ratios.

    Bit rate (bpp)   CCSDS-IDC (dB)   H-PT (dB)   Our method (dB)
    0.5              48.8146          49.0258     49.7822
    1.0              52.5619          53.0898     53.6402
    1.5              55.3620          56.0288     56.5707
    2.0              57.2189          57.8321     58.3466
    2.5              58.7636          58.8558     59.1888
    3.0              59.6379          59.6850     59.7778

    Table 1. PSNR of Three Different Methods

    In order to analyze the processing speed of our algorithm, it is implemented on an FPGA processor. We use a self-developed CCD camera to test the compression time of our method. The line frequency of the CCD is 1.8094 kHz. The following compression speed is used only to evaluate the processing speed; the compression algorithm is not optimized for the FPGA implementation. The evaluations are based on the lossy compression of remote sensing multi-spectral images with four bands, where the size of each band is 3072×128. Table 2 compares the processing speed of our algorithm with traditional approaches. From Table 2, the data throughput of our algorithm reaches 23.81 MPixels/s at an 88 MHz working frequency, so less time is spent than with the JPEG2000, KLT, and 3D-SPIHT approaches. Compressing one 128×3072 band takes only 16.51 ms. Depending on the CCD imaging principle, our compression algorithm can be further optimized on the FPGA, which would reduce the compression time even more. Overall, our algorithm has low complexity and high performance and is very suitable for space applications.
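The two reported speed figures are mutually consistent for a single 128×3072 band, which can be checked with simple arithmetic (the numbers below are the ones quoted in the text; the check only confirms their consistency, assuming the 16.51 ms refers to one band):

```python
pixels_per_band = 128 * 3072           # pixels in one band
compression_time_s = 16.51e-3          # reported compression time for one band
throughput_mpixels = pixels_per_band / compression_time_s / 1e6
print(round(throughput_mpixels, 2))    # close to the reported 23.81 MPixels/s
```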

    Methods          Data Throughput (MSPS)
    KLT[13]          9.77
    3D-SPIHT[14]     16.04
    JPEG2000[15]     5.52
    Our approach     23.81

    Table 2. Data Throughput Comparison with Traditional Methods

    In conclusion, we propose an efficient compression algorithm for multi-spectral images that have a few bands. First, we propose a low-complexity RSR approach to improve the compression performance. Then, a BPE approach is applied to each band to complete the compression. Finally, experiments are performed on multi-spectral images. The experimental results show that the proposed compression algorithm has good compression performance. Compared with traditional approaches, the proposed method increases the average PSNR by 0.36 dB at 0.5 bpp. Moreover, the processing speed reaches 23.81 MPixels/s at a working frequency of 88 MHz, which is higher than that of traditional methods. The proposed method satisfies the requirements of the engineering application. Our method adopts the BPE method for encoding the transformed coefficients; however, BPE cannot remove the residual spectral redundancy. In the future, a distributed source coding method could replace BPE in the proposed method to remove the residual spectral redundancy. The proposed method could also be integrated into a compressed sensing approach[16] to reduce the computational complexity of the camera compressor.
