• Opto-Electronic Advances
  • Vol. 1, Issue 10, 180024 (2018)
[in Chinese]1,2,3,4, [in Chinese]1,2,*, [in Chinese]1,2, [in Chinese]2, and [in Chinese]4
Author Affiliations
  • 1Key Laboratory of Adaptive Optics, Chinese Academy of Sciences, Chengdu 610209, China
  • 2Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu 610209, China
  • 3University of Chinese Academy of Sciences, Beijing 100049, China
  • 4Yantai Institute of Coastal Zone Research, Chinese Academy of Sciences, Yantai 264003, China
DOI: 10.29026/oea.2018.180024
[in Chinese], [in Chinese], [in Chinese], [in Chinese], [in Chinese]. Underwater image enhancement based on red channel weighted compensation and gamma correction model[J]. Opto-Electronic Advances, 2018, 1(10): 180024

    Abstract

Due to the special characteristics of light in water, the information in the red channel of a collected image is severely attenuated, which causes other colors to dominate the image. This paper proposes an underwater image enhancement algorithm based on red channel weighted compensation and a gamma correction model. Firstly, by analyzing the attenuation characteristics of the RGB channels, the intensity and edge information of the red channel are compensated by weighting the attenuation coefficient ratios between the channels to correct the chromaticity. Then the gamma correction model is employed to stretch the intensity range and enhance the contrast, which makes the image look clearer. The experimental results show that, for underwater images with severe red component attenuation, the proposed algorithm can correct the color cast and improve the contrast by nearly 2 times.

    Introduction

The ocean contains a large amount of resources, and the video acquired by underwater imaging equipment is essential when exploring marine resources1. In recent years, more and more underwater robotic equipment has been put into application. Underwater resource exploration, underwater facilities maintenance, and submarine tourism require a large amount of underwater data2. Due to the special optical environment in the ocean, the acquired video is prone to quality degradation such as color shift, edge blur and contrast reduction3. Degraded images cause great trouble for subsequent research work4. Therefore, underwater imaging has received much attention and many scholars have carried out extensive research5.

The current improvement methods for underwater images mainly fall into two categories6. One category is based on traditional underwater image enhancement algorithms such as histogram equalization7, wavelet transform8, sharpening9 and Retinex10. These methods do not consider the principle of underwater imaging, but mainly optimize the color or contrast by adjusting pixel values. In 2007, Iqbal et al.11 proposed an underwater image enhancement algorithm using an integrated color model (ICM) based on sliding histogram stretching, which successively stretches the image in the RGB and HSI color spaces. However, the algorithm requires the parameters to be adjusted manually according to the input image. In 2010, Iqbal et al.12 proposed an unsupervised color correction method (UCM) based on the Von Kries hypothesis (VKH) and selective histogram stretching. The UCM algorithm can effectively improve the brightness, but the restored image still shows chromaticity unevenness. In 2015, Ghani et al.13 used the Rayleigh distribution function to redistribute the original image based on Iqbal's work. This algorithm can improve the contrast of the image, but it easily introduces noise and reduces the signal-to-noise ratio. The other category is based on image restoration algorithms, which establish a scattering model to reconstruct a clear image. Zhao et al.14 modeled the background light of the scattering model to provide optimized parameter design for underwater imaging systems under natural and artificial lighting conditions, but experimental results under this model were not given by the authors. Since the underwater environmental model is similar to the natural outdoor imaging model, many researchers applied the Dark Channel Prior (DCP) dehazing algorithm proposed by He et al.15 to underwater image restoration. Li et al.16 used guided triangular bilateral filtering based on the DCP algorithm to restore underwater images, but the image is overall dark after processing with this algorithm. Yang et al.17 proposed to use minimum and median filtering instead of soft matting to reduce the computational complexity of DCP. This algorithm uses color correction to improve the contrast, but the low-quality restoration results limit the visual effect of the output image. Drews et al.18 proposed an underwater DCP (UDCP) algorithm based on the blue-green channels to estimate a more accurate transmission map, but the restored image is prone to brightness saturation. Image restoration based on the dehazing model often involves complex calculation, which makes it unsuitable for real-time video systems.

Take a typical image captured at a depth of about 10 m in the Bohai Sea by the Yantai Institute of Coastal Zone Research, Chinese Academy of Sciences, as an example: the red component in the image is sharply attenuated, the blue-green background dominates the whole image, and the captured image is blurred. For images in similar scenes, we tested a variety of existing underwater enhancement and restoration algorithms, but the results after processing are not satisfactory. It is still challenging to restore images with red component attenuation. In this paper, a new underwater enhancement algorithm for similar complex scenes is proposed without estimating complex water parameters. The remainder of this paper is organized as follows. First, we analyze the attenuation characteristics of typical ocean images and describe the problems pertaining to underwater images. Then the enhancement model and algorithm are introduced in detail. Next, we compare the experimental results with those of other methods, and finally we conclude the paper.

Characteristic analysis of typical underwater images

Water absorbs light of longer wavelengths significantly. Most of the red light can only penetrate 2-3 m19. The light of different wavelengths decays with depth in water as shown in Fig. 1. In addition, underwater imaging is also affected by other factors such as distance, turbidity of the water and illumination. The attenuation of underwater light during propagation follows the Lambert-Beer law20, which states that the attenuation of light in a medium is exponentially related to the transmission distance, as shown in equation (1):

$ {t_\lambda }(x) = {{\rm{e}}^{ - {\beta _\lambda }d(x)}}\;\;, $

Figure 1. Light absorption at different wavelengths underwater.

where βλ is a transmission factor related to the wavelength λ, d(x) is the transmission distance, and tλ(x) is the transfer function of light.
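For illustration, the following Python sketch (not the authors' implementation) evaluates the exponential form of equation (1); the β values are hypothetical placeholders, not measured coefficients.

```python
import numpy as np

# Minimal numeric sketch of equation (1): t_lambda(x) = exp(-beta_lambda * d(x)).
# The beta values below are assumed placeholders for illustration only.
beta = {"red": 0.60, "green": 0.07, "blue": 0.02}   # hypothetical attenuation factors (1/m)
d = np.linspace(0.0, 10.0, 6)                        # transmission distances in metres

for color, b in beta.items():
    t = np.exp(-b * d)                               # fraction of light remaining after distance d
    print(color, np.round(t, 3))
# The red channel decays far faster with distance, consistent with Fig. 1.
```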

    The typical images collected about 10 m deep in the Bohai Sea are shown in Fig. 2. Obviously, the green color dominates the whole image.

Figure 2. The sample images.

As shown in Fig. 3, we separate the RGB channels and calculate the data distribution in each channel. The grayscale values in the red channel are concentrated at the far left of the histogram, and the edge features in the red channel are seriously lost, which results in an uneven color distribution in the RGB image. A channel with missing information is a great obstacle to subsequent image restoration. Therefore, it is very meaningful to propose an algorithm to estimate the pixel intensity and edge information in the red channel.

Figure 3. RGB channels and corresponding histogram distribution. (a) Original image. (b) R channel. (c) G channel. (d) B channel. (e) Histogram distribution of R. (f) Histogram distribution of G. (g) Histogram distribution of B.

    Proposed algorithm

    Red channel weighted compensation

    In RGB color space, different ratios of the three primary colors can produce various intermediate colors, such as white when the values of three primary colors are [255, 255, 255]. If the images are captured in outdoor conditions, the color of the sky background or artificial lighting area should tend to be at the vertex of the RGB color cube, as shown in Fig. 4. In theory, the pixel value in the red rectangle in Fig. 4(a) should approach the value in Fig. 4(b). The absence of red channel's information in Fig. 4(a) causes the background to be blue-green.

Figure 4. (a) Underwater image. (b) Estimated theoretical value in red rectangle. (c) RGB color cube.

The Gray World21 algorithm assumes that for an image with a large number of color changes, the average values of the RGB channels tend to the same gray value. Similarly, when a natural image captured in outdoor conditions is separated into RGB channels, the three separated channel images have similar edges. In the DCP algorithm, the guidance image can be chosen as any of the separated RGB channels when guided filtering22 is used to refine the transmission map. The typical images collected in the Bohai Sea are mainly attenuated in the red channel: the intensity and edge information are almost completely missing in the red channel, while the blue and green channels still retain the edge features. So we consider using the information of the other channels to compensate the red channel.

The experimental results in Ref. 23 show that there is an approximate linear relationship between the underwater scattering coefficient bλ and the wavelength λ. The authors derived this relation by applying least squares regression to data at nine wavelengths collected in different waters, and it has been widely quoted by later scholars. It can be formulated as follows:

    $ {{b}_{\lambda }}=(-0.00113\lambda +1.62517)b({{\lambda }_{\text{r}}})\;\;, $

where λr is a reference wavelength depending on the property of the measuring device (i.e., 555 nm for an AC9 meter in Ref. 23). In this paper, we only consider the proportional relationship between the scattering coefficients of different wavelengths, so b(λr) does not need to be obtained by additional means in advance. We select the wavelengths of red, green and blue to be 620 nm, 540 nm and 450 nm, respectively.
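A short sketch of this relation (illustrative Python, not the authors' code) is shown below; since only the ratios between channels are used later, b(λr) is omitted because it cancels.

```python
# Sketch of equation (2): b_lambda = (-0.00113*lambda + 1.62517) * b(lambda_r).
wavelengths = {"red": 620.0, "green": 540.0, "blue": 450.0}   # nm, as selected in the paper

def scatter_factor(lam_nm):
    """Wavelength-dependent factor of the scattering coefficient, with b(lambda_r) omitted."""
    return -0.00113 * lam_nm + 1.62517

b = {c: scatter_factor(lam) for c, lam in wavelengths.items()}
print(b)   # roughly {'red': 0.925, 'green': 1.015, 'blue': 1.117}
```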

    The attenuation coefficient cλ for different wavelength λ is inversely proportional to its corresponding background light Bλ, and is proportional to the scattering coefficient bλ24. So the ratios of attenuation coefficient between three channels can be calculated as follows:

    $ \frac{{{c}_{\text{g}}}}{{{c}_{\text{r}}}}=\frac{{{b}_{\text{g}}}{{B}_{\text{r}, \infty }}}{{{b}_{\text{r}}}{{B}_{\text{g}, \infty }}}\ \ , $

    $ \frac{{{c}_{\text{b}}}}{{{c}_{\text{r}}}}=\frac{{{b}_{\text{b}}}{{B}_{\text{r}, \infty }}}{{{b}_{\text{r}}}{{B}_{\text{b}, \infty }}}\ \ , $

where Bλ, ∞ is the background light at infinity. We take the brightest 0.1% of pixels in each channel as the background light in this paper.
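A minimal sketch of equations (3) and (4) follows (illustrative Python, not the authors' implementation); averaging the brightest 0.1% of pixels to obtain a single background-light value is our assumption.

```python
import numpy as np

def background_light(channel, top_fraction=0.001):
    """Estimate B_infinity of one channel from its brightest 0.1% pixels (mean is our assumption)."""
    flat = np.sort(channel.ravel())
    n = max(1, int(round(top_fraction * flat.size)))
    return float(flat[-n:].mean())

def attenuation_ratios(img_rgb, b):
    """Equations (3) and (4): c_g/c_r and c_b/c_r from scattering factors and background light."""
    R = img_rgb[..., 0].astype(float)
    G = img_rgb[..., 1].astype(float)
    B = img_rgb[..., 2].astype(float)
    Br, Bg, Bb = background_light(R), background_light(G), background_light(B)
    cg_cr = (b["green"] * Br) / (b["red"] * Bg)
    cb_cr = (b["blue"] * Br) / (b["red"] * Bb)
    return cg_cr, cb_cr
```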

    The attenuation of the underwater image remains approximately unchanged in a local region. So median filtering is adopted to keep the local intensity information of each channel. Define the three channels after filtering as Rmed, Gmed and Bmed:

    $ {{R}_{\text{med}}}={{f}_{\text{medfilt}}}(R)\ \ , $

    $ {{G}_{\text{med}}}={{f}_{\text{medfilt}}}(G)\ \ , $

    $ {{B}_{\text{med}}}={{f}_{\text{medfilt}}}(B)\ \ , $

where fmedfilt is the median filtering operation on each channel. The size of the filtering window is defined as 15×15.
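A minimal sketch of equations (5)-(7), assuming SciPy's median filter as the implementation of fmedfilt:

```python
from scipy.ndimage import median_filter

def local_intensity(img_rgb, window=15):
    """Equations (5)-(7): 15x15 median filtering of each channel to keep local intensity."""
    R_med = median_filter(img_rgb[..., 0], size=window)
    G_med = median_filter(img_rgb[..., 1], size=window)
    B_med = median_filter(img_rgb[..., 2], size=window)
    return R_med, G_med, B_med
```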

We know that the attenuation in the red channel is strong while the attenuation in the green and blue channels is weak, so we compensate the intensity information of the red channel based on the other two channels. Define the compensation coefficients of the three channels as ωr, ωg and ωb. According to the attenuation ratio, the compensation coefficients are normalized to:

    $ {{\omega }_{\text{r}}}={{(1+\frac{{{c}_{\text{g}}}}{{{c}_{\text{r}}}}+\frac{{{c}_{\text{b}}}}{{{c}_{\text{r}}}})}^{-1}}\ \ , $

    $ {\omega _{\rm{g}}} = \frac{{{c_{\rm{g}}}}}{{{c_{\rm{r}}}}} \cdot {(1 + \frac{{{c_{\rm{g}}}}}{{{c_{\rm{r}}}}} + \frac{{{c_{\rm{b}}}}}{{{c_{\rm{r}}}}})^{ - 1}}\;\;, $

    $ {\omega _{\rm{b}}} = \frac{{{c_{\rm{b}}}}}{{{c_{\rm{r}}}}} \cdot {(1 + \frac{{{c_{\rm{g}}}}}{{{c_{\rm{r}}}}} + \frac{{{c_{\rm{b}}}}}{{{c_{\rm{r}}}}})^{ - 1}}\;\;, $

    Then the red channel after weighted compensation can be calculated as:

    $ {R_{{\rm{new}}}} = {\omega _{\rm{r}}} \times R + {\omega _{\rm{g}}} \times G + {\omega _{\rm{b}}} \times B\;\;, $
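For concreteness, a short sketch of equations (8)-(11) (illustrative Python, not the authors' implementation); whether the raw or the median-filtered channels are fed into equation (11) is our reading of the preceding text.

```python
def compensate_red(R, G, B, cg_cr, cb_cr):
    """Equations (8)-(11): normalized weights and the weighted-compensated red channel.
    R, G and B here may be the median-filtered channels from the previous step."""
    norm = 1.0 + cg_cr + cb_cr
    w_r, w_g, w_b = 1.0 / norm, cg_cr / norm, cb_cr / norm   # w_r + w_g + w_b = 1
    return w_r * R + w_g * G + w_b * B                        # equation (11)
```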

As shown in equation (11), even if the intensity and edge information have been attenuated to a very low level, the information in the red channel can be compensated in this way. Note that the edge information is blurred after median filtering, so guided filtering22 is applied to refine the compensated red channel. Guided filtering filters a target image through a guidance image I, making the output image q similar to the target image in intensity but with edges similar to those of the guidance image I. The output image q and the guidance image I can be represented by the following local linear model:

    $ {q_i} = {a_k} \cdot {I_i} + {b_k}, \forall i \in {\omega _k}\;\;, $

where i and k are pixel indices, and ak and bk are the coefficients of the linear model when the window ωk is located at k.

Since the green channel is rich in edge information, we use the green channel as the guidance image to refine the compensated red channel Rnew and obtain the final red channel RnewGF. Finally, the compensated red channel and the original green and blue channels are combined into a new RGB image. The flow of red channel weighted compensation is shown in Fig. 5. Figure 5(a) is the original image. Figure 5(b) is the red channel after local intensity compensation. Figure 5(c) is the red channel after guided filtering of Fig. 5(b), and Fig. 5(d) is the final compensated image. It is obvious that the red channel is compensated and the background is bright and natural.
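The sketch below is a basic single-channel guided filter following He et al.22 (not the authors' implementation); the radius and eps values are illustrative choices, as the paper does not specify them.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, radius=8, eps=1e-3):
    """Refine target p using guidance I with the guided filter's local linear model."""
    size = 2 * radius + 1
    mean = lambda x: uniform_filter(x, size=size)        # box (mean) filter over the window
    mean_I, mean_p = mean(I), mean(p)
    cov_Ip = mean(I * p) - mean_I * mean_p
    var_I = mean(I * I) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)                           # per-window linear coefficients a_k
    b = mean_p - a * mean_I                              # per-window offsets b_k
    return mean(a) * I + mean(b)                         # output q

# Example: refine the compensated red channel with the green channel as guidance
# (values scaled to [0, 1]), then recombine with the original green and blue channels.
# R_new_GF = guided_filter(G / 255.0, R_new / 255.0) * 255.0
```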

Figure 5. (a) Original image. (b) Red channel after compensation. (c) Red channel after guided filtering. (d) New RGB image.

    Gamma correction model

After compensating the background color in the previous part, the resulting image still has low contrast and an unsatisfactory visual effect. So in this part, we use a method based on grayscale stretching to improve the contrast of the image.

Firstly, we calculate the histogram distribution of the three channels of the compensated image, as shown in Fig. 6(a). Then the cumulative histogram can be calculated from the histogram, so that the cumulative proportion is sorted by intensity as shown in Fig. 6(b). The gamma correction is based on the whole intensity distribution of the image. To account for extreme points in the image, such as noise caused by the environmental conditions or the imaging sensors, we search for the maximum and minimum values in the original image by means of a range search. Based on the previous data analysis, the grayscale values of the three channels are distributed over only part of the full range. We define the minimum value before stretching as Ilow and search for Ilow as follows:

$ {I_{{\rm{low}}}} = \min \{ \, i\;|\;\sum\nolimits_{j = 0}^i {hist(j)} \ge {r_1}, \;0 \le i \le 255\, \} \;\;, $

where $ \sum\nolimits_{j = 0}^i {hist(j)} $ is the proportion of pixels whose intensity is less than or equal to i, and r1 is a pre-set value.

Figure 6. (a) Histogram. (b) Cumulative histogram.

    Similarly, we define the maximum value before stretching as Ihigh and search for Ihigh as follows:

$ {I_{{\rm{high}}}} = \min \{ \, i\;|\;\sum\nolimits_{j = 0}^i {hist(j)} \ge {r_2}, \;0 \le i \le 255\, \} \;\;, $

    where r2=1-r1.

    The process of selecting the stretching interval [Ilow, Ihigh] in the cumulative histogram is shown in Fig. 6. In order to facilitate the reader's viewing, we set r1=0.1 in Fig. 6(b). Then its corresponding abscissa is the selected Ilow. Ihigh can be obtained in the same way.
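A minimal sketch of the search in equations (13) and (14) (illustrative Python, not the authors' implementation):

```python
import numpy as np

def stretch_limits(channel, r1=0.01, r2=0.99):
    """Smallest intensities whose cumulative proportion reaches r1 and r2 (equations (13), (14))."""
    hist, _ = np.histogram(channel, bins=256, range=(0, 256))
    cdf = np.cumsum(hist) / channel.size        # cumulative proportion of pixels
    I_low = int(np.searchsorted(cdf, r1))       # first i with cdf[i] >= r1
    I_high = int(np.searchsorted(cdf, r2))      # first i with cdf[i] >= r2
    return I_low, I_high
```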

We define the range after stretching as [Olow, Ohigh]. Theoretically, the image has the maximum contrast when the stretched range is [0, 255]. We use the following gamma correction formula to perform the stretching:

$ O(x) = \begin{cases} {O_{{\rm{low}}}}, & I(x) \le {I_{{\rm{low}}}} \\ {O_{{\rm{low}}}} + ({O_{{\rm{high}}}} - {O_{{\rm{low}}}}) \times {\left( \dfrac{I(x) - {I_{{\rm{low}}}}}{{I_{{\rm{high}}}} - {I_{{\rm{low}}}}} \right)^\gamma }, & {I_{{\rm{low}}}} < I(x) \le {I_{{\rm{high}}}} \\ {O_{{\rm{high}}}}, & {I_{{\rm{high}}}} < I(x) \end{cases}. $

In the original image, a pixel whose grayscale value is less than Ilow is assigned Olow in the output image, and a pixel whose grayscale value is greater than Ihigh is assigned Ohigh. For the correction parameter γ, if γ < 1, the range close to Ilow is stretched and the range close to Ihigh is compressed, so the image becomes brighter. On the contrary, if γ > 1, the image becomes darker. The gamma correction model is shown in Fig. 7.
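A sketch of equation (15) for one channel (illustrative Python, not the authors' implementation):

```python
import numpy as np

def gamma_stretch(channel, I_low, I_high, O_low=0.0, O_high=255.0, gamma=0.8):
    """Equation (15): piecewise gamma correction mapping [I_low, I_high] to [O_low, O_high]."""
    I = channel.astype(float)
    ratio = np.clip((I - I_low) / max(I_high - I_low, 1e-6), 0.0, 1.0)
    out = O_low + (O_high - O_low) * ratio ** gamma
    out[I <= I_low] = O_low      # grayscale values below I_low map to O_low
    out[I > I_high] = O_high     # grayscale values above I_high map to O_high
    return out.astype(np.uint8)
```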

Figure 7. Gamma correction curve.

The processing of an image after weighted compensation and gamma correction is shown in Fig. 8. The image shown in Fig. 8(c) is the result when r1=0.01, r2=0.99 and γ = 0.8. The contrast is significantly improved on the basis of weighted compensation. For a typical image collected in the Bohai Sea, the restoration results are shown in Fig. 9 when γ ranges from 0.3 to 1.5. When γ = 0.3, the overall brightness is enhanced and the upper part of the image is oversaturated. As γ increases, the brightness of the image gradually decreases. It is obvious that the restored image has a more natural visual effect.
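Putting the earlier sketches together, the following end-to-end example (illustrative only, relying on the functions defined above) mirrors this flow; applying the stretch independently to each channel is our assumption, as the paper does not state whether the limits are searched per channel or jointly.

```python
import numpy as np

def enhance(img_rgb, r1=0.01, r2=0.99, gamma=0.8):
    """End-to-end sketch: red channel compensation, guided refinement, gamma stretching."""
    b = {c: scatter_factor(lam) for c, lam in wavelengths.items()}
    cg_cr, cb_cr = attenuation_ratios(img_rgb, b)
    R_med, G_med, B_med = local_intensity(img_rgb)
    R_new = compensate_red(R_med, G_med, B_med, cg_cr, cb_cr)
    # Refine the compensated red channel with the green channel as guidance.
    R_ref = guided_filter(img_rgb[..., 1] / 255.0, R_new / 255.0) * 255.0
    out = np.stack([R_ref,
                    img_rgb[..., 1].astype(float),
                    img_rgb[..., 2].astype(float)], axis=-1)
    channels = []
    for k in range(3):
        I_low, I_high = stretch_limits(out[..., k], r1, r2)
        channels.append(gamma_stretch(out[..., k], I_low, I_high, gamma=gamma))
    return np.stack(channels, axis=-1)
```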

Figure 8. Algorithm flow. (a) Original image. (b) After compensation. (c) γ = 0.8.

Figure 9. (a) Original image. (b) γ = 0.3. (c) γ = 0.6. (d) γ = 0.9. (e) γ = 1.2. (f) γ = 1.5.

Another advantage of this algorithm is that the transition between bright areas and other areas is smooth, which can prevent distortion from external lighting sources. More experimental results are shown in the next section.

    Results and discussion

The experimental video is captured with a GoPro camera at a depth of about 10 m in the Bohai Sea by the Yantai Institute of Coastal Zone Research. The experimental hardware platform is a desktop with an Intel(R) Core(TM) i5-6500 CPU@3.20 GHz, 16 GB RAM and an NVIDIA GTX 950, and the testing software is VS2013 running on Windows 10. In order to verify the versatility of the proposed algorithm, we also select some traditional underwater images for simulation comparison. In our experiments, we set the parameters to r1=0.01, r2=0.99 and γ = 0.8.

    Subjective evaluation

Figure 10(a) shows four traditional underwater images selected from other literature. We compare our algorithm with He's DCP algorithm15, histogram equalization7, Gray World21 and Iqbal's UCM algorithm12 in Figs. 10(b)-10(f). As shown in Fig. 10(b), the results of the DCP algorithm still show a heavy color cast with only minor changes in brightness. The images processed by histogram equalization have a good visual effect, as shown in Fig. 10(c). However, on close inspection, the fish in the second image is obviously reddish, and the third and fourth images are partially reddish. We regard this as oversaturation: although the visual effect is good, it does not completely conform to the real scene. As shown in Fig. 10(d), the images processed by the Gray World algorithm are dark overall. This algorithm does not work well for images with many monochromatic patches; it is more suitable for scenes where the color cast is caused by different light sources. The images processed by Iqbal's UCM algorithm show color distortion in Fig. 10(e), and noise appears in some restored images with strong background light. As shown in Fig. 10(f), the algorithm proposed in this paper compensates the red channel attenuation caused by light absorption. The images look natural, the contrast is greatly improved, and the transition between background and foreground is smooth. Subjectively, the algorithm proposed in this paper can greatly improve the quality of underwater images and shows some advantages over the commonly used algorithms.

Figure 10. Comparison of different methods. (a) Original images. (b) DCP. (c) Histogram equalization. (d) Gray World. (e) UCM. (f) Our results.

For images collected in the Bohai Sea, Fig. 11 compares the results of commonly used algorithms and our algorithm. We compare our algorithm with histogram equalization7, Drews's UDCP algorithm18, Gray World21 and Iqbal's UCM algorithm12. As shown in Fig. 11(b), the intensities of pixels in the red channel are amplified by histogram equalization, which makes the result images look very red. The results of the UDCP algorithm are shown in Fig. 11(c); since the UDCP algorithm only takes the blue and green channels into account when calculating the dark channel, the red channel still cannot be well restored. The restored image only has a slight increase in brightness, and the upper half of the image appears oversaturated. As shown in Fig. 11(d), the images in this scene restored by the Gray World algorithm show a color-block effect. The results of the UCM algorithm are shown in Fig. 11(e). Since one of the steps in the UCM algorithm is white balancing, a similar color-block effect still appears in the result images, but the contrast is significantly improved compared with the Gray World algorithm. Our results are shown in Fig. 11(f). It is obvious that the background has been compensated. The restored images look natural and have a visual effect similar to what the human eye perceives in outdoor conditions.

Figure 11. Comparison of different methods on actual underwater images. (a) Original images. (b) Histogram equalization. (c) UDCP. (d) Gray World. (e) UCM. (f) Our result.

Some frames of the restored video are shown in Fig. 12. Frame 940, Frame 1300 and Frame 1360 contain strong background light. The restored background light is similar to the background light in natural outdoor conditions, and the overall effect of the restored frames tends to be consistent. The restored video plays a very important role in underwater real-time research. The related video is provided in the supplementary information.

Figure 12. Video restoration by proposed algorithm.

    Objective evaluation

    Since subjective evaluation is easily influenced by other factors such as knowledge background, mood and environment, the conclusions from different observers may differ. We also use objective evaluation factors to evaluate the quality of images. For the images processed in Fig. 11, we calculate the entropy and contrast to evaluate the quality of the restored images. Entropy represents the amount of information contained in the image. The larger the entropy of the restored image, the more information it contains. The contrast indicates the edge information of the image. The greater the contrast, the better the visual effect of the image. Both evaluation indicators reflect the clarity of the image.
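For reference, the sketch below computes the entropy metric under a common Shannon-entropy definition, which we assume matches the paper's; the precise definition of the contrast metric is not given in this section, so it is omitted here.

```python
import numpy as np

def entropy(gray):
    """Shannon entropy (bits) of an 8-bit grayscale image; assumed to match the paper's metric."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```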

As we can see in Table 1 and Table 2, since the images restored by UDCP are oversaturated, too much information is missing, and their entropy and contrast are reduced relative to the original images. Since the Gray World algorithm mainly adjusts color and does not enhance the images, the restored images do not add much extra information. The entropy of the images processed by histogram equalization is increased by about 10% and the contrast is improved by about 100%. The clarity of the images processed by the UCM algorithm is also slightly improved: the entropy is increased by about 13% in Example 1 and Example 2, and the contrast is improved by about 40% on the whole. The entropy of the images restored by the proposed algorithm is increased by about 25%, and the contrast is improved by about 2 times. We can conclude that the proposed algorithm performs better than the other algorithms.

Images    | Original image | Histogram equalization | UDCP   | Gray World | UCM    | Proposed algorithm
Example 1 | 6.0591         | 7.3843                 | 5.6661 | 5.2953     | 6.7993 | 7.6016
Example 2 | 6.2642         | 7.4076                 | 6.0231 | 5.5167     | 6.8102 | 7.6598
Example 3 | 6.8983         | 7.3646                 | 6.3745 | 6.0943     | 6.6168 | 7.7037
Example 4 | 6.6593         | 7.3714                 | 6.2255 | 5.9450     | 6.7546 | 7.7170
Example 5 | 6.7107         | 7.3956                 | 6.3283 | 5.9642     | 6.7796 | 7.7500

    Table 1. The entropy of images.

Images    | Original image | Histogram equalization | UDCP   | Gray World | UCM    | Proposed algorithm
Example 1 | 0.5296         | 1.6113                 | 0.0034 | 0.6198     | 0.7179 | 1.2163
Example 2 | 0.6092         | 1.1637                 | 0.0036 | 0.7238     | 0.9214 | 1.3514
Example 3 | 0.7225         | 1.4438                 | 0.0038 | 0.8002     | 1.1921 | 1.4655
Example 4 | 1.0048         | 1.9396                 | 0.0056 | 1.2923     | 1.4122 | 1.9548
Example 5 | 0.9635         | 1.7528                 | 0.0054 | 1.1069     | 1.3013 | 1.8473

    Table 2. The contrast of images.

    Based on the subjective visual effect and the objective evaluation, we think our algorithm is more suitable for this scenario.

In addition, since the algorithm does not need to estimate many complex water parameters in advance, we accelerate it in parallel with CUDA. The specific acceleration effect is shown in Table 3. Under the CUDA acceleration framework, we tested the computational time at four different resolutions and compared it with the total time spent on the CPU. As shown in Table 3, the acceleration effect is very obvious. It takes only 11.15 ms when the resolution is 640×480, a reduction of about 30 times compared with running on the CPU. For images with a resolution below 1280×810, the algorithm can also reach a speed of about 30 frames per second. This algorithm has great potential for applications in underwater real-time detection systems.

Resolution | CPU time/ms | CUDA time/ms | Speedup ratio
640×480    | 324.34      | 11.15        | 29.09
720×540    | 381.65      | 11.60        | 32.90
1080×720   | 1112.09     | 21.79        | 51.04
1280×810   | 1547.45     | 35.01        | 44.20

    Table 3. Calculation time for different resolutions.

    Conclusions

To address the image distortion caused by red component attenuation, this paper proposes a simple and effective underwater image restoration algorithm based on the attenuation characteristics of different wavelengths. A red channel weighted compensation model is established by analyzing the intensity information and attenuation characteristics of the different channels, and guided filtering is applied to refine the edge information of the compensated red channel. To improve the clarity of the images, the gamma correction model is used to improve the contrast. The experimental results show that the restored images are natural in color and the contrast is improved by about 2 times. The algorithm can process video with 1280×810 resolution at 30 frames per second after CUDA acceleration.

    Acknowledgements

We are grateful for financial support from the National Key Scientific Equipment Development Project of China (ZDYZ2013-2), the National High-Tech R&D Program of China (G128201-G158201, G128603-G158603), the Natural Science Foundation of China (11704382), and the Outstanding Youth Fund of Sichuan Province (2012JQ0012).

    Author contributions

    P Yang proposed the original idea and supervised the project. W D Xiang fabricated the samples and performed the measurements. S Wang and B Xu revisited and supervised the whole process. H Liu provided the experimental video.

    Competing interests

    The authors declare no competing financial interests.

    Supplementary information

    References

    [1] A Maccarone, A Mccarthy, X Ren, RE Warburton, AM Wallace et al. Underwater depth imaging using time-correlated single-photon counting. Opt Express, 23, 33911-33926(2015).

    [2] A Galdran, D Pardo, A Picón, A Alvarez-Gila. Automatic Red-Channel underwater image restoration. J Vis Commun Image Represent, 26, 132-145(2015).

    [3] J G Guan, J P Zhu, H Tian, X Hou. Real-time polarization difference underwater imaging based on Stokes vector. Acta Phys Sin, 64, 224203(2015).

    [4] M Yang, A Sowmya. An underwater color image quality evaluation metric. IEEE Trans Image Process, 24, 6062-6071(2015).

    [5] S Serikawa, H M Lu. Underwater image dehazing using joint trilateral filter. Comput Electr Eng, 40, 41-50(2014).

    [6] Y T Peng, P C Cosman. Underwater image restoration based on image blurriness and light absorption. IEEE Trans Image Process, 26, 1579-1594(2017).

    [7] C Y Li, J C Guo, R M Cong, Y W Pang, B Wang. Underwater image enhancement by dehazing with minimum information loss and histogram distribution prior. IEEE Trans Image Process, 25, 5664-5677(2016).

    [8] R Priyadharsini, T Sree Sharmila, V Rajendran. A wavelet transform based contrast enhancement method for underwater acoustic images. Multidimens Syst Signal Process, 29, 1845-1859(2018).

    [9] B He, Y Liang, X Feng, R Nian, T H Yan et al. AUV SLAM and experiments using a mechanical scanning forward-looking sonar. Sensors, 12, 9386-9410(2012).

    [10] S Zhang, T Wang, J Y Dong, H Yu. Underwater image enhancement via extended multi-scale Retinex. Neurocomputing, 245, 1-9(2017).

[11] K Iqbal, R Abdul Salam, A Osman, A Z Talib. Underwater image enhancement using an integrated colour model. IAENG Int J Comput Sci, 34, 239-244(2007).

[12] K Iqbal, M Odetayo, A James, R Abdul Salam, A Z H Talib. Enhancing the low quality images using unsupervised colour correction method. In Proceedings of 2010 IEEE International Conference on Systems, Man and Cybernetics (IEEE, 2010); http://doi.org/10.1109/ICSMC.2010.5642311.

    [13] A S A Ghani, N A M Isa. Underwater image quality enhancement through integrated color model with Rayleigh distribution. Appl Soft Comput, 27, 219-230(2015).

    [14] X W Zhao, T Jin, H Chi, S Qu. Modeling and simulation of the background light in underwater imaging under different illumination conditions. Acta Phys Sin, 64, 104201(2015).

    [15] K M He, J Sun, X O Tang. Single image haze removal using dark channel prior. IEEE Trans Pattern Anal Mach Intell, 33, 2341-2353(2011).

[16] C Y Li, J C Guo, Y W Pang, S J Chen, J Wang. Single underwater image restoration by blue-green channels dehazing and red channel correction. In Proceedings of 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (IEEE, 2016); http://doi.org/10.1109/ICASSP.2016.7471973.

[17] C Y Yang, P Y Chen, C C Huang, Y Z Zhuang, Y H Shiau. Low complexity underwater image enhancement based on dark channel prior. In Proceedings of the Second International Conference on Innovations in Bio-Inspired Computing and Applications (IEEE, 2011); http://doi.org/10.1109/IBICA.2011.9.

[18] P Drews Jr, E do Nascimento, F Moraes, S Botelho, M Campos. Transmission estimation in underwater single images. In Proceedings of 2013 IEEE International Conference on Computer Vision Workshops (IEEE, 2013); http://doi.org/10.1109/ICCVW.2013.113.

    [19] J Y Chiang, Y C Chen. Underwater image enhancement by wavelength compensation and dehazing. IEEE Trans Image Process, 21, 1756-1769(2012).

[20] M Yang, C L Gong. Underwater image restoration by turbulence model based on image gradient distribution. In Proceedings of the 2nd International Conference on Uncertainty Reasoning and Knowledge Engineering (IEEE, 2012); http://doi.org/10.1109/URKE.2012.6319570.

    [21] E Provenzi, C Gatta, M Fierro, A Rizzi. A spatially variant white-patch and gray-world method for color image enhancement driven by local contrast. IEEE Trans Pattern Anal Mach Intell, 30, 1757-1770(2008).

    [22] K M He, J Sun, X O Tang. Guided image filtering. IEEE Trans Pattern Anal Mach Intell, 35, 1397-1409(2013).

    [23] R W Jr Gould, R A Arnone, P M Martinolich. Spectral dependence of the scattering coefficient in case 1 and case 2 waters. Appl Opt, 38, 2377-2383(1999).

    [24] X W Zhao, T Jin, S Qu. Deriving inherent optical properties from background color and underwater image enhancement. Ocean Eng, 94, 163-172(2015).
