Laser & Optoelectronics Progress, Vol. 59, Issue 10, 1010006 (2022)
Xianzhen Sang, Hongtai Zhu, Hu Cheng, and Ye Zhang
Author Affiliations
  • The 58th Research Institute of China Electronics Technology Group Corporation, Wuxi 214072, Jiangsu, China
    DOI: 10.3788/LOP202259.1010006
    Xianzhen Sang, Hongtai Zhu, Hu Cheng, Ye Zhang. Fast Image Defogging Method Based on Dark Channel and Global Estimation[J]. Laser & Optoelectronics Progress, 2022, 59(10): 1010006

    Abstract

    A fast image defogging method combining the dark channel prior and global estimation is proposed to address four weaknesses of the dark channel prior method: inaccurate transmittance estimation, halo effects at abrupt changes in scene depth, color distortion in sky regions, and poor real-time performance. First, the minimum intensity across the R, G, and B color channels of each pixel is computed. A global estimation method based on the atmospheric scattering model then yields a block-effect-free transmittance map through a simple, fast linear model. This estimate is linearly fused with the transmittance obtained by the dark channel prior method, and the fused transmittance is adaptively adjusted according to the characteristics of the foggy image to improve estimation accuracy. Finally, the defogged image is recovered using the atmospheric scattering model. Experimental results show that the transmittance values obtained by the proposed method are more accurate, effectively restoring image details while avoiding halo effects and color distortion. The algorithm also runs faster, making it better suited to the requirements of high-resolution vision systems.
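
    The pipeline the abstract outlines can be sketched in Python with NumPy. This is a minimal illustration assembled from the abstract alone, not the paper's actual algorithm: the global linear transmittance estimate (`t_glob`), the fusion weight `alpha`, the haze weight `omega`, and the lower bound `t0` are all assumed placeholders, and the adaptive adjustment step described in the abstract is omitted.

    ```python
    import numpy as np

    def defog(img, omega=0.95, t0=0.1, alpha=0.5):
        """Sketch of dark-channel + global-estimation defogging.

        img: float array of shape (H, W, 3) with values in [0, 1].
        omega, t0, alpha are assumed parameters, not values from the paper.
        """
        # Step 1: per-pixel minimum over the R, G, B channels.
        dark = img.min(axis=2)
        # Estimate atmospheric light A as the mean color of the brightest
        # 0.1% of pixels in the dark channel (a common heuristic).
        n = max(1, int(dark.size * 0.001))
        idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
        A = img[idx].mean(axis=0)
        # Transmittance from the dark channel prior: t = 1 - omega * min_c(I_c / A_c).
        t_dark = 1.0 - omega * (img / A).min(axis=2)
        # Placeholder for the paper's global estimation: a simple linear map
        # of the per-pixel minimum (coefficients are assumptions).
        t_glob = 1.0 - omega * dark / max(dark.max(), 1e-6)
        # Linear fusion of the two transmittance estimates.
        t = np.clip(alpha * t_dark + (1.0 - alpha) * t_glob, t0, 1.0)
        # Recover scene radiance from the atmospheric scattering model:
        # I = J * t + A * (1 - t)  =>  J = (I - A) / t + A
        J = (img - A) / t[..., None] + A
        return np.clip(J, 0.0, 1.0)
    ```

    Because every step is a vectorized array operation (no per-pixel loops or soft matting), a pixel-wise sketch like this runs in time linear in the image size, which is in the spirit of the real-time goal the abstract states.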