• Acta Optica Sinica
  • Vol. 40, Issue 1, 0111021 (2020)
Qi Wang1,2,3 and Yutian Fu1,2,*
Author Affiliations
  • 1Key Laboratory of Infrared System Detection and Imaging, Chinese Academy of Sciences, Shanghai 200083, China
  • 2Shanghai Institute of Technical Physics, Chinese Academy of Sciences, Shanghai 200083, China
  • 3University of Chinese Academy of Sciences, Beijing 100049, China
    DOI: 10.3788/AOS202040.0111021
    Qi Wang, Yutian Fu. Single-Image Refocusing Using Light Field Synthesis and Circle of Confusion Rendering[J]. Acta Optica Sinica, 2020, 40(1): 0111021
    Fig. 1. Framework of single image dynamic refocusing algorithm
    Fig. 2. Light field digital refocusing. (a) Sensor image; (b) sub-aperture image; (c) refocused image
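Light field digital refocusing (Fig. 2) is conventionally implemented as shift-and-add: every sub-aperture image is translated in proportion to its angular offset from the central view, and the shifted views are averaged, which places the synthetic focal plane at the depth whose disparity matches the chosen shift. The snippet below is a minimal numpy/scipy sketch of that idea, not the paper's implementation; the `L[u, v, y, x]` layout, the grayscale assumption, and the `slope` parameter name are illustrative.

```python
# Minimal shift-and-add refocusing sketch (illustrative, not the paper's code).
# light_field has shape (U, V, H, W): a U x V grid of grayscale sub-aperture images.
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(light_field: np.ndarray, slope: float) -> np.ndarray:
    """Average the sub-aperture images after shifting each one in
    proportion to its (u, v) offset from the central view."""
    U, V, H, W = light_field.shape
    u0, v0 = (U - 1) / 2.0, (V - 1) / 2.0      # central-view coordinates
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            dy = slope * (u - u0)              # vertical shift for this view
            dx = slope * (v - v0)              # horizontal shift for this view
            out += nd_shift(light_field[u, v].astype(np.float64),
                            (dy, dx), order=1, mode="nearest")
    return out / (U * V)
```

Sweeping `slope` over a range of values yields a stack of images focused at different depths, i.e., a focal stack.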
    Fig. 3. Principle of CoC rendering by gathering method
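In a gathering-style renderer, each output pixel collects light from the neighbouring source pixels whose circle of confusion covers it; under the standard thin-lens model the CoC diameter is approximately c = A·f·|z − z_f| / (z·(z_f − f)) for aperture diameter A, focal length f, focus distance z_f, and object distance z. The sketch below only illustrates the gathering step itself and is not the paper's renderer; it assumes a grayscale image and a precomputed per-pixel CoC radius map (in pixels) derived from the depth estimate.

```python
# Schematic gathering-based CoC rendering (a sketch under the stated assumptions).
import numpy as np

def gather_render(image: np.ndarray, coc_radius: np.ndarray, max_r: int = 5) -> np.ndarray:
    """image: (H, W) grayscale; coc_radius: (H, W) per-pixel CoC radius in pixels."""
    acc = np.zeros_like(image, dtype=np.float64)
    weight = np.zeros_like(image, dtype=np.float64)
    for dy in range(-max_r, max_r + 1):
        for dx in range(-max_r, max_r + 1):
            dist = np.hypot(dy, dx)
            # Shift so the source pixel at offset (-dy, -dx) lines up with the target pixel.
            src = np.roll(np.roll(image, dy, axis=0), dx, axis=1)
            src_r = np.roll(np.roll(coc_radius, dy, axis=0), dx, axis=1)
            # A source pixel contributes only if its CoC reaches the target,
            # with its energy spread over the CoC area.
            contrib = (src_r >= dist) / np.maximum(np.pi * src_r ** 2, 1.0)
            acc += contrib * src
            weight += contrib
    return acc / np.maximum(weight, 1e-8)
```

Note that np.roll wraps at the image borders and no occlusion handling is included; a practical renderer would need to address both.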
    Fig. 4. Deep network architecture. (a) Focused stack based method; (b) parallax based method
    Fig. 5. Comparison of monocular depth estimation results. (a) Center image; (b) disparity method; (c) focal stack method; (d) method proposed by Godard et al.[16]; (e) normalized method proposed by Cheng et al.[17] (SSIM: 0.756); (f) normalized disparity method (SSIM: 0.823); (g) normalized focal stack method (SSIM: 0.895); (h) method proposed by Jeon et al.[18]
    Fig. 6. Quantitative and qualitative comparison of light field synthesis methods. (a) Method proposed by Kalantari et al.[10]; (b) method proposed by Srinivasan et al.[13]; (c) our method; (d) ground truth center image; (e)(f)(g) rendered center images obtained by methods proposed by Kalantari et al.[10] and Srinivasan et al.[13]
    Fig. 7. Quantitative and qualitative comparison of rendering methods. (a) Center image; (b1)--(b4) refocused images; (c) depth estimation using focal stack method; (d1)--(d4) rendering results using focal stack method; (e) depth estimation using disparity method; (f1)--(f4) rendering results using disparity method; (g)(h)(i) results of occlusion detection, focus on foreground, and focus on background obtained by method proposed by Zhang et al.[5]; (j)(k)
    Fig. 8. Comparison of rendering results on different datasets. (a) Ground truth center image; (b) depth estimation; (c) refocusing on close positions; (d) refocusing on far positions
    Fig. 9. Comparison of rendering effects of real scenes with images captured by different cameras. (a) Original image; (b)--(e) refocused images at four depths rendered by our method; (f) depth map; (g)(h) images shot by dual cameras focused on two positions; (i)(j) images shot by a Canon camera focused on two positions
    Dataset | Quantity | Resolution | Format
    Stanford | 720 | 14×14×540×375 | LFR/mat/npy
    UCSD | 100 | 14×14×540×372 | PNG
    Flower | 3343 | 14×14×540×372 | PNG
    EPFL | 118 | 14×14×552×383 | LFR/mat/npy
    Table 1. Parameters for light field datasets
    Dataset | SSIM | PSNR /dB
    Stanford | 0.897 | 30.72
    UCSD | 0.912 | 32.11
    Flower | 0.923 | 32.89
    EPFL | 0.901 | 31.03
    Table 2. Quantitative analysis of rendering effects on light field datasets
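The SSIM and PSNR figures in Table 2 are the standard full-reference image-quality metrics, presumably computed between the rendered refocused images and ground-truth views from each light field dataset. A typical way to reproduce such numbers (using scikit-image, which is an assumption rather than the paper's stated tooling) is:

```python
# Standard SSIM / PSNR evaluation (scikit-image assumed; not stated in the paper).
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def evaluate(rendered: np.ndarray, reference: np.ndarray) -> tuple[float, float]:
    """Both images as float arrays in [0, 1] with shape (H, W, 3)."""
    ssim = structural_similarity(reference, rendered, channel_axis=-1, data_range=1.0)
    psnr = peak_signal_noise_ratio(reference, rendered, data_range=1.0)
    return ssim, psnr
```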