1. Introduction
Light field (LF) imaging has attracted great interest in recent years for its high-temporal-resolution 3D imaging capability, achieved by simultaneously capturing the 2D spatial and 2D angular information of light, i.e., the four-dimensional (4D) LF information[1–5]. In particular, the LF imaging method based on the wave-optics model and the LF point spread function (LFPSF) enables 3D deconvolution for high-quality single-shot volumetric reconstruction[6,7].

However, in some applications of LF imaging, a scattering medium such as biological tissue, fog, or turbid water is present in the scene[8–16]. In such scenes, signal light can still be captured and 3D reconstruction can still be performed, but the scattered light produced by the medium introduces blur and scattering background artifacts into the reconstructed 3D image, leading to low resolution and low contrast. For 3D-deconvolution-based LF imaging, sequences of recorded frames have been used to extract the ballistic light and undo the effect of scattering on the LF for real-time localization of neuronal activity[10,11], but the requirement of multiple frames rather than a single shot reduces the flexibility of the method. Alternatively, signal light and scattered light can be separated and reconstructed independently[14], but this increases the computational cost and the difficulty of the solution. In a more general approach, scattering is incorporated into the LF imaging forward model to resolve the mismatch between the scattering-free model and the LF measured through scattering, so that the scattering background artifacts can be removed from the 3D reconstruction[8,9]. However, scattering degrades the LF image and blurs the LFPSF patterns, which makes the inverse problem of 3D deconvolution more ill-conditioned and leads to noise-sensitive results[17].
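To make the role of the LFPSF-based forward model concrete, the sketch below shows a Richardson-Lucy-style 3D deconvolution in which an additive scattering background term is included in the forward projection, in the spirit of the general approach of Refs. [8,9]. It is a minimal illustration only: the function names, the per-depth convolutional treatment of the LFPSF (which neglects the spatial variance of a real LFPSF), and the scalar background term are simplifying assumptions, not the implementation used in the cited works.

```python
import numpy as np
from scipy.signal import fftconvolve

def forward(volume, lfpsf, background):
    """Project a 3D volume (Z, H, W) to a 2D LF image (H, W):
    sum of per-depth convolutions with the LFPSF plus an additive
    scattering background (simplified, depth-invariant model)."""
    img = np.zeros(volume.shape[1:])
    for z in range(volume.shape[0]):
        img += fftconvolve(volume[z], lfpsf[z], mode="same")
    return img + background

def backward(image, lfpsf):
    """Adjoint of forward(): correlate the 2D image with each
    depth's flipped LFPSF kernel, giving a (Z, H, W) volume."""
    return np.stack([fftconvolve(image, psf[::-1, ::-1], mode="same")
                     for psf in lfpsf])

def rl_deconvolve(lf_image, lfpsf, background=0.0, n_iter=30, eps=1e-8):
    """Richardson-Lucy 3D deconvolution with a scattering background
    term in the forward model (illustrative sketch only)."""
    vol = np.ones((lfpsf.shape[0],) + lf_image.shape)
    norm = backward(np.ones_like(lf_image), lfpsf) + eps
    for _ in range(n_iter):
        est = forward(vol, lfpsf, background) + eps       # current forward prediction
        vol *= backward(lf_image / est, lfpsf) / norm     # multiplicative RL update
    return vol

# Hypothetical usage with random data: 11 depth planes, 256x256 sensor
# pixels, 31x31 LFPSF kernels per depth.
lfpsf = np.random.rand(11, 31, 31)
lf_image = np.random.rand(256, 256)
volume = rl_deconvolve(lf_image, lfpsf, background=0.05)
```

The point of the sketch is the trade-off discussed above: when the background term and the blurred LFPSF are folded into the forward operator, the reconstruction can suppress scattering artifacts, but the effective system matrix becomes more ill-conditioned, so the multiplicative updates amplify noise unless the iteration is stopped early or regularized.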