• Chinese Optics Letters
  • Vol. 22, Issue 5, 050501 (2024)
Liqing Wu1,2,3, Naijie Qi1,2,3, Chengcheng Chang1,2, Hua Tao1,2, Xiaoliang He1,2, Cheng Liu1,2,*, and Jianqiang Zhu1,2,**
Author Affiliations
  • 1Key Laboratory of High Power Laser and Physics, Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai 201800, China
  • 2National Laboratory on High Power Laser and Physics, China Academy of Engineering Physics, Chinese Academy of Sciences, Shanghai 201800, China
  • 3University of Chinese Academy of Sciences, Beijing 100049, China
    DOI: 10.3788/COL202422.050501
    Liqing Wu, Naijie Qi, Chengcheng Chang, Hua Tao, Xiaoliang He, Cheng Liu, Jianqiang Zhu. Linear mathematical model for the unique solution of 3D ptychographic iterative engine[J]. Chinese Optics Letters, 2024, 22(5): 050501

    Abstract

    Diffraction intensities of the 3D ptychographic iterative engine (3PIE) were written as a set of linear equations of the self-correlations of Fourier components of all sample slices, and an effective computing method was developed to solve these linear equations for the transmission functions of all sample slices analytically. With both theoretical analysis and numerical simulations, this study revealed the underlying physics and mathematics of 3PIE and demonstrated for the first time, to our knowledge, that 3PIE can generate mathematically unique reconstruction even with noisy data.

    1. Introduction

    Recently, computational imaging has become a research hotspot in the optical field, especially phase retrieval[1–4]. Coherent diffraction imaging (CDI)[5,6] is a class of phase retrieval techniques using various iterative algorithms. The G–S algorithm[7,8] is the earliest CDI algorithm; it records diffraction intensities at two separated planes. The error reduction (ER) algorithm[9] and Fienup’s hybrid input–output (HIO) algorithm[10], which record only one frame of diffraction intensity, have much faster convergence and much better reconstruction quality than the G–S algorithm. Ptychography[11,12] was invented by Walter Hoppe to reconstruct objects with periodic structures and has been used successfully in materials inspection with X-rays and high-energy electrons[13,14]. By combining the CDI algorithm and the ptychography technique, Rodenburg proposed the ptychographic iterative engine (PIE)[15] to solve the twin-image and slow-convergence problems of classical CDI. PIE scans the sample through a localized light beam to many positions and reconstructs the complex transmission function of the sample from the diffraction intensities recorded at all scanning positions. The overlap between adjacent illuminated regions greatly improves the convergence speed and reconstruction quality, and PIE has been realized with visible light[16], X-rays[17,18], high-energy electron beams[19], and terahertz waves[20,21]. While the original PIE required exactly known illumination and sample positions, good reconstructions can be achieved by the extended ptychographic iterative engine (ePIE) algorithm[22], which reconstructs the sample and the illumination wavefront simultaneously, and by annealing or cross-correlation algorithms that correct the scanning positions of the sample[23,24], greatly improving the performance of PIE and extending its applications[25–27].
Applying the multislice theory of electron microscopy[28], 3D imaging can also be realized with PIE by regarding a 3D object as a series of infinitely thin 2D layers. Compared with traditional 3D imaging methods such as optical coherence tomography[29] and magnetic resonance tomography[30], which generate intensity images, the 3D ptychographic iterative engine (3PIE)[31,32] can rapidly provide a high-quality 3D phase image of a transparent volume object. While 3PIE was first demonstrated experimentally with X-rays under the geometric projection approximation[33,34], it was also realized with visible light with diffraction taken into consideration[32]. Single-shot 3PIE[35,36] was realized by recording a sub-diffraction-pattern array with one detector exposure, making 3D phase imaging of dynamic processes possible[37]. 3PIE has shown good performance in 3D phase imaging; however, there has been no analytical theory to explain why it works or to establish whether its reconstruction is mathematically unique. In experiments, it was never clear how the reconstruction accuracy was affected by the optical alignment used, hindering the further development of 3PIE. Furthermore, since the analytical relationship between the recorded diffraction intensities and the reconstructed images had not been established, quantitative and analytical error analysis of the reconstructed phase was impossible, preventing 3PIE from being applied in optical measurement and metrology, where mathematical uniqueness of the reconstruction and analytical error analysis are critical[38].

    To investigate the underlying physics and mathematics of the 3PIE algorithm, in this study the diffraction intensities are written as a linear equation set in the self-correlations of the Fourier components of the sample slices, and the spatial components of all sample slices are determined analytically by solving this linear equation set. Furthermore, an efficient computing method is proposed that requires only a small amount of computer memory and solves this linear equation set rapidly. The influence of noise on the proposed linear model and computing method is also considered; both show strong noise immunity, and the influence of detector noise can be effectively reduced by simply dividing all recorded intensities into groups and summing each group. Alongside the theoretical analysis, numerical simulations verify the feasibility of the proposed model and computing method. This study proves the mathematical uniqueness of the 3PIE reconstruction for the first time and puts forward a fast computing method to obtain an analytical reconstruction from diffraction intensities, promoting the development of 3PIE in optical measurement and metrology, where a strictly unique mathematical solution and quantitative error analysis are required.

    2. Theory and Method

    2.1. Theoretical Analysis

    The optical alignment of 3PIE is shown schematically in Fig. 1, where the volume object is assumed to be composed of two layers, and the laser beam $P(x_1,y_1)$ incident on the first layer is generated by a parallel beam passing through a tiny aperture. The interval between the two object layers is $d_1$, and their transmission functions are $S_1(x_1,y_1)$ and $S_2(x_2,y_2)$, respectively. The distance from the second object layer to the CCD is $z$. The light field leaving the first object layer, $U_1(x_1,y_1)=P(x_1,y_1)S_1(x_1,y_1)$, acts as the illumination of the second object layer after propagating the distance $d_1$. The illumination arriving at the second object layer can be written as

$$U_2(x_2,y_2)=\mathcal{F}^{-1}\{\mathcal{F}[P(x_1,y_1)S_1(x_1,y_1)]\cdot H(u,v)\}=\mathcal{F}^{-1}\{[\tilde{P}(u,v)\otimes\tilde{S}_1(u,v)]\cdot H(u,v)\},\tag{1}$$

    where $\tilde{P}(u,v)$ and $\tilde{S}_1(u,v)$ are the Fourier transforms of $P(x_1,y_1)$ and $S_1(x_1,y_1)$, respectively, and $H(u,v)=\exp[ikd_1\sqrt{1-(\lambda u)^2-(\lambda v)^2}]$ is the angular-spectrum transfer function, with $\lambda$ the wavelength and $k=2\pi/\lambda$ the wavenumber. The symbols $\mathcal{F}$ and $\mathcal{F}^{-1}$ denote the Fourier transform and its inverse, and $\otimes$ denotes 2D convolution. The light field arriving at the detector is the Fresnel diffraction of the light transmitted by the second layer, $U_2(x_2,y_2)S_2(x_2,y_2)$:

$$U_3(x,y)=\frac{e^{ikz}}{i\lambda z}\exp\!\Big[\frac{ik}{2z}(x^2+y^2)\Big]\iint U_2(x_2,y_2)S_2(x_2,y_2)\exp\!\Big[\frac{ik}{2z}(x_2^2+y_2^2)\Big]\exp\!\Big[-\frac{i2\pi}{\lambda z}(x_2x+y_2y)\Big]\,\mathrm{d}x_2\,\mathrm{d}y_2.\tag{2}$$
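The layer-to-layer propagation uses the angular-spectrum transfer function H(u, v). A minimal numpy sketch of this step (array size, beam shape, and sampling values are illustrative, not taken from the paper):

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, d):
    """Propagate a sampled 2D field a distance d using the transfer function
    H(u, v) = exp[i k d sqrt(1 - (lambda u)^2 - (lambda v)^2)]."""
    n = field.shape[0]
    u = np.fft.fftfreq(n, d=dx)                      # spatial frequencies
    U, V = np.meshgrid(u, u, indexing="ij")
    arg = 1 - (wavelength * U) ** 2 - (wavelength * V) ** 2
    # evanescent components (arg < 0) are suppressed
    H = np.where(arg > 0,
                 np.exp(1j * 2 * np.pi / wavelength * d * np.sqrt(np.maximum(arg, 0))),
                 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Propagating a band-limited field forward by d and then back by -d returns the original field, which is a quick sanity check of the implementation.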

    Figure 1. Schematic diagram of 3PIE with a two-slice object.

    By defining $S_2'(x_2,y_2)=S_2(x_2,y_2)\exp[\frac{ik}{2z}(x_2^2+y_2^2)]$, $f_x=x/(\lambda z)$, and $f_y=y/(\lambda z)$, Eq. (2) can be rewritten as

$$U_3(\lambda zf_x,\lambda zf_y)=\frac{e^{ikz}}{i\lambda z}\exp[i\pi\lambda z(f_x^2+f_y^2)]\cdot[\tilde{U}_2(f_x,f_y)\otimes\tilde{S}_2'(f_x,f_y)].\tag{3}$$
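The Fresnel step to the detector can be implemented as a single-FFT propagation: pre-multiply by the quadratic chirp absorbed into S2', take one FFT, then apply the output-plane phase. A sketch under illustrative sampling assumptions (the fftshift centering convention is ours):

```python
import numpy as np

def fresnel_single_fft(field, wavelength, dx, z):
    """Single-FFT Fresnel propagation: chirp in the input plane, one FFT,
    then the constant and quadratic phase factors of the output plane."""
    n = field.shape[0]
    k = 2 * np.pi / wavelength
    x = (np.arange(n) - n // 2) * dx
    X, Y = np.meshgrid(x, x, indexing="ij")
    chirped = field * np.exp(1j * k / (2 * z) * (X**2 + Y**2))   # S2 -> S2'
    spec = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(chirped))) * dx**2
    fx = np.fft.fftshift(np.fft.fftfreq(n, d=dx))
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    return (np.exp(1j * k * z) / (1j * wavelength * z)
            * np.exp(1j * np.pi * wavelength * z * (FX**2 + FY**2)) * spec)
```

The output is sampled at the scaled coordinates (lambda z fx, lambda z fy), with pixel pitch lambda z / (n dx); total power is conserved between the two planes, which serves as a numerical check.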

    The intensity at the $(k,l)$th pixel of the detector can be written in discrete form as

$$I(k,l)=\Big\{\sum_{m_2,n_2}\Big[\sum_{m_1,n_1}\tilde{P}(f_{m_1},f_{n_1})\tilde{S}_1(f_{m_1}+f_{m_2},f_{n_1}+f_{n_2})\Big]H(f_{m_2},f_{n_2})\,\tilde{S}_2'(f_{m_2}+f_k,f_{n_2}+f_l)\Big\}\cdot\Big\{\sum_{m_2',n_2'}\Big[\sum_{m_1',n_1'}\tilde{P}(f_{m_1'},f_{n_1'})\tilde{S}_1(f_{m_1'}+f_{m_2'},f_{n_1'}+f_{n_2'})\Big]H(f_{m_2'},f_{n_2'})\,\tilde{S}_2'(f_{m_2'}+f_k,f_{n_2'}+f_l)\Big\}^{*},\tag{4}$$

    where $f_{x_1}=x_2/(\lambda d_1)$ and $f_{y_1}=y_2/(\lambda d_1)$, $\tilde{S}_2'(f_x,f_y)$ is the Fourier transform of $S_2'(x_2,y_2)$, and $*$ denotes complex conjugation. For simplicity of discussion, Eq. (4) can be rewritten in the compact form

$$I(k,l)=\sum_{m_2,n_2}\sum_{m_2',n_2'}\sum_{m_1,n_1}\sum_{m_1',n_1'}\tilde{S}_1(f_{m_1}+f_{m_2},f_{n_1}+f_{n_2})\,\tilde{S}_1^{*}(f_{m_1'}+f_{m_2'},f_{n_1'}+f_{n_2'})\,\tilde{S}_2'(f_{m_2}+f_k,f_{n_2}+f_l)\,\tilde{S}_2'^{*}(f_{m_2'}+f_k,f_{n_2'}+f_l)\,H(f_{m_2},f_{n_2})H^{*}(f_{m_2'},f_{n_2'})\,\tilde{P}(f_{m_1},f_{n_1})\tilde{P}^{*}(f_{m_1'},f_{n_1'}).\tag{5}$$

    In 3PIE, the illumination term $\tilde{P}(f_{m_1},f_{n_1})\tilde{P}^{*}(f_{m_1'},f_{n_1'})$ and the transfer-function term $H(f_{m_2},f_{n_2})H^{*}(f_{m_2'},f_{n_2'})$ are known; their products define a coefficient matrix $A=H(f_{m_2},f_{n_2})H^{*}(f_{m_2'},f_{n_2'})\,\tilde{P}(f_{m_1},f_{n_1})\tilde{P}^{*}(f_{m_1'},f_{n_1'})$. The unknowns are the self-correlations of the spectra of the 3D object layers,

$$x_{m_1,n_1,m_2,n_2}^{m_1',n_1',m_2',n_2'}=\tilde{S}_1(f_{m_1}+f_{m_2},f_{n_1}+f_{n_2})\,\tilde{S}_1^{*}(f_{m_1'}+f_{m_2'},f_{n_1'}+f_{n_2'})\,\tilde{S}_2'(f_{m_2}+f_k,f_{n_2}+f_l)\,\tilde{S}_2'^{*}(f_{m_2'}+f_k,f_{n_2'}+f_l),$$

    and the measured intensities form the vector $B=I(k,l)|_{k=1,\dots,K;\,l=1,\dots,L}$. All the linear equations of Eq. (5) can then be written in the matrix form $AX=B$, whose solution is

$$X=A^{-1}B.\tag{6}$$
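The claim behind X = A^{-1}B is plain linear algebra: when A has full rank, the solution X is unique. A toy demonstration with a small random complex system (sizes illustrative; a random Gaussian matrix is almost surely full rank and stands in for the real coefficient matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50                                                       # toy number of unknowns
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))   # full rank a.s.
X_true = rng.normal(size=n) + 1j * rng.normal(size=n)        # "sample" unknowns
B = A @ X_true                                               # simulated intensities
X = np.linalg.solve(A, B)                                    # X = A^{-1} B
assert np.allclose(X, X_true)
```

If A were rank-deficient, `np.linalg.solve` would fail or the recovered X would not match, which mirrors the full-rank condition discussed in the text.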

    Assuming that the detector records effective intensity information over $M\times N$ pixels, Eq. (6) contains $M\times N$ linear equations. Undoubtedly, as long as the number of equations is not smaller than the number of unknowns, all the $x_{m_1,n_1,m_2,n_2}^{m_1',n_1',m_2',n_2'}$ can be calculated. However, the object information cannot be obtained directly from these computed unknowns, which mix together the spectral components of the layers. From all $x_{m_1,n_1,m_2,n_2}^{m_1',n_1',m_2',n_2'}$ we can fix the specific indices $m_2=m_2^0$, $n_2=n_2^0$, $m_1'=m_1'^0$, $n_1'=n_1'^0$, $m_2'=m_2'^0$, and $n_2'=n_2'^0$ and pick out a new vector $x_1$ with $(m_1,n_1)$ as the running variables,

$$x_1=\tilde{S}_1(f_{m_1}+f_{m_2^0},f_{n_1}+f_{n_2^0})\big[\tilde{S}_1^{*}(f_{m_1'^0}+f_{m_2'^0},f_{n_1'^0}+f_{n_2'^0})\,\tilde{S}_2'(f_{m_2^0}+f_k,f_{n_2^0}+f_l)\,\tilde{S}_2'^{*}(f_{m_2'^0}+f_k,f_{n_2'^0}+f_l)\big]=C\,\tilde{S}_1(f_{m_1},f_{n_1})\big|_{m_1=1,\dots,M;\ n_1=1,\dots,N},\tag{7}$$

    where $C$ is a constant whose value is determined by the fixed indices. Physically, a light field multiplied by a constant is essentially the same as the original field; therefore $x_1$ is equivalent to $\tilde{S}_1(f_{m_1},f_{n_1})|_{m_1=1,\dots,M;\,n_1=1,\dots,N}$, and the first layer $S_1(x_1,y_1)$ is obtained by an inverse Fourier transform of $x_1$. Similarly, the second layer $S_2(x_2,y_2)$ is obtained from another vector,

$$x_2=C'\,\tilde{S}_2(f_{m_2},f_{n_2})\big|_{m_2=1,\dots,M;\ n_2=1,\dots,N}.\tag{8}$$

    Since the number of unknown elements $x_{m_1,n_1,m_2,n_2}^{m_1',n_1',m_2',n_2'}$ is huge, much larger than the $M\times N$ detector pixel count in most cases, Eq. (6) cannot be solved with a single frame of recorded diffraction intensity. The condition for a unique reconstruction is that the coefficient matrix $A$ has full rank. When the sample is shifted by $m_x\delta_x$ and $n_y\delta_y$ along the $x$ and $y$ directions, respectively, the known phase-shifting factor $e^{i[(f_{m_1}+f_{m_2})m_x\delta_x+(f_{n_1}+f_{n_2})n_y\delta_y]}$ multiplies $A$, providing additional linear equations. A full-rank $A$ is easily obtained when the sample is shifted to positions with random intervals. By scanning the sample over many positions, we obtain a huge linear equation group from Eq. (5), from which all $x_{m_1,n_1,m_2,n_2}^{m_1',n_1',m_2',n_2'}$ and the corresponding $S_1(x_1,y_1)$ and $S_2(x_2,y_2)$ can be computed analytically.
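The phase-shifting factor follows from the Fourier shift theorem: translating the sample by whole pixels multiplies its spectrum by a known linear phase ramp. A quick numerical check (array size and shifts illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
s = rng.random((n, n))                    # a sample layer
S = np.fft.fft2(s)
mx, ny = 5, 3                             # shift in pixels
f = np.fft.fftfreq(n)                     # frequencies in cycles/sample
FX, FY = np.meshgrid(f, f, indexing="ij")
ramp = np.exp(-2j * np.pi * (FX * mx + FY * ny))   # known phase factor
shifted = np.fft.ifft2(S * ramp).real
assert np.allclose(shifted, np.roll(s, (mx, ny), axis=(0, 1)))
```

Each scan position thus contributes a new set of equations whose coefficients differ from the unshifted ones only by this known ramp.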

    The above analysis considered a volume object composed of only two layers, but a similar analysis can be carried out for a volume object composed of $L$ layers; the only difference is that the unknown element $x_{m_1,n_1,m_2,n_2}^{m_1',n_1',m_2',n_2'}$ is replaced by the product of the corresponding self-correlation terms of all $L$ layer spectra. The mathematical analysis is the same as that shown in Eqs. (1)–(8). If the volume object is sliced into $L$ layers of size $M\times N$, there are $(MN)^{2L}$ unknowns to be solved. The largest number of uncorrelated linear equations available from one frame of the diffraction pattern is $MN$; then, to get $(MN)^{2L}$ uncorrelated linear equations, the sample should be scanned to at least $(MN)^{2L}/(MN)=(MN)^{2L-1}$ positions. It is obvious that the required number of scanning positions increases exponentially with the layer number $L$. In experiments, where the number of scanning positions cannot be very large because positioning error always grows with the scanning range, a good reconstruction can be achieved with 3PIE when the sample is sliced into a very limited number of layers or has a very limited number of spatial components; that is, when $L$ or $MN$ takes small values.

    2.2. Efficient Computing Method

    In the above mathematical analysis, computing all $(MN)^{2L}$ unknown terms requires $(MN)^{2L}$ uncorrelated linear equations, so the size of $A$ is $(MN)^{2L}\times(MN)^{2L}$, an unreasonably large matrix for most workstations. It is therefore impractical to compute all unknown terms directly with Eq. (6).

    We can find from the above analysis that only a very small number of the computed unknown terms are finally used to reconstruct the sample slices, so it is not essential to compute all of them. Furthermore, many of the $x_{m_1,n_1,m_2,n_2}^{m_1',n_1',m_2',n_2'}$ have zero values, and these terms need not be computed. Figures 2(a) and 2(b) show the amplitude transmissions of the two sample layers, and Figs. 2(c) and 2(d) show the moduli of their Fourier spectra $\tilde{S}_1(f_{m_1},f_{n_1})$ and $\tilde{S}_2(f_{m_2},f_{n_2})$ in log scale, respectively. Figure 2(e) shows the product $\tilde{S}_1(f_{m_1}+f_{m_2},f_{n_1}+f_{n_2})\tilde{S}_1^{*}(f_{m_1'}+f_{m_2'},f_{n_1'}+f_{n_2'})\tilde{S}_2'(f_{m_2}+f_k,f_{n_2}+f_l)\tilde{S}_2'^{*}(f_{m_2'}+f_k,f_{n_2'}+f_l)$, where we can clearly find that most values are very close to zero except for pixels around the center. Thus it is possible to find an efficient computing method that calculates the sample’s transmission functions without huge computer memory.

    Figure 2. Amplitude transmissions and spectra of two layers.

    To illustrate the generation of the diffraction intensity with Eq. (5) intuitively, a $3\times3$ $\tilde{P}(f_m,f_n)$, a $3\times3$ $\tilde{S}_1(f_m,f_n)$, and a $3\times3$ $\tilde{S}_2(f_m,f_n)$ are used in Fig. 3 to show the computation of $\tilde{U}_2(2,2)$, $\tilde{U}_2(2,1)$, $\tilde{U}_2(1,2)$, and $\tilde{U}_2(1,1)$ in Figs. 3(a)–3(d), respectively. The orange grids $\bar{\tilde{S}}_1(f_m,f_n)$, indicating the reversed spatial-component matrix of the first layer, are shifted by varying numbers of pixels in the $x$ and $y$ directions with respect to the green grids indicating the illumination spatial components $\tilde{P}(f_m,f_n)$. These four values can be written as

$$\begin{cases}\tilde{U}_2(2,2)=\bar{\tilde{S}}_1(1,1)\tilde{P}(1,1)H(2,2)\\ \tilde{U}_2(2,1)=[\bar{\tilde{S}}_1(1,0)\tilde{P}(1,1)+\bar{\tilde{S}}_1(1,1)\tilde{P}(1,0)]H(2,1)\\ \tilde{U}_2(1,2)=[\bar{\tilde{S}}_1(0,1)\tilde{P}(1,1)+\bar{\tilde{S}}_1(1,1)\tilde{P}(0,1)]H(1,2)\\ \tilde{U}_2(1,1)=[\bar{\tilde{S}}_1(0,0)\tilde{P}(1,1)+\bar{\tilde{S}}_1(0,1)\tilde{P}(1,0)+\bar{\tilde{S}}_1(1,0)\tilde{P}(0,1)+\bar{\tilde{S}}_1(1,1)\tilde{P}(0,0)]H(1,1).\end{cases}\tag{9}$$
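Eq. (9) enumerates the corner entries of a full 2D convolution of the illumination spectrum with the flipped first-layer spectrum. The same entries can be checked against an FFT-based full convolution (0-based indices below; only the corner structure matters, and the H factor, which multiplies each output pixel, is omitted):

```python
import numpy as np

def conv2_full(a, b):
    """Full 2D linear convolution of two arrays via zero-padded FFTs."""
    s = (a.shape[0] + b.shape[0] - 1, a.shape[1] + b.shape[1] - 1)
    return np.fft.ifft2(np.fft.fft2(a, s) * np.fft.fft2(b, s))

rng = np.random.default_rng(0)
P = rng.random((3, 3)) + 1j * rng.random((3, 3))   # illumination spectrum
S1 = rng.random((3, 3)) + 1j * rng.random((3, 3))  # first-layer spectrum
C = conv2_full(P, S1)                              # 5x5 result
# corner pixel: a single product, like U2(2,2) = S1(1,1) P(1,1) H(2,2)
assert np.allclose(C[4, 4], P[2, 2] * S1[2, 2])
# its neighbor: a sum of two products, like U2(2,1) in Eq. (9)
assert np.allclose(C[4, 3], P[2, 2] * S1[2, 1] + P[2, 1] * S1[2, 2])
```

The corner entries involve the fewest products, which is exactly why the step-by-step recovery described below the figures can start from them.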

    Figure 3. Formation of $\tilde{U}_2(f_m,f_n)$ on varying pixels.

    Similarly, Figs. 4(a)–4(d) illustrate the formation of $\tilde{U}_3(3,3)$, $\tilde{U}_3(3,2)$, $\tilde{U}_3(2,3)$, and $\tilde{U}_3(2,2)$, respectively, where $\bar{\tilde{S}}_2(f_m,f_n)$ in light pink, indicating the reversed spatial-component matrix of the second layer, is shifted by varying numbers of pixels in the $x$ and $y$ directions with respect to the blue grids indicating the illumination spatial components $\tilde{U}_2(f_m,f_n)$. They can be written as

$$\begin{cases}\tilde{U}_3(3,3)=\bar{\tilde{S}}_2(1,1)\tilde{U}_2(2,2)\\ \tilde{U}_3(3,2)=\bar{\tilde{S}}_2(1,1)\tilde{U}_2(2,1)+\bar{\tilde{S}}_2(1,0)\tilde{U}_2(2,2)\\ \tilde{U}_3(2,3)=\bar{\tilde{S}}_2(0,1)\tilde{U}_2(2,2)+\bar{\tilde{S}}_2(1,1)\tilde{U}_2(1,2)\\ \tilde{U}_3(2,2)=\bar{\tilde{S}}_2(0,0)\tilde{U}_2(2,2)+\bar{\tilde{S}}_2(0,1)\tilde{U}_2(2,1)+\bar{\tilde{S}}_2(1,0)\tilde{U}_2(1,2)+\bar{\tilde{S}}_2(1,1)\tilde{U}_2(1,1),\end{cases}\tag{10}$$

    and the corresponding intensities are

$$\begin{cases}I(3,3)=|\bar{\tilde{S}}_2(1,1)\tilde{U}_2(2,2)|^2\\ I(3,2)=|\bar{\tilde{S}}_2(1,1)\tilde{U}_2(2,1)|^2+|\bar{\tilde{S}}_2(1,0)\tilde{U}_2(2,2)|^2+\tilde{U}_2(2,1)\bar{\tilde{S}}_2(1,1)\tilde{U}_2^{*}(2,2)\bar{\tilde{S}}_2^{*}(1,0)+\tilde{U}_2(2,2)\bar{\tilde{S}}_2(1,0)\tilde{U}_2^{*}(2,1)\bar{\tilde{S}}_2^{*}(1,1)\\ I(2,3)=|\bar{\tilde{S}}_2(0,1)\tilde{U}_2(2,2)|^2+|\bar{\tilde{S}}_2(1,1)\tilde{U}_2(1,2)|^2+\tilde{U}_2(2,2)\bar{\tilde{S}}_2(0,1)\tilde{U}_2^{*}(1,2)\bar{\tilde{S}}_2^{*}(1,1)+\tilde{U}_2(1,2)\bar{\tilde{S}}_2(1,1)\tilde{U}_2^{*}(2,2)\bar{\tilde{S}}_2^{*}(0,1)\\ I(2,2)=|\bar{\tilde{S}}_2(0,0)\tilde{U}_2(2,2)|^2+|\bar{\tilde{S}}_2(0,1)\tilde{U}_2(2,1)|^2+|\bar{\tilde{S}}_2(1,0)\tilde{U}_2(1,2)|^2+|\bar{\tilde{S}}_2(1,1)\tilde{U}_2(1,1)|^2\\ \qquad\quad+\bar{\tilde{S}}_2(0,0)\tilde{U}_2(2,2)\tilde{U}_2^{*}(2,1)\bar{\tilde{S}}_2^{*}(0,1)+\bar{\tilde{S}}_2(0,0)\tilde{U}_2(2,2)\tilde{U}_2^{*}(1,2)\bar{\tilde{S}}_2^{*}(1,0)+\bar{\tilde{S}}_2(0,0)\tilde{U}_2(2,2)\tilde{U}_2^{*}(1,1)\bar{\tilde{S}}_2^{*}(1,1)\\ \qquad\quad+\bar{\tilde{S}}_2(0,1)\tilde{U}_2(2,1)\tilde{U}_2^{*}(2,2)\bar{\tilde{S}}_2^{*}(0,0)+\bar{\tilde{S}}_2(0,1)\tilde{U}_2(2,1)\tilde{U}_2^{*}(1,2)\bar{\tilde{S}}_2^{*}(1,0)+\bar{\tilde{S}}_2(0,1)\tilde{U}_2(2,1)\tilde{U}_2^{*}(1,1)\bar{\tilde{S}}_2^{*}(1,1)\\ \qquad\quad+\bar{\tilde{S}}_2(1,0)\tilde{U}_2(1,2)\tilde{U}_2^{*}(2,2)\bar{\tilde{S}}_2^{*}(0,0)+\bar{\tilde{S}}_2(1,0)\tilde{U}_2(1,2)\tilde{U}_2^{*}(2,1)\bar{\tilde{S}}_2^{*}(0,1)+\bar{\tilde{S}}_2(1,0)\tilde{U}_2(1,2)\tilde{U}_2^{*}(1,1)\bar{\tilde{S}}_2^{*}(1,1)\\ \qquad\quad+\bar{\tilde{S}}_2(1,1)\tilde{U}_2(1,1)\tilde{U}_2^{*}(2,2)\bar{\tilde{S}}_2^{*}(0,0)+\bar{\tilde{S}}_2(1,1)\tilde{U}_2(1,1)\tilde{U}_2^{*}(2,1)\bar{\tilde{S}}_2^{*}(0,1)+\bar{\tilde{S}}_2(1,1)\tilde{U}_2(1,1)\tilde{U}_2^{*}(1,2)\bar{\tilde{S}}_2^{*}(1,0).\end{cases}\tag{11}$$

    Figure 4. Formation of $\tilde{U}_3(f_m,f_n)$ on varying pixels.

    When multiplied by a constant in the spatial or spatial-frequency domain, the sample’s transmission function does not change essentially, so for simplicity we can assume $\bar{\tilde{S}}_2(1,1)=1$ (or any other given value) without loss of generality. Then $\tilde{U}_2(2,2)$ can be computed as $\tilde{U}_2(2,2)=\sqrt{I(3,3)}$ from the first equation of Eq. (11), and $\bar{\tilde{S}}_1(1,1)$ can be determined as $\bar{\tilde{S}}_1(1,1)=\sqrt{I(3,3)}/[\tilde{P}(1,1)H(2,2)]$ from the first equation of Eq. (9). According to Figs. 3(b) and 4(b), $\bar{\tilde{S}}_2(1,0)$ and $\bar{\tilde{S}}_1(1,0)$ can then be computed using the already-computed $\bar{\tilde{S}}_2(1,1)$ and $\bar{\tilde{S}}_1(1,1)$. For clarity, we define $\bar{\tilde{S}}_1(1,0)=x$ and $\bar{\tilde{S}}_2(1,0)=y$; the intensity $I_t(3,2)$ at the $(3,2)$th pixel when the sample is scanned to the $t$th position can then be written as

$$\begin{cases}I_t(3,2)=|A_1y|^2+|A_2x|^2+|A_3|^2+A_1A_2^{*}yx^{*}+A_1A_3^{*}y+A_2A_1^{*}xy^{*}+A_2A_3^{*}x+A_3A_1^{*}y^{*}+A_3A_2^{*}x^{*}\\ A_1=\bar{\tilde{S}}_1(1,1)e^{i\Delta_t}\tilde{P}(1,1)H(2,2)e^{i\varepsilon_t}\\ A_2=\bar{\tilde{S}}_2(1,1)e^{i\theta_t}\tilde{P}(1,1)H(2,1)e^{i\delta_t}\\ A_3=\bar{\tilde{S}}_2(1,1)e^{i\theta_t}\bar{\tilde{S}}_1(1,1)e^{i\Delta_t}\tilde{P}(1,0)H(2,1),\end{cases}\tag{12}$$

    where $e^{i\Delta_t}$, $e^{i\varepsilon_t}$, $e^{i\theta_t}$, and $e^{i\delta_t}$ are the additional phase factors of $\bar{\tilde{S}}_1(1,1)$, $\bar{\tilde{S}}_2(1,0)$, $\bar{\tilde{S}}_2(1,1)$, and $\bar{\tilde{S}}_1(1,0)$ caused by the $t$th shift of the sample, respectively. Since $|A_1y|^2$, $|A_2x|^2$, and $|A_3|^2$ do not change with the sample position, these three terms can be eliminated by subtracting $I_1(3,2)$ from $I_t(3,2)|_{t=2,3,4,5,6,7}$, yielding six linear equations. From these six linear equations we can compute the six unknowns $\{yx^{*},y,xy^{*},x,y^{*},x^{*}\}$ of Eq. (12), which give the values of $\bar{\tilde{S}}_2(1,0)$ and $\bar{\tilde{S}}_1(1,0)$. With the same strategy, the values of $\bar{\tilde{S}}_1(0,0)$ and $\bar{\tilde{S}}_2(0,0)$ can be computed in the next step, and all other spatial components of the two sample layers can be computed in the same way. The transmission functions of the two sample layers are then obtained by inverse Fourier transforms of all the computed $\tilde{S}_1(f_{m_1},f_{n_1})$ and $\tilde{S}_2(f_{m_2},f_{n_2})$. For objects represented by large matrices, this point-by-point calculation takes a certain amount of time; however, for objects with many layers, and unlike the traditional iterative 3PIE algorithm, which must wait for convergence, this method computes all layers at the same time without extra time cost.
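The subtraction trick can be verified numerically. In the sketch below, x and y stand for the unknown components, a1 to a3 are hypothetical stand-ins for the known fixed factors inside A1 to A3 (already-recovered components times P and H values), the shift phases are drawn at random, and a couple of scan positions beyond the minimal six are used for numerical robustness:

```python
import numpy as np

rng = np.random.default_rng(1)
x_true = 0.7 * np.exp(0.3j)               # plays the unknown S1bar(1,0)
y_true = 1.2 * np.exp(-1.1j)              # plays the unknown S2bar(1,0)
a = rng.normal(size=3) + 1j * rng.normal(size=3)   # known fixed factors

rows, rhs, base = [], [], None
for t in range(9):                        # scan positions
    dt, et, tht, dlt = rng.uniform(0, 2 * np.pi, 4)   # shift phase factors
    A1 = a[0] * np.exp(1j * (dt + et))
    A2 = a[1] * np.exp(1j * (tht + dlt))
    A3 = a[2] * np.exp(1j * (tht + dt))
    I = abs(A1 * y_true + A2 * x_true + A3) ** 2      # measured pixel value
    c = np.array([A1 * np.conj(A2), A1 * np.conj(A3), A2 * np.conj(A1),
                  A2 * np.conj(A3), A3 * np.conj(A1), A3 * np.conj(A2)])
    if base is None:
        base = (c, I)                     # reference position t = 1
    else:
        rows.append(c - base[0])          # the three modulus terms cancel
        rhs.append(I - base[1])

# unknown vector [y x*, y, x y*, x, y*, x*]
v, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs, dtype=complex), rcond=None)
assert np.allclose(v[3], x_true, atol=1e-6)
assert np.allclose(v[1], y_true, atol=1e-6)
```

With generic random shift phases the coefficient rows are linearly independent, so the least-squares solve recovers the exact unknowns from noise-free intensities.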

    With the above computing method, two spatial components of the sample slices are computed in each step using seven linear equations, and the required computer memory is very small; the linear system then becomes much easier to solve than by directly using Eq. (6). A two-layer sample was used as an example in the above analysis, but the transmission function of a volume object composed of more layers can be computed in a similar way. When the sample is composed of three layers, four layers, or $L$ layers, the number of diffraction patterns required is 13, 21, and $L^2+L+1$, respectively.
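The pattern counts quoted above follow the stated quadratic rule; a one-line check (the helper name is ours):

```python
def patterns_needed(layers: int) -> int:
    """Diffraction patterns required by the sequential method: L^2 + L + 1."""
    return layers * layers + layers + 1

assert [patterns_needed(L) for L in (2, 3, 4)] == [7, 13, 21]
```

This grows only quadratically with the layer number, in contrast to the exponential scan count of the direct approach.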

    3. Numerical Simulations

    To check the feasibility of the above theoretical analysis and the proposed computing method, a series of numerical simulations was carried out. Two biological images of 512×512 pixels, shown in Figs. 5(a) and 5(b), were used as the amplitude transmissions of the two layers of a volume sample, and the two images in Figs. 5(c) and 5(d) were used as the corresponding phase retardations. The interval between the two layers was 1 mm. The probe light illuminating the sample was a parallel 632.8 nm laser beam passing through an aperture 0.7 mm in radius placed 30 mm from the sample, equal to the distance from the sample to the detector. The amplitude and phase of the illumination are shown in Figs. 5(e) and 5(f), respectively, and the strengths of the Fourier components of the two sample layers and of the illumination are shown in log scale in Figs. 5(g)–5(i). When the sample was shifted by distances of (450 µm, 450 µm), (446 µm, 900 µm), (450 µm, 1320 µm), (890 µm, 450 µm), (905 µm, 920 µm), (900 µm, 1361 µm), and (1330 µm, 450 µm), the seven frames of diffraction patterns shown in Fig. 6 were obtained. The pixel size of the detector was assumed to be 9 µm.

    Figure 5. Object and illumination. (a), (b) Amplitude of the two object layers; (c), (d) corresponding phases of the two layers; (e), (f) amplitude and phase of the illumination; (g)–(i) spectra, in log scale, of the two object layers and the illumination. The scale bar in (a) applies to (b)–(d) and (g)–(i).

    Figure 6. Seven diffraction patterns used in the computation.

    With the computing method discussed above, the strengths of the Fourier components of the two sample layers computed with Eqs. (9)–(12) are shown in Figs. 7(a) and 7(b) in log scale, respectively. For quantitative comparison, the differences between the reconstructed and original images were calculated as $\mathrm{error}=\big||O_{\mathrm{re}}|-|O_{\mathrm{origin}}|\big|$ and are shown in Figs. 7(c) and 7(d). We can find that the difference is around 0.05%, at the level of the computing accuracy of a common 32-bit desktop computer.

    Figure 7. Reconstructed spectra of two layers. (a), (b) Recovered spectra of the two layers; (c), (d) difference between the reconstructed and original spectra.

    By doing an inverse Fourier transform on the computed Fourier components in Fig. 7, we get the moduli and phases of the two sample layers, shown in Figs. 8(a)–8(d). The differences of the computed moduli and phases from their original values are shown in Figs. 8(e)–8(h); they are all at the scale of about $10^{-2}$, the computing accuracy of a common 32-bit workstation. The results in Figs. 7 and 8 perfectly match our theoretical expectations of Eqs. (10)–(12), proving the correctness of the above theoretical analysis and the suggested computing method.

    Figure 8. Reconstruction of the two-layer object. (a)–(d) Amplitudes and phases of the two object layers; (e)–(h) differences of the moduli and phases from their original values.

    In the above studies, we did not consider experimental factors such as detector noise, an inevitable error source in PIE experiments. If there is random noise $\Delta B$ in the diffraction intensity, the linear equation set becomes $AX=B+\Delta B$, and the spatial components of the sample slices can no longer be accurately computed directly from the noisy diffraction intensities. If the sample is scanned over many positions so that a large enough number of diffraction intensities is recorded, we can add together a large number of linear equations corresponding to the same detector pixel:

$$A_1X+A_2X+\cdots+A_NX=B_1+B_2+\cdots+B_N+\Delta B_1+\Delta B_2+\cdots+\Delta B_N.\tag{13}$$

    When $N$ is large enough, $\frac{1}{N}\sum_m\Delta B_m$ approaches zero, and Eq. (13) becomes $\frac{1}{N}\sum_m A_mX=\frac{1}{N}\sum_m B_m$. Then $X$, free of the influence of detector noise, can be computed as $X=(\frac{1}{N}\sum_m A_m)^{-1}\frac{1}{N}\sum_m B_m$. That means that, by shifting the sample to more positions and recording more diffraction patterns, we can remarkably suppress the influence of detector noise and obtain an accurate 3PIE reconstruction with the analytical method illustrated above. As the approximation holds only when $\Delta B$ is small, this method is not applicable when the external noise is too large.
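The averaging argument can be checked with a toy repeated-measurement model: the same coefficient matrix is reused while each frame carries fresh zero-mean noise, so the summed system approaches the noise-free one (sizes and noise level are illustrative, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
A = rng.normal(size=(n, n))               # fixed, known coefficient matrix
X_true = rng.normal(size=n)
errors = []
for N in (4, 400):                        # number of summed noisy frames
    B_sum = np.zeros(n)
    for _ in range(N):
        B_sum += A @ X_true + 0.1 * rng.normal(size=n)   # noisy frame
    X_est = np.linalg.solve(N * A, B_sum)  # X = (sum A)^-1 (sum B)
    errors.append(np.linalg.norm(X_est - X_true))
assert errors[1] < errors[0]              # more frames, smaller error
```

The residual error shrinks roughly as $1/\sqrt{N}$, consistent with the small-noise approximation stated in the text.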

    To verify the robustness of this anti-noise computing method, another set of simulations was carried out by adding Poisson noise to the diffraction intensities shown in Fig. 6. The sample was shifted over 7×7 positions, yielding 49 frames of diffraction patterns, and Poisson noise [Fig. 9(a)] with strength between 1 and 2 was added to each pattern, giving a signal-to-noise ratio (SNR) of 20 dB. These noisy intensities were divided into seven groups; after the diffraction patterns in each group were summed, seven frames of new hybrid diffraction intensities, shown in Figs. 9(b)–9(h), were obtained.

    Figure 9. Noise and seven new diffraction patterns. (a) Poisson noise; (b)–(h) seven new hybrid diffraction patterns.

    With our suggested computing method, the spatial components of the two sample layers were computed from the hybrid diffraction patterns; the results are shown in Fig. 10, where Figs. 10(a) and 10(b) are the moduli of the computed spectral components of the two sample layers in log scale. By inverse Fourier transform, the complex transmission functions of the two sample layers were obtained: Figs. 10(c) and 10(d) show the moduli of the two layers, and Figs. 10(e) and 10(f) the corresponding phases. Figures 10(g) and 10(h) show the differences between the recovered moduli and their corresponding original values. The maximum difference is about 0.5%, which matches our expectation that the influence of noise can be effectively suppressed by shifting the sample to more positions and recording more diffraction intensities.

    Figure 10. Results reconstructed from noisy data. (a), (b) Recovered spectra in log scale; (c), (d) moduli of the reconstructed two-layer object; (e), (f) corresponding phases; (g), (h) amplitude differences from the original images.

    4. Conclusions

    The underlying physics and mathematics of the 3PIE imaging method were revealed by writing its diffraction intensities as a linear equation set. The spatial components of all sample slices can be determined analytically by solving this linear equation set with an efficient computing method. The robustness of the suggested method against noisy data was also studied, and it was demonstrated that the influence of detector noise can be effectively suppressed by simply dividing the many recorded intensities into groups and summing each group. Alongside the theoretical analysis, numerical simulations on a two-slice thick sample verified the feasibility of the proposed model and computing method. This study clarifies the mathematical uniqueness of the 3PIE algorithm for the first time and provides a fast computing method to obtain an analytical reconstruction from diffraction intensities, breaking the theoretical bottleneck that has hindered the application of 3PIE in optical measurement and metrology, where mathematical uniqueness and error analysis are crucial.

    References

    [1] J. X. Li, T. Y. Gan, B. J. Bai et al. Massively parallel universal linear transformations using a wavelength-multiplexed diffractive optical network. Adv. Photonics, 5, 016003(2023).

    [2] L. P. Lu, J. J. Li, Y. F. Shu et al. Hybrid brightfield and darkfield transport of intensity approach for high-throughput quantitative phase microscopy. Adv. Photonics, 4, 056002(2022).

    [3] Y. M. Xu, X. C. Pan, M. Sun et al. Single-shot ultrafast multiplexed coherent diffraction imaging. Photonics Res., 10, 1937(2022).

    [4] A. Saba, C. Gigli, A. B. Ayoub et al. Physics-informed neural networks for diffraction tomography. Adv. Photonics, 4, 066001(2022).

    [5] J. W. Miao, P. Charalambous, J. Kirz et al. Extending the methodology of X-ray crystallography to allow imaging of micrometre-sized non-crystalline specimens. Nature, 400, 342(1999).

    [6] J. M. Miao, T. Ishikawa, I. K. Robinson et al. Beyond crystallography: diffractive imaging using coherent X-ray light sources. Science, 348, 530(2015).

    [7] R. W. Gerchberg. A practical algorithm for the determination of phase from image and diffraction plane pictures. Optik, 35, 237(1972).

    [8] W. O. Saxton, J. M. Cowley. Computer techniques for image processing in electron microscopy. Phys. Today, 32, 74(1979).

    [9] J. R. Fienup. Reconstruction of an object from the modulus of its Fourier transform. Opt. Lett., 3, 27(1978).

    [10] J. R. Fienup. Phase retrieval algorithms: a comparison. Appl. Opt., 21, 2758(1982).

    [11] R. Hegerl, W. Hoppe. Phase evaluation in generalized diffraction (ptychography). Proceedings of the Fifth European Congress Electron Microscopy(1972).

    [12] J. M. Rodenburg. Ptychography and related diffractive imaging methods. Adv. Imag. Elect. Phys., 150, 87(2008).

    [13] P. D. Nellist, B. C. McCallum, J. M. Rodenburg. Resolution beyond the ‘information limit’ in transmission electron microscopy. Nature, 374, 630(1995).

    [14] H. N. Chapman. Phase-retrieval X-ray microscopy by Wigner-distribution deconvolution. Ultramicroscopy, 66, 153(1996).

    [15] J. M. Rodenburg, H. M. L. Faulkner. A phase retrieval algorithm for shifting illumination. Appl. Phys. Lett., 85, 4795(2004).

    [16] J. M. Rodenburg, A. C. Hurst, A. G. Cullis. Transmission microscopy without lenses for objects of unlimited size. Ultramicroscopy, 107, 227(2007).

    [17] P. Thibault, M. Dierolf, A. Menzel et al. High-resolution scanning X-ray diffraction microscopy. Science, 321, 379(2008).

    [18] J. M. Rodenburg, A. C. Hurst, A. G. Cullis et al. Hard X-ray lensless imaging of extended objects. Phys. Rev. Lett., 98, 034801(2007).

    [19] S. Cao, P. Kok, P. Li et al. Modal decomposition of a propagating matter wave via electron ptychography. Phys. Rev. A, 94, 063621(2016).

    [20] L. Valzania, T. Feurer, P. Zolliker et al. Terahertz ptychography. Opt. Lett., 43, 543(2018).

    [21] L. Rong, F. R. Tan, D. Y. Wang et al. High-resolution terahertz ptychography using divergent illumination and extrapolation algorithm. Opt. Laser Eng., 147, 106729(2021).

    [22] A. M. Maiden, J. M. Rodenburg. An improved ptychographical phase retrieval algorithm for diffractive imaging. Ultramicroscopy, 109, 1256(2009).

    [23] A. M. Maiden, M. J. Humphry, M. C. Sarahan et al. An annealing algorithm to correct positioning errors in ptychography. Ultramicroscopy, 120, 64(2012).

    [24] F. C. Zhang, I. Peterson, J. Vila-Comamala et al. Translation position determination in ptychographic coherent diffraction imaging. Opt. Express, 21, 13592(2013).

    [25] J. T. Dou, J. C. Wu, Y. M. Zhang et al. Accelerated convergence extended ptychographical iterative engine using multiple axial intensity constraints. Opt. Express, 28, 3587(2020).

    [26] L. Loetgering, M. Q. Du, K. S. E. Eikema et al. zPIE: an autofocusing algorithm for ptychography. Opt. Lett., 45, 2030(2020).

    [27] M. Pham, A. Rana, J. W. Miao et al. Semi-implicit relaxed Douglas-Rachford algorithm (sDR) for ptychography. Opt. Express, 27, 31246(2019).

    [28] J. M. Cowley, A. F. Moodie. The scattering of electrons by atoms and crystals. 1. A new theoretical approach. Acta Crystallogr., 10, 609(1957).

    [29] J. Kweon, S. J. Kang, Y. H. Kim et al. Impact of coronary lumen reconstruction on the estimation of endothelial shear stress: in vivo comparison of three-dimensional quantitative coronary angiography and three-dimensional fusion combining optical coherent tomography. Eur. Heart J. Card. Img., 19, 1134(2018).

    [30] M. Engel, L. Kasper, B. Wilm et al. T-Hex: tilted hexagonal grids for rapid 3D imaging. Magn. Reson. Med., 85, 2507(2021).

    [31] J. M. Rodenburg, A. M. Maiden, M. J. Humphry. Improvements in three dimensional imaging.

    [32] A. M. Maiden, M. J. Humphry, J. M. Rodenburg. Ptychographic transmission microscopy in three dimensions using a multi-slice approach. J. Opt. Soc. Am. A, 29, 1606(2012).

    [33] A. Suzuki, S. Furutaku, K. Shimomura. High-resolution multi-slice X-ray ptychography of extended thick objects. Phys. Rev. Lett., 112, 053903(2014).

    [34] K. Shimomura, A. Suzuki, M. Hirose et al. Precession X-ray ptychography with multislice approach. Phys. Rev. B, 91, 214114(2015).

    [35] L. Tian, L. Waller. 3D intensity and phase imaging from light field measurements in an LED array microscope. Optica, 2, 104(2015).

    [36] D. Goldberger, J. Barolak, C. G. Durfee et al. Three-dimensional single-shot ptychography. Opt. Express, 28, 18887(2020).

    [37] C. C. Chang, X. C. Pan, H. Tao et al. 3D single-shot ptychography with highly tilted illuminations. Opt. Express, 29, 30878(2021).

    [38] X. L. He, X. C. Pan, H. Tao et al. Generalized deterministic linear model for coherent diffractive imaging. AIP. Adv., 12, 065225(2022).
