• Photonics Research
  • Vol. 13, Issue 6, 1699 (2025)
Passive silicon nitride integrated photonics for spatial intensity and phase sensing of visible light
Christoph Stockinger1,2, Jörg S. Eismann1,2, Natale Pruiti3, Marc Sorel3, and Peter Banzer1,2,*
Author Affiliations
  • 1Institute of Physics, University of Graz, NAWI Graz, 8010 Graz, Austria
  • 2Christian Doppler Laboratory for Structured Matter Based Sensing, 8010 Graz, Austria
  • 3University of Glasgow, Glasgow G12 8LT, UK
    DOI: 10.1364/PRJ.553590
    Christoph Stockinger, Jörg S. Eismann, Natale Pruiti, Marc Sorel, and Peter Banzer, "Passive silicon nitride integrated photonics for spatial intensity and phase sensing of visible light," Photonics Res. 13, 1699 (2025).

    Abstract

    Phase is an intrinsic property of light, and thus a crucial parameter across numerous applications in modern optics. Various methods exist for measuring the phase of light, each presenting challenges and limitations—from the mechanical stability requirements of free-space interferometers to the computational complexity usually associated with methods based on spatial light modulators. Here, we utilize a passive photonic integrated circuit to spatially probe phase and intensity distributions of free-space light beams. Phase information is encoded into intensity through a set of passive on-chip interferometers, allowing conventional detectors to retrieve the phase profile of light through single-shot intensity measurements. Furthermore, we use silicon nitride as a material platform for the waveguide architecture, facilitating multi-spectral utilization in the visible spectral range. Our approach for fast, multi-spectral, and spatially resolved measurement of intensity and phase enables a wide variety of potential applications, ranging from microscopy to free-space optical communication.

    1. INTRODUCTION

    Light is one of the key ingredients in the evolution of modern technology. An important contribution to this progress is made by the field of integrated photonics, which is currently undergoing rapid development [1]. Integrated photonics offers numerous substantial advantages, first and foremost its immense potential for miniaturization, cost-effectiveness at large scales, and the ability to integrate complex optical functionalities on a single chip [2,3]. These developments have led to the widespread adoption of integrated photonics in various applications, including light sensing. In this regard, one key aspect of interest is phase-sensitive detection, which is also the central focus of this manuscript. Phase-sensitive detection of light has many applications, including but not limited to methods of microscopy [4] such as optical coherence tomography [5], optical communication [6,7], and the characterization of optical elements [8] ranging from simple contact lenses to cutting-edge high-NA microscope objectives [9] and EUV-lithography optics [10,11].

    With a few exceptions, such as Shack-Hartmann wavefront sensors [12], optical phase measurements predominantly rely on interferometry [13]. Methods are generally classified as either reference-free or reference-based, with the latter requiring an external reference signal that is coherent with the light being measured [14,15]. While reference-based methods offer benefits, they are not always applicable, and a detailed comparison is beyond the scope of this manuscript. We will focus exclusively on external-reference-free methods. In recent years, various integrated-photonics-based approaches have been developed for phase-resolved detection without the need for an external reference. One approach utilizes a tree-like mesh structure of Mach-Zehnder interferometers, operable by power minimization [16], which is easy to implement but requires precise design specifications. Alternatively, the photonic mesh throughput can be analyzed numerically [17], accommodating imperfect optical elements at the cost of computationally expensive data evaluation. While both approaches are promising, their sequential measurement routines limit them to light fields with slow temporal variations. Other methods use a pairwise measurement scheme, which reduces complexity and enables fast readout times. For instance, Ref. [18] describes a narrow-band phase-only detection scheme. Notably, all methods described above were realized in the near-infrared spectral range. While integrated photonics for visible light has existed for a long time, it faces challenges, particularly with tunable phase shifters [19]. In addition, only recent advancements in reducing waveguide losses have made high-performance, large-scale photonic integrated circuits operating in the visible spectral range feasible [20,21].

    In this manuscript, we propose and experimentally verify a passive silicon nitride (SiN) photonic integrated circuit for the phase-resolved detection of visible light. Fundamentally, such a circuit utilizes a fixed set of on-chip interferometers; measuring only its output intensities allows the intensity and relative phase of the incident light to be retrieved. The photonic chips route the light in a fully passive manner, eliminating the need for complex control electronics and enabling true single-shot measurements, with the speed limited only by the detector recording the output intensities of the chip. The data evaluation relies on a versatile calibration procedure that can handle even large deviations of chip elements from their design parameters, including the passive phase shifters. As a result, even though every element of the chip is subject to chromatic dispersion, this dispersion can be accounted for through calibration, making the approach suitable for operation at different wavelengths within the visible spectrum. Furthermore, the method employs a pairwise, external-reference-free measurement scheme, offering the potential for scaling to larger detection arrays.

    2. SENSOR DESIGN AND MEASUREMENT PRINCIPLE

    We start by introducing the actual integrated photonic sensor layout and the underlying design and measurement principle. A photonic chip with two inputs is shown in Fig. 1 (inputs marked by red circles). This chip was designed such that it ultimately enables retrieval of intensity and relative phase of a light field illuminating the input interface, by measuring only the intensities at the outputs (highlighted in green in Fig. 1) of the photonic circuit. The input and output free-space-to-chip interface is realized via standard grating couplers [22]. At the input, these gratings couple the y-polarized component of an incident free-space light field to a fundamental TE waveguide mode. At the output, the gratings convert the waveguide mode back into a free-space propagating light field, which can then be measured by external detectors.


    Figure 1. Optical microscopy image of the chip. Free-space light is coupled into waveguides by means of two grating couplers. Subsequently, the signal is processed by passive on-chip interferometers. The on-chip interferometers consist of passive phase shifters and Y-branch combiners. In each interferometer, the phase shifters introduce a fixed phase difference between the waveguide modes. This is achieved by asymmetrically varying the waveguide widths in the two waveguides before the modes are combined in the Y-branch. Finally, the processed light is coupled out of the chip via grating couplers again. To simplify experiments, the chip layout was designed with some distances intentionally increased, resulting in a total footprint of 3285 μm × 325 μm.

    On the chip, each input signal propagating along a waveguide connected to the input grating coupler is evenly split and routed to four separate waveguides using Y-branch splitters [23]. One of the four waveguides per input is then routed directly to an output (labeled out1 and out2 in Fig. 1). This output directly provides information on the intensity of the light at the corresponding input. The remaining waveguides are connected in pairs using a Y-branch combiner, leading to outputs labeled out3, out4, and out5. Two of these pairs additionally pass through a passive phase shifter before being eventually combined, which introduces a relative phase difference by altering the width of the waveguides [24]. Together with the Y-branch combiners, these phase shifters and waveguide pairs form passive sampling interferometers. The outputs of these interferometers enable the calculation of the relative phase of the input light.

    As previously mentioned, the intensities can easily be determined directly from the outputs out1 and out2, as these signals are linearly proportional to the input intensities. Obtaining the phase information, however, is more complex and requires a clear understanding of how the on-chip interferometers work. To gain this understanding, it is instructive to first consider a theoretical model that connects the electric fields at the inputs of the chip structure with the electric fields at its outputs. The equations connecting the output to the input fields read

    $E_1^{\mathrm{out}} = t_{11}E_1^{\mathrm{in}},$  (1)
    $E_2^{\mathrm{out}} = t_{22}E_2^{\mathrm{in}},$  (2)
    $E_j^{\mathrm{out}} = t_{1j}E_1^{\mathrm{in}} + t_{2j}E_2^{\mathrm{in}} = A_{1j}e^{i\alpha_{1j}} + A_{2j}e^{i\alpha_{2j}}, \qquad \text{for } j = 3, 4, 5,$  (3)

    where $E$ represents the complex-valued electric field amplitude of the waveguide mode, while the complex proportionality coefficients $t_{ij}$ link the field at input $i$ to the field at output $j$. The amplitudes and the relative phases of the proportionality coefficients are determined using a calibration method, as detailed in Appendix A. Furthermore, in Eq. (3), we perform a substitution to separate the complex-valued variables into real-valued amplitudes $A_{ij} = |t_{ij}||E_i^{\mathrm{in}}|$ and their corresponding phases $\alpha_{ij} = \tau_{ij} + \phi_i^{\mathrm{in}}$, with $\tau_{ij} = \arg(t_{ij})$ and $\phi_i^{\mathrm{in}} = \arg(E_i^{\mathrm{in}})$. The modulus squared of Eq. (3) yields a well-known equation in interferometry that clearly illustrates how the output intensity is modulated by the phase [13]:

    $I_j^{\mathrm{out}} = A_{1j}^2 + A_{2j}^2 + 2A_{1j}A_{2j}\cos(\alpha_{1j} - \alpha_{2j}).$  (4)
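    To make the model concrete, the following minimal Python sketch evaluates Eqs. (1)–(4) for a set of purely illustrative coefficients; the values of $t_{ij}$ below are hypothetical placeholders, not the calibrated parameters of the fabricated chip.

```python
import numpy as np

# Minimal sketch of the forward model, Eqs. (1)-(4). The coefficients t_ij
# below are hypothetical placeholders; in practice they are obtained from
# the calibration described in Appendix A.
t = {
    (1, 1): 0.9,                                        # direct tap to out1
    (2, 2): 0.85,                                       # direct tap to out2
    (1, 3): 0.4, (2, 3): 0.4 * np.exp(0.0j),            # interferometer out3
    (1, 4): 0.4, (2, 4): 0.4 * np.exp(1j * np.pi / 2),  # interferometer out4
    (1, 5): 0.4, (2, 5): 0.4 * np.exp(3j * np.pi / 4),  # interferometer out5
}

def output_intensities(E1_in, E2_in):
    """Return |E_j^out|^2 for j = 1..5 given the complex input fields."""
    E_out = {1: t[(1, 1)] * E1_in, 2: t[(2, 2)] * E2_in}
    for j in (3, 4, 5):
        E_out[j] = t[(1, j)] * E1_in + t[(2, j)] * E2_in
    return {j: abs(E) ** 2 for j, E in E_out.items()}

# Example: equal input amplitudes with a relative phase of 0.3 rad
print(output_intensities(1.0, np.exp(0.3j)))
```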

    We can now rearrange this equation to obtain an expression for the relative phase:

    $\alpha_{1j} - \alpha_{2j} = \pm\arccos\!\left[\dfrac{I_j^{\mathrm{out}} - \left(A_{1j}^2 + A_{2j}^2\right)}{2A_{1j}A_{2j}}\right] + 2\pi n,$  (5)

    with $\alpha_{1j} - \alpha_{2j} = \tau_{1j} - \tau_{2j} + \phi_1^{\mathrm{in}} - \phi_2^{\mathrm{in}}$, and $n$ an integer number. We drop the term $2\pi n$, since the $2\pi$ ambiguity is a common issue for interferometric phase sensors and remains unresolved in our system as well. Equation (5) shows that the relative phase $\alpha_{1j} - \alpha_{2j}$ can be computed if the output intensity of the interferometers and the amplitude factors $A_{ij}$ are known. The values of $A_{ij}$ can be derived using Eqs. (1) and (2), along with the intensities at outputs out1 and out2, given the proportionality $I \propto |E|^2$. As mentioned earlier, the coefficients $t_{ij}$ are determined through a calibration process. This calibration procedure does not determine the exact phase of the individual coefficients; it only provides information on their relative phase. However, since Eq. (5) exclusively depends on the relative phase of the coefficients, this information is sufficient.

    The final challenge we need to address in determining the phase of the free-space light from the output signals is the fact that Eq. (5) provides two possible solutions. To identify the correct sign of the inverse trigonometric function, additional measurements need to be performed with a known relative phase shift applied to the input signals of the interferometers. In our case, this is done through the use of multiple interferometers with fixed phase shifters. The correct solution can then be found as the one that is consistent across the different interferometers. This phase reconstruction technique is often used in signal processing and is known as the I/Q (in-phase and quadrature) technique [25]. Theoretically, it would suffice to have two interferometers with a non-zero difference in their preceding phase shifts. We, however, opt for three interferometers, as this approach adds redundancy to the system and enhances measurement accuracy. Furthermore, the phase shifters are designed to introduce relative phase differences of $\Delta\phi_1^{\mathrm{ps}} = 0$, $\Delta\phi_2^{\mathrm{ps}} = \pi/2$, and $\Delta\phi_3^{\mathrm{ps}} = 3\pi/4$ (see Appendix C for design details). The design values for the phase shifters ensure balanced sensitivity of the device across all possible phase scenarios. However, the phase shifts introduced on the chip can significantly deviate from their design values. Therefore, the actual phase shifts are determined through calibration and are described in terms of the relative phases of the proportionality coefficients, as explained in Appendix A.
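    For a single interferometer, Eq. (5) and the associated sign ambiguity can be sketched as follows; the naming and the noise guard via clipping are our own and are not taken from the paper. Resolving the ambiguity across the three interferometers is discussed in Section 4.

```python
import numpy as np

# Sketch of Eq. (5) for one interferometer j. Clipping the arccos argument
# is our own guard against measurement noise, not part of the paper's
# formalism.
def phase_candidates(I_j, A_1j, A_2j, dtau_j):
    """Return both candidate values of phi_1^in - phi_2^in, wrapped to (-pi, pi]."""
    arg = (I_j - (A_1j**2 + A_2j**2)) / (2 * A_1j * A_2j)
    arg = np.clip(arg, -1.0, 1.0)
    base = np.arccos(arg)
    # alpha_1j - alpha_2j = +/- arccos(...); subtracting the calibrated
    # coefficient phase dtau_j = tau_1j - tau_2j leaves the input phase.
    return [float(np.angle(np.exp(1j * (s * base - dtau_j)))) for s in (+1, -1)]
```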

    3. SETUP

    To investigate the proposed photonic structure, an experimental setup is required that allows for controlled illumination of the input section of the photonic circuit while simultaneously monitoring the intensity of the out-coupled light at the output section. A schematic of the key components of the experimental setup is shown in Fig. 2.


    Figure 2. Illustration of the experimental setup. A Gaussian beam is weakly focused on the input section of the chip structure. The light is coupled to waveguide modes and subsequently processed by the on-chip architecture. The transmitted intensities of the outputs are monitored by means of an imaging system, which consists of a camera and an objective.

    For the experiments, we analyze light emitted by a fiber-coupled laser diode (center wavelength of λ = 658 nm). To maximize the efficiency of the grating couplers, which are designed to couple only the y-polarized component of the incident light field, the polarization state is adjusted using a half-wave plate and a linear polarizer. A lens with a focal length of 400 mm then weakly focuses the light onto the input region of the chip. The lens can be moved along the optical axis, enabling control of the beam's size and wavefront curvature at the chip position. The chip is mounted on a four-axis stage, allowing linear movement in three dimensions as well as adjustment of the angle of incidence of the beam with respect to the chip plane. The beam impinges on the free-space interface at an angle of 12 deg with respect to the surface normal, which is the angle of incidence the grating couplers are designed for. The output section of the chip is imaged onto a camera by means of an imaging system, allowing for off-chip monitoring of the chip's output intensities.

    4. RESULTS AND DISCUSSION

    After successfully calibrating the photonic chip using the procedure described in Appendix A, it can be used to measure the intensity and phase of unknown free-space light fields that impinge on the input grating couplers of the system. This requires only a single intensity measurement at the outputs of the chip structure. The relative intensity at the two inputs is directly obtained using Eqs. (1) and (2). To determine the relative phase, Eq. (5) is used. As previously mentioned, each interferometer provides two solutions. Theoretically, one could now search for a common solution across all interferometers. In experiments, however, it is not realistic to obtain exactly the same solution at the different interferometers. Instead, one searches for the solutions of the interferometers that are closest to each other, e.g., by selecting the combination of retrieved phase values that produces the smallest standard deviation. The average of the selected phase values of the individual interferometers is finally used as the measured relative phase.
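    One possible implementation of this selection step is sketched below. We score each branch combination by its circular spread rather than a plain standard deviation in order to handle the 2π wrap; this is our own design choice for the sketch, not the authors' stated procedure.

```python
import numpy as np
from itertools import product

def retrieve_phase(candidates):
    """
    candidates: dict {j: [phi_plus, phi_minus]} holding both arccos branches
    of Eq. (5) for the three interferometers. Returns the circular mean of
    the most consistent branch combination.
    """
    best_phi, best_spread = None, np.inf
    for combo in product(*candidates.values()):
        vec = np.exp(1j * np.array(combo))
        spread = 1.0 - abs(vec.mean())      # 0 when all phases agree
        if spread < best_spread:
            best_phi, best_spread = float(np.angle(vec.mean())), spread
    return best_phi

# Example with hypothetical candidate values (radians)
cands = {3: [0.52, -0.52], 4: [0.50, 2.10], 5: [0.55, -1.30]}
print(retrieve_phase(cands))   # ~0.52, the mutually consistent branch
```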

    To demonstrate the intensity and phase measurement, we scan the chip through weakly focused Gaussian beams of different parameters. These scan measurements are well suited to illustrate the operation of the sensor, since both the relative phase and the intensity of the input signals change with the position of the beam. The output intensities of the chip are recorded at each scan position individually. From the recorded output signals, the intensity and relative phases at the inputs are determined. Figure 3(a) shows a scan measurement of a Gaussian beam featuring a 1/e² radius of w = 0.35 mm and a phase front curvature radius of R = 140 mm at the chip surface. The relative phase and intensity values are plotted as a function of the relative shift of the incident beam with respect to the center of the input region of the chip. The theoretical values were derived from beam parameters obtained by fitting the output intensity data from the complete scan measurement. The measured intensity reveals, as expected, the familiar Gaussian shape, while the measured relative phase is more difficult to interpret. The relative phase of two points in space can be understood as a finite-difference sampling of the spatial gradient of the phase distribution of the light beam. Neglecting the propagation term and the Gouy phase, the spatial phase distribution of a paraxial Gaussian beam reads $\phi = \frac{k r^2}{2R}$, where $k$ is the wave number and $r$ the radial distance to the beam center. Its spatial gradient is a linear function of the radial position $r$, with a slope inversely proportional to the radius of the phase front curvature $R$, explaining the linear trend seen in the measurement. This linear behavior of the relative phase is confirmed by the measurements shown in Fig. 4, where the relative phase of Gaussian beams with phase fronts of different curvatures is plotted as a function of the relative shift of the beam with respect to the center of the coupling region of the chip.
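    The linear trend can be made explicit with a short calculation: for two input couplers separated by a fixed distance $d$ and centered at a lateral offset $x$ from the beam axis, the quadratic phase profile gives

    $\Delta\phi(x) = \dfrac{k}{2R}\left[\left(x + \dfrac{d}{2}\right)^2 - \left(x - \dfrac{d}{2}\right)^2\right] = \dfrac{k\,d}{R}\,x,$

    i.e., the measured relative phase grows linearly with the beam shift $x$, with a slope $kd/R$ that is inversely proportional to the wavefront curvature radius $R$.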


    Figure 3. Relative intensity and phase of a Gaussian beam as a function of the relative shift of the beam center with respect to the center of the input of the chip. (a) Measurements are performed with a beam at the design wavelength of the waveguides, λD = 658 nm. (b) Measurements are taken using a beam with a wavelength λOD = 580 nm, far from the design wavelength of the waveguides.


    Figure 4. Relative phase of Gaussian beams of different wavefront curvatures R. All measurements were conducted at the design wavelength of λD = 658 nm.

    Note that for all the results presented, each amplitude and phase value is derived from an individual measurement. The data is plotted with a common x-axis to simplify interpretation and to illustrate the underlying systematic trend. However, the results presented can be considered individual measurements that demonstrate the functionality of the circuit across a variety of different scenarios. The calibration process not only compensates for manufacturing inaccuracies, but also accounts for the chromatic behavior of the on-chip components. As a result, the chips can be effectively used at wavelengths far from their design value, once calibrated for the desired wavelength. It should be noted that the chip can be calibrated for operation at any wavelength, provided the waveguides and grating couplers are sufficiently efficient. Additionally, the waveguides must remain single-mode, as this is essential for the proper functioning of the Y-branches. If higher-order modes are excited, the theoretical model discussed for the chip is no longer applicable, causing the measurement principle to fail.

    To showcase the broadband capabilities of the presented chip design, scan measurements of non-collimated Gaussian beams were performed at a wavelength of λOD = 580 nm, significantly different from the design wavelength of λD = 658 nm. Figure 3(b) shows a scan measurement of a Gaussian beam featuring a 1/e² radius of w = 0.32 mm and a phase front curvature radius of R = 100 mm. As with the previous measurements at the design wavelength, there is excellent agreement between the experimental results and theory.

    5. FIRST STEPS TOWARDS LARGER STRUCTURES

    After having demonstrated experimentally the capabilities of the passive photonic circuit with respect to phase and intensity measurements, we now discuss a chip architecture featuring more input pixels and show corresponding measurement results. Multipixel architectures enable extended functionality, allowing the extraction of substantial information about the incident light field even with a very limited number of pixels. To showcase the extended capabilities of multipixel architectures, we designed and fabricated a photonic chip featuring a five-pixel input interface. A microscope image of the chip is shown in Fig. 5(a). Again, focusing grating couplers are used as free-space interfaces. The five couplers are arranged in a square, with four pixels at the corners and a fifth in the center. Each of the corner pixels is connected to the central pixel using a phase and intensity measuring unit, similar to the on-chip architecture discussed earlier. The pairwise phase and intensity measurements in the five-pixel chip follow the same principles as the previously discussed two-pixel architectures. This similarity allows the established calibration method to be applied again without any conceptual modifications. Moreover, this specific pixel arrangement not only facilitates the measurement of the relative phase and intensity between the corner pixels and the central pixel, but also enables the reconstruction of the parameters of a paraxial Gaussian beam through a single-shot measurement of the output intensities. A detailed description of the reconstruction of the beam parameters from measured intensity and phase data is provided in Appendix B.


    Figure 5. (a) Optical microscope image of the chip featuring a five-pixel input interface for demonstration of scalability. The input interface consists of five grating couplers functioning as input pixels, arranged in a square configuration with four corner pixels and one central pixel. The on-chip architecture is designed such that each corner pixel is connected to the central pixel via a phase and intensity measurement unit. This design facilitates the complete characterization of a Gaussian beam and its parameters through a single-shot intensity measurement at the outputs. The chip layout was designed to simplify experiments, with some distances intentionally increased, resulting in a total footprint of 6650 μm × 1450 μm. (b) Retrieved parameters of a Gaussian beam as a function of the relative displacement of the center of the beam with respect to the center of the input section.

    Figure 5(b) shows retrieved parameters of a Gaussian beam featuring a 1/e² radius of w = 0.23 mm and a phase front curvature radius of R = 91 mm at the chip surface. We analyze the spot size, focal distance, tilt angles in the x- and y-directions, as well as the beam shift in the x- and y-directions. The parameters are presented for various y-positions of the beam relative to the center of the chip's input. It can be seen that the reconstruction of the beam parameters performs well for slight misalignment between the beam and the chip. Although a decrease in accuracy is evident with increasing misalignment, it is important to note that in the data shown, some pixels receive less than 1/e² of the maximum intensity at a misalignment of 150 μm. Reduced input intensity leads to a decreased signal-to-noise ratio when measuring the output signals, resulting in less accurate phase and parameter reconstruction. Nevertheless, the data demonstrates that the parameters of a Gaussian beam can be accurately determined using a single-shot intensity measurement. Furthermore, the data indicates a significant level of insensitivity of the method with respect to misalignment.

    6. CONCLUSION

    A photonic integrated circuit capable of spatially resolving phase and intensity of visible free-space light has been proposed and experimentally demonstrated. The chip utilizes a fixed set of passive on-chip interferometers, whose output intensity measurements enable the retrieval of the intensity and relative phase information of the incident light field. The capabilities of the circuit have been demonstrated through scan measurements of uncollimated Gaussian beams of varying parameters. Additionally, the potential for broadband application of the structure has been showcased through measurements conducted at different wavelengths, specifically λD=658  nm and λOD=580  nm. Finally, first steps toward larger structures were discussed. A chip featuring five input pixels was presented, and its extended functionality was demonstrated by reconstructing all parameters of a paraxial Gaussian beam from single-shot measurements of its output intensities.

    Notably, recent advancements in integrated photonics could be incorporated into the presented chip design to enhance functionality and integration. Potential modifications include a more expansive and generic input interface, polarization splitting grating couplers [26,27] for resolving also light’s polarization, and on-chip photodiodes [28].

    The presented approach and the actual integrated photonic system constitute a powerful, versatile, and small-footprint addition to the existing toolboxes of light field metrology.

    Acknowledgment

    The financial support by the Austrian Federal Ministry of Labor and Economy, the National Foundation for Research, Technology and Development, and the Christian Doppler Research Association is gratefully acknowledged.

    APPENDIX A: CALIBRATION OF THE CHIP

    Before conducting measurements, it is essential to determine all the unknown parameters of the chip elements through a precise calibration procedure. The calibration is based on illuminating the chip with known light fields; the output of the chip then provides information about the behavior of the chip components. From this information, we determine proportionality coefficients of the form $t_{ij} = |t_{ij}|e^{i\tau_{ij}}$ that account for both losses and phase shifts introduced as light propagates from input $i$ to output $j$. Using these proportionality coefficients, we can derive equations that link the input fields to the output fields of the chip:

    $E_1^{\mathrm{out}} = t_{11}E_1^{\mathrm{in}},$  (A1)
    $E_2^{\mathrm{out}} = t_{22}E_2^{\mathrm{in}},$  (A2)
    $E_j^{\mathrm{out}} = t_{1j}E_1^{\mathrm{in}} + t_{2j}E_2^{\mathrm{in}}, \qquad \text{for } j = 3, 4, 5,$  (A3)

    whereby the amplitudes of the complex coefficients $|t_{ij}|$ describe all the losses that are introduced by imperfections of components. The phases of the coefficients, on the other hand, describe the phase shifts introduced in the waveguides. These include parasitic phase shifts resulting from manufacturing inaccuracies, as well as intentional phase shifts introduced by the phase shifters. The calibration process for determining $t_{ij}$ consists of two steps for the selected chip design.

    In the first step, we determine the amplitudes of the $t_{ij}$. This is done by individually illuminating the input couplers with light of uniform intensity distribution. If only one input is exposed to light, the output intensities provide information on the relative power throughput from the exposed input to the outputs. The normalized output intensities thus directly yield the amplitudes of the proportionality coefficients.

    In the second step, we aim to determine the phase information of the proportionality coefficients. This is done by illuminating the input section of the chip with a light field of known amplitude and phase distribution. All inputs are now exposed simultaneously, allowing light to enter both input arms of the on-chip interferometers. The relative phase of the waveguide signals determines the output intensity of the interferometers. Since the phase distribution of the input light is known, the output signals of the interferometers reveal the additional relative phase shifts introduced by the chip. However, measuring just a single amplitude and phase scenario at the input pixels is not enough to determine the phase shifts accurately, as there is a sign ambiguity when extracting the phase from the interference signal. Theoretically, this issue can be resolved by conducting at least two measurements with different scenarios at the inputs. In practice, many more than two measurement points are collected, primarily to enhance the accuracy of the calibration and to account for the additional unknown parameters that need to be determined in this process. There are a variety of options for generating different scenarios at the inputs. Our choice is to scan a non-collimated Gaussian beam, as it features a curvature in its phase front and therefore provides different input phases for different scan positions. This allows us to describe the entire system using Eqs. (A1)–(A3), with $E_1^{\mathrm{in}}$ and $E_2^{\mathrm{in}}$ as the electric field of the Gaussian beam at the positions of the input grating couplers. The theoretical model can then be fitted to the measured data, using the relative phases of the proportionality coefficients $\Delta\tau_j = \tau_{1j} - \tau_{2j}$, among other variables, as free parameters. The relative phases $\Delta\tau_j$ can therefore simply be extracted from the fitted model. As described above, only the phase differences of the coefficients that describe the behavior of the on-chip interferometers are determined. However, this is sufficient because only these relative phases are needed for subsequent measurements with the chip.
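    For illustration, the sketch below shows how such a fit could be set up for a single interferometer output. The authors report using MATLAB's least-squares routines; here SciPy is used instead, the relative input phase is modeled with the simple linear expression k d x / R, and the beam curvature R, coupler separation d, and starting values are hypothetical placeholders.

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative calibration fit for one interferometer output j.
# Hypothetical values: R (wavefront curvature of the calibration beam)
# and d (separation of the two input couplers) are placeholders.
k = 2 * np.pi / 658e-9      # wavenumber at the design wavelength
R = 0.14                    # assumed curvature radius of the scan beam [m]
d = 50e-6                   # assumed input-coupler separation [m]

def model(params, x, amp1, amp2):
    """Predicted output intensity |t_1j E_1^in + t_2j E_2^in|^2 at scan position x."""
    a1, a2, dtau = params            # effective amplitudes and relative coefficient phase
    dphi_in = k * d * x / R          # known relative input phase from the scanned beam
    E_out = a1 * amp1 + a2 * amp2 * np.exp(1j * (dphi_in + dtau))
    return np.abs(E_out) ** 2

def calibrate(x, amp1, amp2, I_meas):
    """Fit amplitudes and the relative coefficient phase to measured intensities."""
    res = least_squares(lambda p: model(p, x, amp1, amp2) - I_meas,
                        x0=[0.5, 0.5, 0.0])
    return res.x                     # (a1, a2, dtau_j)
```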

    To ensure reproducibility and determine systematic errors, this calibration procedure was repeated a total of eight times with beams of four different parameter sets. This analysis revealed that the on-chip phase shifts take effectively random values, deviating significantly from the design parameters. However, the calibration procedure proved to be very robust, with the standard deviation of the determined phase values across different calibration scenarios ranging only from $\sigma_{\Delta\tau,\mathrm{min}} = 0.010$ rad to $\sigma_{\Delta\tau,\mathrm{max}} = 0.018$ rad.

    It should be noted that a standard least-squares fitting method from MATLAB was used. Furthermore, the calibration was carried out using modular code, which allows scalable logic and can therefore be easily extended to chip designs with arbitrary pixel arrangements and larger pixel counts.

    APPENDIX B: CHARACTERIZING A PARAXIAL GAUSSIAN BEAM FROM PHASE AND INTENSITY INFORMATION OF FIVE PIXELS

    Before we proceed with the calculation of the beam parameters from intensity and phase data obtained with the five-pixel architecture, it is important to first discuss the simplifications applied in the calculation.

    First, it should be noted that, due to the design of the grating coupler, the chip is operated at an angle to the optical axis. This means that the chip is calibrated to a specific angle of incidence, so that if a plane wave strikes the chip at this angle, the phase measurement would yield zero relative phase between all inputs. However, it also means that different input pixels are situated in different planes along the direction of beam propagation. As the wavefront curvature and the Gouy phase of non-collimated Gaussian beams change during propagation, this results in different measured phase values depending on the position of the pixel. Nevertheless, since we only consider very weakly focused beams that are examined relatively far from their focus, and given the small angle of incidence and distance between the grating couplers, this phase contribution is assumed to be negligible and is therefore not treated in our model. Additionally, we only consider beams that hit the chip’s input at an angle that deviates only slightly from the calibration angle. Consequently, we can assume that the change in the distance of the pixels to the optical axis due to tilt is negligible. With all these assumptions, we can build a mathematical model as if the chip surface is placed perpendicular to the optical axis, simplifying calculations significantly. Note that all these assumptions introduce systematic errors in the reconstruction of beam parameters. Nevertheless, the model performs well enough to demonstrate the reconstruction of paraxial Gaussian beams.

    The mathematical description begins with a very simple representation of the electric field of a paraxial Gaussian beam at the position of the chip on the optical axis. Here we neglect polarization and the time-harmonic oscillation, leading to the following expression [29]:

    $E(r,z) = \dfrac{w_0}{w}E_0\, e^{-r^2/w^2}\, e^{i\left(kz + \frac{kr^2}{2R} - \Psi\right)},$  (B1)

    with $E_0$ as the amplitude at the beam origin, $r$ the radial distance to the center of the beam, $w$ the beam size at the position of the chip, $R$ the radius of the wavefront curvature at the position of the chip, $\Psi$ the Gouy phase, $w_0$ the spot size at the focal point, and $k$ the wavenumber.

    We can now separate this equation into two parts to analyze the amplitude and phase independently. Starting with the amplitude part, we can express the measured intensity at input $i$ using the proportionality $I \propto |E|^2$:

    $I_i^{\mathrm{in}} \propto \left(\dfrac{w_0}{w}E_0\right)^2 e^{-2(r_i^{\mathrm{in}})^2/w^2}.$  (B2)

    The measured intensities at the positions of the input pixels provide information about the relative position of the beam in the plane of the chip and its size $w$. To retrieve these parameters, we first express the radial distance $r$ at the position of the input pixel $i$ in Cartesian coordinates:

    $(r_i^{\mathrm{in}})^2 = (x_i^{\mathrm{in}} - x_0)^2 + (y_i^{\mathrm{in}} - y_0)^2,$  (B3)

    with $x_0$ and $y_0$ as the displacements of the beam with respect to the center of the input section of the chip. Next, we rotate the coordinate system by 45 deg, as illustrated in Fig. 6, which will later lead to a significant simplification of the calculations. The new coordinates read as follows:

    $\tilde{x}_i^{\mathrm{in}} = \dfrac{1}{\sqrt{2}}\left(x_i^{\mathrm{in}} + y_i^{\mathrm{in}}\right),$  (B4)
    $\tilde{y}_i^{\mathrm{in}} = \dfrac{1}{\sqrt{2}}\left(y_i^{\mathrm{in}} - x_i^{\mathrm{in}}\right).$  (B5)


    Figure 6. Illustration of the coordinate transformation.

    By substituting the new coordinates into Eq. (B2), we can derive the following two equations:

    $\dfrac{I_1^{\mathrm{in}}}{I_3^{\mathrm{in}}} = e^{\frac{2}{w^2}\left[(\tilde{y}_3^{\mathrm{in}} - \tilde{y}_0)^2 - \tilde{y}_0^2\right]},$  (B6)
    $\dfrac{I_1^{\mathrm{in}}}{I_5^{\mathrm{in}}} = e^{\frac{2}{w^2}\left[(\tilde{y}_5^{\mathrm{in}} - \tilde{y}_0)^2 - \tilde{y}_0^2\right]}.$  (B7)

    Isolating $w$ in Eq. (B6) and inserting it into Eq. (B7), as well as using $\tilde{y}_3^{\mathrm{in}} = -\tilde{y}_5^{\mathrm{in}}$, we find an expression for the displacement in the $\tilde{y}$-direction:

    $\tilde{y}_0 = \dfrac{\tilde{y}_3^{\mathrm{in}}\left[\ln\!\left(I_1^{\mathrm{in}}/I_5^{\mathrm{in}}\right) - \ln\!\left(I_1^{\mathrm{in}}/I_3^{\mathrm{in}}\right)\right]}{2\left[\ln\!\left(I_1^{\mathrm{in}}/I_5^{\mathrm{in}}\right) + \ln\!\left(I_1^{\mathrm{in}}/I_3^{\mathrm{in}}\right)\right]}.$  (B8)

    Repeating this procedure with the intensities at inputs 1, 2, and 4 yields the displacement in the $\tilde{x}$-direction:

    $\tilde{x}_0 = \dfrac{\tilde{x}_2^{\mathrm{in}}\left[\ln\!\left(I_1^{\mathrm{in}}/I_4^{\mathrm{in}}\right) - \ln\!\left(I_1^{\mathrm{in}}/I_2^{\mathrm{in}}\right)\right]}{2\left[\ln\!\left(I_1^{\mathrm{in}}/I_4^{\mathrm{in}}\right) + \ln\!\left(I_1^{\mathrm{in}}/I_2^{\mathrm{in}}\right)\right]}.$  (B9)

    Knowing $\tilde{x}_0$ and $\tilde{y}_0$, we can now calculate the radial distance $r_i^{\mathrm{in}}$ of each pixel to the beam center. Furthermore, by rearranging Eq. (B6) we can find an expression for the beam size $w$, measured along the $\tilde{y}$-direction:

    $w_{\tilde{y}} = \sqrt{\dfrac{2\left[(r_3^{\mathrm{in}})^2 - (r_1^{\mathrm{in}})^2\right]}{\ln\!\left(I_1^{\mathrm{in}}/I_3^{\mathrm{in}}\right)}}.$  (B10)

    The same procedure can be applied to the equations for the intensities of inputs 1 and 4 to obtain an expression for the beam size, measured along the $\tilde{x}$-direction:

    $w_{\tilde{x}} = \sqrt{\dfrac{2\left[(r_4^{\mathrm{in}})^2 - (r_1^{\mathrm{in}})^2\right]}{\ln\!\left(I_1^{\mathrm{in}}/I_4^{\mathrm{in}}\right)}}.$  (B11)

    Using the measured intensity values, we can determine both the in-plane displacement and the size of the beam at the position of the chip.
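    A compact sketch of this amplitude-based step, following Eqs. (B8)–(B11) as reconstructed above, is given below; the pixel positions and their numerical values are hypothetical placeholders for the actual coupler layout.

```python
import numpy as np

# Sketch of the amplitude part of the reconstruction, Eqs. (B8)-(B11).
# Assumed (placeholder) coupler positions in the rotated frame:
# pixel 1 at the origin, pixel 3 at (0, +y3), pixel 5 at (0, -y3),
# pixel 2 at (+x2, 0), pixel 4 at (-x2, 0).
def beam_center_and_size(I, y3=75e-6, x2=75e-6):
    """I: dict of measured input intensities keyed by pixel index 1..5."""
    l13, l15 = np.log(I[1] / I[3]), np.log(I[1] / I[5])
    l12, l14 = np.log(I[1] / I[2]), np.log(I[1] / I[4])
    # Displacement of the beam center, Eqs. (B8) and (B9)
    y0 = y3 * (l15 - l13) / (2 * (l15 + l13))
    x0 = x2 * (l14 - l12) / (2 * (l14 + l12))
    # Squared radial distances of pixels 1, 3, and 4 to the beam center
    r1_sq = x0**2 + y0**2
    r3_sq = x0**2 + (y3 - y0)**2
    r4_sq = (-x2 - x0)**2 + y0**2
    # Beam size along the two rotated axes, Eqs. (B10) and (B11)
    w_y = np.sqrt(2 * (r3_sq - r1_sq) / l13)
    w_x = np.sqrt(2 * (r4_sq - r1_sq) / l14)
    return x0, y0, w_x, w_y
```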

    Next, we aim to determine the curvature of the wavefront and the tilt of the beam, which requires a closer examination of the measured relative phase values. The relative phase between two inputs $i$ and $j$ can be expressed as follows:

    $\Delta\phi_{ij}^{\mathrm{in}} = \dfrac{k\left[(r_i^{\mathrm{in}})^2 - (r_j^{\mathrm{in}})^2\right]}{2R} \pm \phi_{\mathrm{tilt}},$  (B12)

    with $\Delta\phi_{ij}^{\mathrm{in}} = \phi_i^{\mathrm{in}} - \phi_j^{\mathrm{in}}$. We assume that both inputs experience the same Gouy phase. Furthermore, we reformulate the propagation terms of the relative phase into a phase term $\phi_{\mathrm{tilt}}$ that appears when a tilt between the chip surface normal and the optical axis is present. Initially, we incorporate both signs for the tilt term in Eq. (B12), as the sign of the tilt term depends on the relative positions of the two couplers being analyzed. By adding Eq. (B12) for the relative phases $\Delta\phi_{13}^{\mathrm{in}}$ and $\Delta\phi_{15}^{\mathrm{in}}$ and isolating $R$, we can derive a representation for the radius of the wavefront curvature in the $\tilde{y}$-direction:

    $R_{\tilde{y}} = \dfrac{k\left\{2(r_1^{\mathrm{in}})^2 - \left[(r_3^{\mathrm{in}})^2 + (r_5^{\mathrm{in}})^2\right]\right\}}{2\left(\Delta\phi_{13}^{\mathrm{in}} + \Delta\phi_{15}^{\mathrm{in}}\right)}.$  (B13)

    Similarly, we can perform the same process for the relative phases $\Delta\phi_{12}^{\mathrm{in}}$ and $\Delta\phi_{14}^{\mathrm{in}}$ to derive an expression for the radius of the wavefront curvature in the $\tilde{x}$-direction:

    $R_{\tilde{x}} = \dfrac{k\left\{2(r_1^{\mathrm{in}})^2 - \left[(r_2^{\mathrm{in}})^2 + (r_4^{\mathrm{in}})^2\right]\right\}}{2\left(\Delta\phi_{12}^{\mathrm{in}} + \Delta\phi_{14}^{\mathrm{in}}\right)}.$  (B14)

    Using an analogous procedure, we can ultimately derive the representations for the tilt terms associated with tilting around the $\tilde{x}$- and $\tilde{y}$-axes:

    $\phi_{\mathrm{tilt}}^{\tilde{x}} = \dfrac{1}{2}\left(\Delta\phi_{15}^{\mathrm{in}} - \Delta\phi_{13}^{\mathrm{in}}\right) + \dfrac{\left[(r_3^{\mathrm{in}})^2 + (r_5^{\mathrm{in}})^2\right]\left[\Delta\phi_{13}^{\mathrm{in}} + \Delta\phi_{15}^{\mathrm{in}}\right]}{2\left[2(r_1^{\mathrm{in}})^2 - (r_3^{\mathrm{in}})^2 - (r_5^{\mathrm{in}})^2\right]},$  (B15)

    $\phi_{\mathrm{tilt}}^{\tilde{y}} = \dfrac{1}{2}\left(\Delta\phi_{14}^{\mathrm{in}} - \Delta\phi_{12}^{\mathrm{in}}\right) + \dfrac{\left[(r_2^{\mathrm{in}})^2 + (r_4^{\mathrm{in}})^2\right]\left[\Delta\phi_{12}^{\mathrm{in}} + \Delta\phi_{14}^{\mathrm{in}}\right]}{2\left[2(r_1^{\mathrm{in}})^2 - (r_2^{\mathrm{in}})^2 - (r_4^{\mathrm{in}})^2\right]}.$  (B16)

    The tilt angles between the structure and the incident beam, associated with the measured phase values, can subsequently be determined using straightforward geometric considerations.
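    The curvature retrieval of Eqs. (B13) and (B14) translates into a few lines of code; the sketch below omits the tilt terms of Eqs. (B15) and (B16) and uses our own variable naming.

```python
import numpy as np

# Sketch of the curvature retrieval, Eqs. (B13) and (B14). dphi[j] is the
# measured relative phase between the central pixel 1 and corner pixel j;
# r_sq[j] is the squared distance of pixel j to the beam center obtained
# from the amplitude step.
def wavefront_curvature(dphi, r_sq, wavelength=658e-9):
    k = 2 * np.pi / wavelength
    R_y = k * (2 * r_sq[1] - (r_sq[3] + r_sq[5])) / (2 * (dphi[3] + dphi[5]))
    R_x = k * (2 * r_sq[1] - (r_sq[2] + r_sq[4])) / (2 * (dphi[2] + dphi[4]))
    return R_x, R_y
```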

    Above, we discussed the parameters of a paraxial Gaussian beam at the chip position. However, for a comprehensive characterization of the beam, it is also interesting to know its focal spot size $w_0$ and the distance $z$ to its focal plane. These two parameters can be calculated using the previously determined beam size and wavefront curvature. To accomplish this, we first describe the beam size and wavefront curvature with the following expressions [29]:

    $R(z) = z\left[1 + \left(\dfrac{z_R}{z}\right)^2\right],$  (B17)
    $w(z) = w_0\sqrt{1 + \left(\dfrac{z}{z_R}\right)^2},$  (B18)

    where $z_R$ is the Rayleigh range, which is defined as follows:

    $z_R = \dfrac{\pi w_0^2 n}{\lambda},$  (B19)

    with the refractive index of the medium $n$ and the wavelength $\lambda$. We now substitute the averages of the previously determined wavefront curvatures and beam sizes, $R_c = \frac{R_{\tilde{x}} + R_{\tilde{y}}}{2}$ and $w_c = \frac{w_{\tilde{x}} + w_{\tilde{y}}}{2}$. Transforming and substituting Eqs. (B17) and (B18), we can then derive expressions for $z$ and $w_0$:

    $z = \dfrac{R_c\left(w_c^2\pi n\right)^2}{\lambda^2 R_c^2 + \left(w_c^2\pi n\right)^2},$  (B20)
    $w_0 = \dfrac{R_c w_c}{\sqrt{R_c^2 + \left(\dfrac{w_c^2\pi n}{\lambda}\right)^2}}.$  (B21)
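    A short numerical sketch of Eqs. (B20) and (B21), evaluated for the curvature and size quoted for the beam of Fig. 5(b), illustrates this final step:

```python
import numpy as np

# Sketch of Eqs. (B20) and (B21): distance to focus z and focal spot size w0
# from the averaged curvature Rc and beam size wc at the chip plane.
def focus_parameters(Rc, wc, wavelength=658e-9, n=1.0):
    q = np.pi * wc**2 * n                                   # shorthand for pi * wc^2 * n
    z = Rc * q**2 / (wavelength**2 * Rc**2 + q**2)          # Eq. (B20)
    w0 = Rc * wc / np.sqrt(Rc**2 + (q / wavelength)**2)     # Eq. (B21)
    return z, w0

# Example with Rc = 91 mm and wc = 0.23 mm, as in Fig. 5(b)
print(focus_parameters(91e-3, 0.23e-3))   # roughly (0.081 m, 7.8e-05 m)
```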

    The preceding calculation illustrates how a paraxial Gaussian beam can be characterized through a single measurement of the input field’s intensities and relative phases at five pixel positions. We have derived an analytical description that allows us to determine the displacement of the beam relative to the chip, the beam’s inclination, the position of the focal point, and the size of the focal spot.

    APPENDIX C: CHIP DESIGN AND BANDWIDTH CONSIDERATIONS

    The design of the waveguide cross section is crucial to the performance of the presented chip, as it influences the number of supported modes and their propagation characteristics. Here, we discuss the design of the waveguide cross section and its effect on the bandwidth of the presented on-chip architecture.

    Waveguide Cross Section

    The integrated circuit was designed and fabricated in a 100-nm-thick silicon nitride (SiN) platform with a silicon dioxide bottom cladding and a silica-like top cladding (see Appendix E for fabrication details). The effective index of fundamental and higher-order modes calculated through finite difference eigenmode (FDE) simulations at different wavelengths across the visible spectral range shows that the waveguide width must be kept below 600 nm to ensure single-mode propagation at the design wavelength of 658 nm [see Fig. 7(a)]. To enable circuit operation at shorter wavelengths while maintaining good modal confinement at the design wavelength, a waveguide width of 500 nm was used for the integrated systems, allowing for single-mode operation down to a wavelength of 580 nm.

    Bandwidth Considerations

    The broadband design of most circuit components, combined with a calibration procedure that compensates for their inherent chromatic behavior, enables the sensor’s effective operation across different wavelengths. However, the constraint for the measurement principle to work is that the waveguides must strictly support only the fundamental modes; otherwise, the theoretical model of the chip becomes invalid. In Fig. 7(b) it can be observed that the TE1 mode begins to be supported at wavelengths just below 580 nm. Therefore, it can be assumed that issues with the measurement principle may arise below this threshold wavelength.

    Passive Phase Shifter Design

    The passive phase shifters were implemented by introducing two waveguide branches with different widths. By varying the width of the waveguide, the effective refractive index experienced by the waveguide mode is tuned, which alters the optical path length. As a result, by using different widths for the two waveguide branches, different optical path lengths are created, leading to a relative phase difference between the two waveguide modes at the output of the phase shifter [24].

    The widths of the two branches were chosen to be 1700 nm and 1900 nm, resulting in a 90-deg phase shift for a 93-μm-long section, and a 135-deg phase shift for a 140-μm-long section at a wavelength of 658 nm. The transition between the 500-nm-wide single-mode waveguide and the wider section was realized through 50-μm-long tapers to ensure adiabatic mode propagation.
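    Neglecting the tapers, the phase shift of such a section is $\Delta\phi = 2\pi\,\Delta n_{\mathrm{eff}} L/\lambda$, with $\Delta n_{\mathrm{eff}}$ the effective-index difference between the two branch widths and $L$ the section length. The quick check below shows that the two quoted sections are mutually consistent; the inferred $\Delta n_{\mathrm{eff}}$ is our own estimate, not a value given in the text.

```python
import numpy as np

# Consistency check of the quoted phase-shifter geometry: both sections
# should imply roughly the same effective-index difference between the
# 1700-nm- and 1900-nm-wide branches.
lam = 658e-9
for dphi_deg, L in [(90, 93e-6), (135, 140e-6)]:
    dn_eff = np.deg2rad(dphi_deg) * lam / (2 * np.pi * L)
    print(f"{dphi_deg} deg over {L*1e6:.0f} um -> dn_eff ≈ {dn_eff:.2e}")
# Both give dn_eff ≈ 1.8e-3.
```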

    However, the phase shifters exhibited random deviations from the target phase shift, producing repeatable results on a given device, but differing between different nominally identical devices. This behavior is most likely due to random width variability and surface roughness of the waveguide interfaces, which can introduce random phase shifts that build up to non-negligible values across the relatively long waveguides [30,31], thus resulting in unpredictable behavior among different devices. Nonetheless, despite the unpredictability of the phase shifts, the calibration procedure is robust enough to overcome the issue and allows for precise phase-front sensing.

    APPENDIX D: ALTERNATIVE CHIP ARCHITECTURE

    Here, we present an alternative architecture for the passive on-chip interferometers. In this approach, the interferometers are designed using directional couplers instead of Y-branch combiners. Figure 8 shows an optical microscope image of a chip featuring directional coupler interferometers. Throughout this work, chips with this architecture were also calibrated and analyzed. Both the measurement and calibration methods are applicable to these chips without any conceptual modifications. In testing, the alternative design exhibited performance comparable to the results discussed in the main text. However, we believe that Y-branch combiners offer certain advantages for the application presented here, including a smaller footprint and reduced chromatic dispersion. Therefore, we present the chip design shown in Fig. 8 as a possible alternative, while focusing on the Y-branch design in the main text.


    Figure 7. (a) Effective index of fundamental and first-order TE modes at three different wavelengths within the visible spectrum as a function of the waveguide width in our SiN platform. (b) Effective index of fundamental and first-order TE and TM modes as a function of wavelength for a waveguide width of 500 nm.


    Figure 8. Optical microscopy image of a chip featuring an alternative architecture. The on-chip interferometers consist of directional couplers.

    APPENDIX E: FABRICATION

    The photonic integrated chip was fabricated on a material platform purchased from LioniX International, consisting of a 100-nm-thick SiN film deposited through low-pressure chemical vapor deposition on an 8-μm-thick thermal silicon dioxide layer, mechanically supported by a 500-μm-thick silicon substrate. A 4-inch wafer was diced into 20 mm × 20 mm chips, and the integrated circuit was fabricated through e-beam lithography and plasma etching techniques.

    The integrated circuit was patterned onto the chip by spinning a 270-nm-thick layer of hydrogen silsesquioxane (HSQ), which was exposed at a dose of 1300 μC/cm² and developed in 25% tetramethylammonium hydroxide (TMAH). The pattern was then transferred to the SiN film by inductively coupled plasma (ICP) and reactive ion etching (RIE) using a CHF₃/N₂/O₂-based chemistry optimized to ensure good sidewall verticality. Scanning electron microscopy (SEM) images of a waveguide, a Y-branch, and a grating coupler are shown in Fig. 9. A 500-nm-thick HSQ layer was finally spun onto the chip and thermally cured to obtain an upper cladding with optical properties close to those of silicon dioxide.


    Figure 9. (a) SEM image of etched SiN waveguide with residual HSQ mask. (b) SEM image of the fabricated Y-branch. (c) SEM image of the fabricated surface grating coupler.

    References

    [1] R. Soref. The past, present, and future of silicon photonics. IEEE J. Sel. Top. Quantum Electron., 12, 1678-1687(2006).

    [2] D. Thomson, A. Zilkie, J. E. Bowers. Roadmap on silicon photonics. J. Opt., 18, 073003(2016).

    [3] J. K. S. Poon, A. Govdeli, A. Sharma. Silicon photonics for the visible and near-infrared spectrum. Adv. Opt. Photonics, 16, 1-59(2024).

    [4] Y. Park, C. Depeursinge, G. Popescu. Quantitative phase imaging in biomedicine. Nat. Photonics, 12, 578-589(2018).

    [5] D. Huang, E. A. Swanson, C. P. Lin. Optical coherence tomography. Science, 254, 1178-1181(1991).

    [6] D. A. B. Miller. Establishing optimal wave communication channels automatically. J. Lightwave Technol., 31, 3987-3994(2013).

    [7] A. E. Willner, K. Pang, H. Song. Orbital angular momentum of light for communications. Appl. Phys. Rev., 8, 041312(2021).

    [8] P. Török, F.-J. Kao. Optical Imaging and Microscopy: Techniques and Advanced Systems(2007).

    [9] J. S. Eismann, M. Neugebauer, K. Mantel. Absolute characterization of high numerical aperture microscope objectives utilizing a dipole scatterer. Light Sci. Appl., 10, 223(2021).

    [10] H. Nomura, T. Sato. Techniques for measuring aberrations in lenses used in photolithography with printed patterns. Appl. Opt., 38, 2800-2807(1999).

    [11] M. Ma, X. Wang, F. Wang. Aberration measurement of projection optics in lithographic tools based on two-beam interference theory. Appl. Opt., 45, 8200-8208(2006).

    [12] B. C. Platt, R. Shack. History and principles of Shack-Hartmann wavefront sensing. J. Refract. Surg., 17, S573-S577(2001).

    [13] P. Hariharan. Basics of Interferometry(2007).

    [14] E. Ip, A. P. T. Lau, D. J. F. Barros. Coherent detection in optical fiber systems. Opt. Express, 16, 753-791(2008).

    [15] C. Rogers, A. Y. Piggott, D. J. Thomson. A universal 3D imaging sensor on a silicon photonics platform. Nature, 590, 256-261(2021).

    [16] D. A. B. Miller. Analyzing and generating multimode optical fields using self-configuring networks. Optica, 7, 794-801(2020).

    [17] J. Bütow, J. S. Eismann, M. Milanizadeh. Spatially resolving amplitude and phase of light with a reconfigurable photonic integrated circuit. Optica, 9, 939-946(2022).

    [18] Z. Sun, S. Pai, C. Valdez. Scalable low-latency optical phase sensor array. Optica, 10, 1165-1172(2023).

    [19] H. Nejadriahi, S. Pappert, Y. Fainman. Efficient and compact thermo-optic phase shifter in silicon-rich silicon nitride. Opt. Lett., 46, 4646-4649(2021).

    [20] W. D. Sacher, X. Luo, Y. Yang. Visible-light silicon nitride waveguide devices and implantable neurophotonic probes on thinned 200 mm silicon wafers. Opt. Express, 27, 37400-37418(2019).

    [21] E. McKay, N. G. Pruiti, S. May. High-confinement alumina waveguides with sub-dB/cm propagation losses at 450 nm. Sci. Rep., 13, 19917(2023).

    [22] F. Van Laere, T. Claes, J. Schrauwen. Compact focusing grating couplers for silicon-on-insulator integrated circuits. IEEE Photonics Technol. Lett., 19, 1919-1921(2007).

    [23] Y. Zhang, S. Yang, A. E.-J. Lim. A compact and low loss Y-junction for submicron silicon waveguide. Opt. Express, 21, 1310-1316(2013).

    [24] D. González-Andrade, J. M. Luque-González, J. G. Wangüemert-Pérez. Ultra-broadband nanophotonic phase shifter based on subwavelength metamaterial waveguides. Photonics Res., 8, 359-367(2020).

    [25] A. Khachaturian, R. Fatemi, A. Hajimiri. IQ photonic receiver for coherent imaging with a scalable aperture. IEEE Open J. Solid-State Circuits Soc., 1, 263-270(2021).

    [26] L. Cheng, S. Mao, Z. Li. Grating couplers on silicon photonics: design principles, emerging trends and practical issues. Micromachines, 11, 666(2020).

    [27] L. Su, R. Trivedi, N. V. Sapra. Fully-automated optimization of grating couplers. Opt. Express, 26, 4023-4034(2018).

    [28] C. D. Vita, F. Toso, N. G. Pruiti. Amorphous-silicon visible-light detector integrated on silicon nitride waveguides. Opt. Lett., 47, 2598-2601(2022).

    [29] O. Svelto. Principles of Lasers(2010).

    [30] U. Khan, Y. Xing, W. Bogaerts. Effect of fabrication imperfections on the performance of silicon-on-insulator arrayed waveguide gratings. IEEE Photonics Benelux Chapter/Annual Symposium, 1-5(2018).

    [31] U. Khan, M. Fiers, Y. Xing. Experimental phase-error extraction and modelling in silicon photonic arrayed waveguide gratings. Proc. SPIE, 11285, 1128510(2020).
