• Chinese Optics Letters
  • Vol. 21, Issue 9, 090008 (2023)
Single-pixel wavefront sensing via vectorial polarization modulation [Invited]
Wunan Li1,2,3,4, Yu Cao2, Yu Ning1,3,4, Fengjie Xi1,3,4,**, Quan Sun1,3,4, and Xiaojun Xu1,3,4,*
Author Affiliations
  • 1College of Advanced Interdisciplinary Studies, National University of Defense Technology, Changsha 410073, China
  • 2School of Mathematics and Physics, Qingdao University of Science & Technology, Qingdao 266061, China
  • 3Nanhu Laser Laboratory, National University of Defense Technology, Changsha 410073, China
  • 4State Key Laboratory of Pulsed Power Laser Technology, National University of Defense Technology, Changsha 410073, China
    DOI: 10.3788/COL202321.090008

    Abstract

    The Shack–Hartmann wavefront sensor (SHWFS) is commonly used for its high speed and precision in adaptive optics. However, its performance is limited in low-light conditions, particularly when observing faint objects in astronomical applications. Instead of a pixelated detector, we present a new approach for wavefront sensing using a single-pixel detector, which encodes the spatial position of a light spot array into the polarization dimension and decodes the polarization state in polar coordinates. We present validation experiments with simple and complex wavefront distortions that demonstrate our approach is a promising alternative to traditional SHWFS systems, with potential applications in a wide range of fields.

    1. Introduction

    Adaptive optics (AO) systems are developed to measure and correct wavefront distortions caused by imperfections, scattering media, and atmospheric turbulence[1–3]. They are widely used in a variety of fields, including astronomy, the military, and medicine, to improve the performance of optical systems and achieve better image quality[4–6]. Wavefront sensing is a crucial component of AO systems and directly determines the precision of wavefront correction. Among all wavefront sensors, the Shack–Hartmann wavefront sensor (SHWFS) is the most widely used for its speed, simplicity, and effectiveness in measuring localized wavefront tilts[7–10]. However, as astronomical observations push toward ever fainter targets, the SHWFS encounters challenges in detecting dim objects owing to insufficient signal-to-noise ratios, which complicates closed-loop AO operation and can even lead to correction failure[11–13].

    To address this issue, numerous efforts have been made, which can mainly be summarized from two aspects. On the signal-processing side, researchers focus on improving the algorithms that most strongly affect wavefront detection accuracy[14–18], such as spot centroid extraction, robust wavefront reconstruction, and deep-learning assistance. Since a typical SHWFS setup consists of a microlens array (MLA) and a pixelated detector, manufacturing pixelated detectors with lower noise and broader spectral responses has also attracted much attention[19–21]. Notably, the C-RED One (an ultra-low-noise infrared camera based on the Saphira detector, manufactured by First Light Imaging) is perhaps the most successful and prominent hardware advance and has been a first choice for infrared wavefront sensing of dim targets since its release. However, compared with pixelated detectors, single-pixel detectors (SDs) have several advantages, including relatively low dark noise, high sensitivity, large bandwidth, and low price, and they have demonstrated their superiority in various applications, such as microscopy[22–24], spectral imaging[25–27], polarization imaging[28–30], three-dimensional imaging[31–33], and ultrasound field mapping[34,35]. For wavefront sensing, SDs also show impressive potential for dim-target and invisible-wavelength detection[36–38], benefiting from the characteristics of the detector itself. Unfortunately, all of these applications are constrained by the limitations of the single-pixel imaging (SPI) architecture, and the majority of them require the cooperation of digital micromirror devices (DMDs) and compressive sensing (CS) algorithms[27,37].

    In practice, imaging is not the essential function of the SHWFS; rather, its centroid-positioning capability is at the core of effective wavefront sensing. Therefore, in this paper, we propose a novel single-pixel wavefront sensing (SPWS) system based on vectorial polarization modulation (VPM). By using a vortex retarder (VR) to generate a vectorial polarization distribution, the spatial position of each light spot is encoded into the polarization dimension. The SD is then used to decode the polarization state and map the centroid changes in polar coordinates. Furthermore, SPWS skips the imaging step, for which an SD is not particularly suited, and instead exploits its excellent intensity-detection performance to calculate the centroid directly, which also avoids the loss of wavefront detection efficiency caused by imaging and CS[39–41]. The validation experiments show a commendable level of detection accuracy for wavefront reconstruction, indicating application potential in astronomy, optical communication, remote sensing, etc.

    The remaining parts of the article are arranged as follows: Section 2.1 introduces the method of vectorial polarization modulation; Section 2.2 describes the centroid decoding procedure in a polar coordinate system; Section 3 presents the numerical simulation results; Section 4 describes the experimental setup and results; and finally, we conclude the paper in Section 5.

    2. Theory

    Traditional SHWFSs mainly consist of an MLA and a pixelated detector. The MLA divides the complete wavefront into several smaller subapertures and focuses them into a spot array on the pixelated detector[7,10]. When the incident light is an ideal plane wave used as a reference, the spot array is uniformly and regularly arranged. However, when the incident light carries wavefront distortion, the spots shift according to the degree of distortion. By comparing the offsets of the spot array with the reference centroids, the wavefront phase distribution can be reconstructed through a wavefront restoration algorithm. Unlike traditional SHWFSs, the SPWS system retains the MLA to generate the spot array but takes advantage of a VR to modulate the polarization-state distribution of the spot array and decodes every spot centroid directly using an SD.
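
    To make the centroid-to-slope relationship concrete, the following minimal Python sketch converts centroid offsets into local wavefront slopes. It illustrates the standard SHWFS geometry only; the focal length, pixel size, and centroid values are placeholder numbers, not parameters from this work.

        import numpy as np

        # Placeholder SHWFS geometry (illustrative values only)
        f_mla = 14.2e-3       # effective focal length of each microlens [m]
        pixel_size = 3.45e-6  # detector pixel pitch [m]

        def local_slopes(ref_centroids, dist_centroids):
            """Convert centroid offsets (in pixels) into local wavefront slopes.

            ref_centroids, dist_centroids: (N, 2) arrays of (x, y) spot centroids
            for the reference and distorted spot arrays, respectively.
            """
            offsets = (np.asarray(dist_centroids) - np.asarray(ref_centroids)) * pixel_size
            return offsets / f_mla  # slope = lateral spot shift / lenslet focal length

        # Example: two subapertures, the second spot shifted by one pixel in x
        ref = [(100.0, 100.0), (200.0, 100.0)]
        dist = [(100.0, 100.0), (201.0, 100.0)]
        print(local_slopes(ref, dist))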

    2.1. Vectorial polarization modulation

    For a birefringent crystal element, the polarization states along the long axis and the short axis experience different refractive indices, n_o and n_e, respectively. When incident light passes through a birefringent crystal of thickness d, the phase difference between the two axes can be described as follows[42–45]:

    \Delta = \phi_o - \phi_e = \frac{2\pi}{\lambda}\,(n_o - n_e)\,d,   (1)

    where λ is the wavelength of the incident light.
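
    As a small numerical illustration of Eq. (1), the thickness that yields the half-wave condition Δ = π is d = λ/[2(n_o − n_e)]. The birefringence value in the sketch below is an assumed, typical order-of-magnitude figure, not a parameter from the paper.

        import numpy as np

        wavelength = 532e-9   # wavelength used later in the paper [m]
        delta_n = 0.009       # assumed birefringence n_o - n_e (illustrative only)

        def retardance(d):
            """Phase difference accumulated over thickness d, Eq. (1)."""
            return 2 * np.pi / wavelength * delta_n * d

        d_half_wave = wavelength / (2 * delta_n)             # thickness giving Delta = pi
        print(d_half_wave, retardance(d_half_wave) / np.pi)  # ~2.96e-5 m, 1.0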

    For incident light E_1 specified in the x and y directions, the electric vector can be further decomposed into E_x and E_y components. Considering that the x and y directions may not coincide with the crystal's short and long axes, the rotation matrix R(θ) is introduced. The incident light E_1 and the emergent light E_2 are then related through the Jones matrix J_0 of the crystal. Furthermore, when the phase difference Δ is adjusted to π, which can be realized by choosing d to satisfy the half-wave condition, the relationship between E_1 and E_2 can be simplified with Euler's formula as follows:

    J_0 = \begin{bmatrix} e^{i\phi_e} & 0 \\ 0 & e^{i\phi_o} \end{bmatrix} = e^{i\phi_e} \begin{bmatrix} 1 & 0 \\ 0 & e^{i\Delta} \end{bmatrix},   (2)

    E_2 = R(-\theta)\, J_0\, R(\theta)\, E_1 = e^{i\phi_e} \begin{bmatrix} \cos 2\theta & \sin 2\theta \\ \sin 2\theta & -\cos 2\theta \end{bmatrix} \begin{bmatrix} E_x \\ E_y \end{bmatrix},   (3)

    where θ is the included angle between the x direction and the long axis. Additionally, to facilitate the analysis, we set the topological charge m to 1, which is not reflected in the above equations. For light spots slightly focused by the MLA, the above formulas remain applicable under the paraxial and slowly varying envelope approximations.
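
    The rotated half-wave Jones matrix in Eq. (3) reflects the incident polarization about the local long axis, so linearly polarized light at 0° emerges with its AOP rotated to 2θ. The short NumPy sketch below verifies this relation; it is illustrative only, and the common phase factor e^{iφ_e} is dropped.

        import numpy as np

        def half_wave_jones(theta):
            """Jones matrix of a half-wave element with its long axis at angle theta, Eq. (3),
            omitting the global phase factor exp(i*phi_e)."""
            c, s = np.cos(2 * theta), np.sin(2 * theta)
            return np.array([[c, s], [s, -c]])

        E1 = np.array([1.0, 0.0])  # incident light linearly polarized along x (AOP = 0 deg)
        for theta_deg in (0, 30, 60, 90):
            E2 = half_wave_jones(np.deg2rad(theta_deg)) @ E1
            aop = np.rad2deg(np.arctan2(E2[1], E2[0])) % 180
            print(theta_deg, round(aop, 1))  # emergent AOP = 2 * theta (mod 180 deg)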

    Thus, we can impose a periodic distribution of the long-axis orientation to realize the vectorial polarization modulation. The complete modulation process is shown in Fig. 1, where the long-axis direction of the VR uniformly changes from 0° to 180° within the circumference. Moreover, when the angle of polarization (AOP) of the incident light coincides with the 0° direction of the VR's long axis, the emergent light acquires a radial polarization distribution, in which the AOP uniformly changes from 0° to 360° within the circumference[42,43]. Similar modulating effects can also be obtained with other vectorial polarization distributions that possess rotational symmetry and continuous gradient changes, such as the azimuthal polarization distribution; however, this paper focuses on the radial polarization distribution as an example.

    Figure 1.Vectorial polarization modulation process. The section on the left shows the polarization distribution of incident light, while the section on the right shows the polarization distribution of emergent light. Additionally, the yellow beam simulates the scenario where the light has been focused by different subapertures of the MLA.

    2.2. Polarization decoding procedure in the polar coordinate system

    After the vectorial polarization encoding at the focal plane of the MLA, the polarization state distribution within each spot varies with its position, as shown in Fig. 2 for four typical positions. Among them, Figs. 2(a), 2(b), and 2(c) correspond to the polarization state distributions of the lower-, middle-, and upper-focusing light spots in Fig. 1, respectively. Figure 2(d) represents the polarization state distribution of a light spot infinitely far from the VR's center, a hypothetical case included to illustrate the polarization decoding process. When the SD measures the polarization state of a given spot, the result is the superposition of all polarization vectors weighted by their intensities, which is guaranteed by the continuous gradient and local symmetry of the vectorial polarization modulation described in Section 2.1. This is also the key that allows the SPWS scheme to skip the cumbersome imaging step, since the calculated overall polarization state (OPS) can be used directly to describe the spot centroid.

    Figure 2.Magnified images of four typical positions. The light spot in Fig. 2(a) is located at the VR’s center, while the light spot in Fig. 2(d) is infinitely far from the VR’s center, which, in reality, is impossible. Figs. 2(b) and 2(c) are the intermediate states between the center and infinity, with the light spot in Fig. 2(b) being closer to the VR’s center.

    Specifically, the OPS is composed of the overall polarization angle (OPA) and the overall polarization degree (OPD). In the SPWS scheme, we use OPA and OPD to describe the two dimensions of the polar coordinate system, respectively, whereas most vectorial polarization modulation studies focus only on the distribution of AOP[46,47]. After obtaining the intensity data with the SD, we establish the following relationships for the overall results[43,46,48]:

    \begin{pmatrix} I \\ Q \\ U \\ V \end{pmatrix} = \begin{pmatrix} (I_{0^\circ} + I_{45^\circ} + I_{90^\circ} + I_{135^\circ})/2 \\ I_{0^\circ} - I_{90^\circ} \\ I_{45^\circ} - I_{135^\circ} \\ 0 \end{pmatrix},   (4)

    \mathrm{OPD} = \frac{\sqrt{Q^2 + U^2 + V^2}}{I} = \frac{\sqrt{Q^2 + U^2}}{I},   (5)

    \mathrm{OPA} = \frac{1}{2}\arctan\left(\frac{U}{Q}\right),   (6)

    where I, Q, U, and V are the four components of the Stokes vector S(I, Q, U, V). Specifically, V represents the difference between the right-circular and left-circular polarization components, which can be taken as 0 for the linearly polarized incident light in Fig. 1[48]. I_0°, I_45°, I_90°, and I_135° represent the intensities recorded by the SD in the four polarization directions.
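
    A minimal Python sketch of Eqs. (4)–(6) is given below. It is an illustration only; the four intensities in the example are synthetic numbers, not measured data, and arctan2 is used so that the quadrant of (Q, U) is resolved automatically.

        import numpy as np

        def stokes_from_intensities(i0, i45, i90, i135):
            """Stokes components for linearly polarized light, Eq. (4) with V = 0."""
            I = (i0 + i45 + i90 + i135) / 2.0
            Q = i0 - i90
            U = i45 - i135
            V = 0.0
            return I, Q, U, V

        def opa_opd(i0, i45, i90, i135):
            """Overall polarization angle (OPA) and degree (OPD), Eqs. (5) and (6)."""
            I, Q, U, V = stokes_from_intensities(i0, i45, i90, i135)
            opd = np.sqrt(Q**2 + U**2 + V**2) / I
            opa = 0.5 * np.arctan2(U, Q)
            return opa, opd

        # Synthetic example: a partially polarized spot
        print(opa_opd(0.8, 0.6, 0.2, 0.4))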

    OPA reflects the angle between the spot centroid and the VR's center in the vectorial polarization distribution; once the 0° polarization direction is determined, there is a twofold relationship between them[43]. OPD, on the other hand, reflects the distance between the spot centroid and the VR's center: as Fig. 2 illustrates, the smaller the OPD, the smaller the distance, and vice versa. The polarization distribution in Fig. 2(a) is spatially unpolarized; thus, its OPD is 0, corresponding to zero distance between the spot centroid and the VR's center. Conversely, the polarization distribution in Fig. 2(d) is completely polarized; thus, its OPD is 1, corresponding to an infinite distance, while Figs. 2(b) and 2(c) represent intermediate states of partial polarization whose OPDs correspond to different distances. From the above analysis, the spot centroid determined by OPA and OPD conforms to the characteristics of polar coordinates (ρ, α) and satisfies the following relationships with the polar angle α and the polar radius ρ:

    \alpha = 2 \cdot \mathrm{OPA},   (7)

    \rho = f(\mathrm{OPD}),   (8)

    where f(·) stands for the mapping between ρ and OPD, which is discussed further in Section 3. Subsequently, we can transform the polar coordinates into Cartesian coordinates as follows:

    x_c = \rho \cos\alpha,   (9)

    y_c = \rho \sin\alpha,   (10)

    where x_c and y_c are the horizontal and vertical coordinates of the spot centroid in the Cartesian coordinate system, respectively.
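
    The decoding step of Eqs. (7)–(10) can then be written in a few lines of Python. The mapping f(·) between ρ and OPD is deferred to Section 3, so a simple linear scaling is assumed here as a hypothetical placeholder.

        import numpy as np

        def decode_centroid(opa, opd, f_map=lambda p: 25.0 * p):
            """Decode a spot centroid from (OPA, OPD), Eqs. (7)-(10).

            f_map stands in for the rho-OPD calibration f(.) of Eq. (8); the linear
            scaling used here (25 pixels per unit OPD) is purely illustrative.
            """
            alpha = 2.0 * opa                  # polar angle, Eq. (7)
            rho = f_map(opd)                   # polar radius, Eq. (8)
            return rho * np.cos(alpha), rho * np.sin(alpha)  # Eqs. (9) and (10)

        print(decode_centroid(np.deg2rad(30.0), 0.5))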

    3. Numerical Simulation

    To investigate the calculation accuracy of the spot centroid under different conditions and then evaluate the performance of SPWS, a series of simulations were executed. The common parameters for all numerical simulations are shown in Table 1.

    Parameters                Values
    Resolution                1024 × 1024
    Wavelength (λ)            532 nm
    Topological charge (m)    0.5

    Table 1. Common Parameters for Numerical Simulations

    3.1. Simulation setup

    According to the method and theory in Section 2, the process of generating the vectorial polarization distribution was simulated using MATLAB. The topological charge of the VR was set to 0.5, considering that the period of the AOP is 180°. As shown in Fig. 3(a), the AOP rotates through 180° within the circumference; thus, each light spot has a unique OPA in the polar coordinate system.
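
    A minimal sketch of how such an AOP map can be generated is shown below. The fast-axis convention θ_axis = mφ/2 (so that the emergent AOP equals 2θ_axis for 0° incident polarization) is an assumption consistent with Figs. 1 and 3, not a formula quoted from the paper.

        import numpy as np

        n, m = 1024, 0.5                      # resolution and topological charge (Table 1)
        y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
        phi = np.arctan2(y, x)                # azimuth angle of each pixel
        theta_axis = m * phi / 2.0            # local long-axis orientation of the VR
        aop_out = np.rad2deg(2.0 * theta_axis) % 180.0  # emergent AOP for 0-deg incident AOP

        print(aop_out.shape, float(aop_out.min()), float(aop_out.max()))  # spans 0..180 deg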

    Figure 3.Numerical simulation of a vectorial polarization distribution. Since the simulation is in an ideal situation, the angles of polarization change homogeneously, and the boundary between 0° and 180° is very clear, which is difficult to observe in real experiments.

    Because the calculation area must be predetermined for the OPD, we divided Fig. 3 into 974 × 974 complete circular areas, each with a radius of 25 pixels, and calculated the OPD for all areas (the spot shape was approximated as a circle). To facilitate the comparison of OPD at different positions, we selected the 974 areas along the diagonal of Fig. 3 for display, as shown by the red graph in Fig. 4. The simulation results matched the theoretical analysis in Section 2.2 very well: the OPD reached its minimum, very close to 0, at the VR's center and increased as the selected circular area moved away from the center. The OPD changed drastically in the area near the VR's center, showing a deep V-shaped structure; away from the central area, the OPD values were very close to each other, with little differentiation. Therefore, the detection precision of SPWS is very high at the center of each pitch and correspondingly decreases toward the edge, which is an inherent drawback of our scheme. In order to increase the differentiation of the OPD at the cell edge and minimize the inconsistency in detection accuracy, we define a new parameter, the reverse polarization degree (RPD), as follows:

    \mathrm{RPD} = \log\left(\frac{1}{1 - \mathrm{OPD}}\right).   (11)
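
    A minimal sketch of Eq. (11) follows. The small clipping constant is an added assumption that keeps the logarithm finite as OPD approaches 1; it is not part of the paper's definition.

        import numpy as np

        def rpd(opd, eps=1e-12):
            """Reverse polarization degree, Eq. (11)."""
            opd = np.clip(np.asarray(opd, dtype=float), 0.0, 1.0 - eps)
            return np.log(1.0 / (1.0 - opd))

        print(rpd([0.0, 0.5, 0.9, 0.99]))  # RPD grows rapidly as OPD approaches 1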

    Figure 4.Simulation results of OPD and RPD. The value range of RPD has been significantly improved compared with OPD; even the areas selected from the cell edge can be distinguished from each other.

    As shown in the black graph in Fig. 4, the linearity of the RPD is superior to that of OPD, guaranteeing detection effectiveness.

    So far, a polar coordinate system composed of OPA and RPD has been established, corresponding to polar angle and polar radius, respectively. Each spot within it will be provided with a unique set of OPA and RPD, which is also the core foundation for SPWS to conduct effective wavefront detection.

    3.2. Analysis of polarization centroid localization

    In order to validate the effectiveness of the polarization centroid localization method, we symmetrically selected nine spot centroids in the polar coordinate system of Fig. 3. We then circled nine areas with a radius of 25 pixels to simulate the polarization modulation on the VR, as shown in Fig. 5. Next, we calculated the OPA and RPD of each simulated light spot and used them in Eqs. (7), (8), and (11) to obtain the polar coordinates of each centroid. Figure 6 shows the decoding errors between the calculated centroids and the designated centroids. With OPD, the maximum error was 3.366 pixels, the minimum error was 0.001 pixels, and the average error was 1.834 pixels. With RPD, the maximum error was 1.677 pixels, the minimum error was 0.261 pixels, and the average error was 0.853 pixels. Compared with the OPD method, the average error of RPD was reduced by 53.5%, which is consistent with the analysis in Section 3.1.

    Figure 5.Nine simulated light spots. Every pixel is endowed with an independent polarization state, forming a unique set of OPA and RPD.

    Figure 6.Comparison of decoding errors between OPD and RPD.

    Keeping the nine selected spot centroids, we increased the area radius from 10 to 100 pixels; the changes in the decoding error are shown in Fig. 7. With the increase in radius, the average error decreased from 1.16 to 0.33 pixels. In general, the spot size had a small impact on the calculation accuracy of OPA, whereas the accuracy of RPD increased with the radius, together with the effectiveness and precision of wavefront sensing. Therefore, our scheme favors larger light spots under the same MLA pitch, which implies a trade-off between accuracy and dynamic range. Notably, practical situations may be more complex because, in many cases, the spot shape is not exactly circular.

    Figure 7.Polarization centroid decoding error for different radii.

    4. Experiment

    4.1. Experimental setup

    As shown in Fig. 8(a), an optical experiment was designed to verify the effectiveness of SPWS. The light source of the optical system was a collimated laser diode module (CPS532; Thorlabs) with a wavelength of 532 nm and an emission power of 4.5 mW. After passing through a beam expander (BE), the beam entered a beam splitter (BS) and reached a spatial light modulator (SLM; HDSLM80R; UPOLabs). While the beam was vertically reflected by the SLM, its phase was modulated accordingly; it was then reflected by the BS and passed through a neutral density filter (ND; NE07B-B; Thorlabs), a linear polarizer (LP1), and an MLA (10 mm × 10 mm; 500 µm pitch; Edmund). The beam was converged into a spot array at the focal plane of the MLA, and the phase information of the incident light was encoded into the spatial coordinates of the spots. Simultaneously, the position information was secondarily encoded into the polarization dimension by the VR at the focal plane of the MLA. Owing to the small effective focal length of the MLA, the spot array was imaged by a relay lens (RL) with a magnification of 2×. The beam was then split into two parts by a BS in a 50:50 proportion. One beam entered the polarization camera (PC; BFS-U3-51S5P; FLIR) and was directly imaged on the CMOS chip as a reference, as shown in Fig. 8(c). The other beam passed through a linear polarizer (LP2) set to a certain polarization direction and then entered a square pinhole (SP; S1000QK; Thorlabs). Finally, the beam arrived at the effective area of an SD (PDA100A2; Thorlabs), whose signal was digitized by a data acquisition card (DAC; USB-6251; National Instruments) with a sampling rate of 1 MSa/s (not shown in the figure). The SP was fixed on the SD, as shown in Fig. 8(b), and both were installed on a translation stage, which could produce precise 2D translations in a plane perpendicular to the optical axis.

    Figure 8.(a) Experimental setup. BE, beam expander; BS, beam splitter; SLM, spatial light modulator; ND, neutral density filter; LP, linear polarizer; MLA, microlens array; VR, vortex retarder; RL, relay lens; PC, polarization camera; SP, square pinhole; SD, single-pixel detector; TS, translation stage. (b) SD installed with SP. (c) FLIR polarization camera.

    In the optical system, the SLM is employed to generate the aberrated wavefront. However, to obtain the reference centroid positions, we initially used the SLM as a flat mirror. Given the pitch of the MLA and the magnification of the RL, we chose a 1000 µm square pinhole to screen the light spots, maintaining the dynamic range of the wavefront sensing. During the experiment, the polarization direction of LP1 was rotated to match the 0° direction of the VR's long axis, as discussed in Section 2.1. The polarization direction of LP2 was first set to 0°, and the translation stage was moved to record the intensity of the light spots one by one. Once the intensities of all light spots had been recorded, the polarization direction of LP2 was successively turned to 45°, 90°, and 135°, and the intensity recording was repeated[46,48]. Finally, from the four groups of intensity data, the OPA and RPD of each light spot were calculated, and the polar coordinates of the spot centroids were decoded (according to Section 2.2). When a prescribed wavefront distortion is loaded onto the SLM, the offsets of the spot centroids can be obtained by repeating the above operations, and the wavefront reconstruction can then be completed using the previously obtained reference.
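
    To summarize the processing chain described above, the sketch below turns the four intensity recordings per spot into centroid offsets. It is illustrative only: the intensity arrays are synthetic, and the ρ–OPD calibration is a hypothetical placeholder (see Section 3).

        import numpy as np

        def centroids_from_intensities(intensities, f_map=lambda p: 25.0 * p):
            """intensities: (N_spots, 4) array holding I_0, I_45, I_90, I_135 per spot.
            Returns (N_spots, 2) centroid coordinates in pixels; f_map is a hypothetical
            rho-OPD calibration."""
            i0, i45, i90, i135 = intensities.T
            I = (i0 + i45 + i90 + i135) / 2.0
            Q, U = i0 - i90, i45 - i135
            opd = np.sqrt(Q**2 + U**2) / I
            alpha = np.arctan2(U, Q)          # = 2 * OPA, Eq. (7)
            rho = f_map(opd)                  # Eq. (8)
            return np.stack([rho * np.cos(alpha), rho * np.sin(alpha)], axis=1)

        # Synthetic reference and distorted recordings for three spots
        ref = np.array([[0.8, 0.6, 0.2, 0.4], [0.5, 0.5, 0.5, 0.5], [0.9, 0.5, 0.1, 0.5]])
        dist = np.array([[0.7, 0.7, 0.3, 0.3], [0.6, 0.5, 0.4, 0.5], [0.9, 0.6, 0.1, 0.4]])
        offsets = centroids_from_intensities(dist) - centroids_from_intensities(ref)
        print(offsets)  # centroid shifts fed to the wavefront reconstruction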

    4.2. Experimental results

    Since the voltage signals obtained from the SD were discrete, we used a polarization camera to record the polarization state of the spot array in order to display the vectorial polarization modulation effect effectively. Specifically, Fig. 9(a) depicts the vectorial polarization distribution in the absence of the MLA in the optical system. Meanwhile, Fig. 9(b) presents the polarization distribution of the reference spot array without wavefront distortion, clearly illustrating the completion of polarization modulation for each spot, as different colors represent distinct AOPs. Next, we loaded a simple spherical wave onto the SLM; the distorted spot array and the phase map of the spherical wave are shown in Figs. 9(c) and 9(d), respectively. Based on the previous discussion, we calculated the wavefront slopes from the polarization state changes of each spot in Figs. 9(b) and 9(c), after which the reconstructed wavefront and residual wavefront were obtained, as shown in Figs. 9(e) and 9(f), respectively. The color representation in Figs. 9(a)–9(c) corresponds to the AOP variation, ranging from 0° to 180°, whereas the color mapping in Figs. 9(d)–9(f) signifies phase variations from −2π to 2π. For the spherical wave, the root mean square (RMS) of the incident wavefront was 0.9015λ, while the RMS of the residual wavefront was 0.0972λ, merely 10.78% of the input value. Although spherical waves exhibit inherent symmetry, these preliminary experimental results demonstrate the effectiveness and reliability of SPWS.

    Figure 9.Wavefront reconstruction results of the spherical wave. (a) Vectorial polarization distribution; (b) polarization spot array without wavefront distortion; (c) polarization spot array under the spherical wave; (d) phase map of the spherical wave; (e) reconstructed wavefront; (f) residual wavefront.

    To further examine the reliability and robustness of SPWS under complex conditions, we used the Kolmogorov turbulence model to construct 30 sets of wavefront distortions, composed of the first 36 orders of Zernike polynomials (excluding piston and tilt), and loaded them onto the SLM for wavefront sensing. Furthermore, to conduct a quantitative analysis and comparison of the wavefront reconstruction accuracy of SPWS, we employed the PC to collect the intensity information of the spot array (restoring the polarization image to a gray-scale image). This is because the PC, in conjunction with the MLA, effectively constitutes a conventional SHWFS, and at the cost of resolution, the PC can be used as a pixelated detector according to Eq. (4). Some of the results are shown in Fig. 10. Each set of results consists of five images, from left to right: the incident wavefront, the reconstructed wavefront of SHWFS, the residual wavefront of SHWFS, the reconstructed wavefront of SPWS, and the residual wavefront of SPWS. For the reconstructions shown in Fig. 10, the RMS of the SPWS residual wavefront was 0.1583λ, which was 24.87% of the RMS of the corresponding incident wavefront. Taking all experimental outcomes together, the mean RMS of the residual wavefront was 0.1746λ for SHWFS, which was 7.92% of the average RMS of the incident wavefront, while the mean RMS of the residual wavefront measured by SPWS was 0.3009λ, representing 13.65% of the mean RMS of the incident wavefront.
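
    As a quick consistency check of the reported figures (arithmetic only, using the paper's own numbers), the mean incident RMS implied by the SHWFS result is 0.1746λ / 0.0792 ≈ 2.20λ, and the SPWS residual of 0.3009λ is indeed about 13.65% of that value:

        # Consistency check of the reported RMS percentages (values in units of lambda)
        mean_incident_rms = 0.1746 / 0.0792                 # implied by the SHWFS figures, ~2.20
        print(round(0.3009 / mean_incident_rms * 100, 2))   # SPWS residual percentage, ~13.65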

    Figure 10.Wavefront reconstruction results of complex distortion. The incident wavefront is generated randomly, with different colors within the figures representing distinct phase values.

    5. Discussions and Conclusions

    The experimental findings indicate that SPWS is capable of measuring and reconstructing complex wavefronts. However, owing to the susceptibility of polarization state measurements to interference (such as changes in polarization state after traversing optical components) and the decreasing performance of RPD at the edge (although it shows a notable enhancement over OPD), the accuracy of SPWS was inferior to that of SHWFS under standard illumination conditions. Like the trade-off of a lower detection frame rate, this is one of the inevitable costs associated with adopting the SD. From another perspective, the advantage of SDs lies in handling spectral bands and environments where pixelated detectors are ineffective or costly. SPWS is therefore expected to demonstrate its effectiveness in these situations and in low signal-to-noise ratio detection, which was also the initial motivation for SPWS.

    As discussed above, the current first-generation SPWS still presents the following issues.
    (1) Linearly polarized incident light is currently a prerequisite for vectorial polarization modulation in SPWS, which increases the limitations on its applicability and simultaneously reduces the light energy utilization efficiency.
    (2) The light spot traversal still relies on a two-dimensional translation stage, thus confining the detection to a static wavefront. The lower frame rate consequently impacts the sensing efficiency and accuracy of the dynamic wavefront.
    (3) The applicability of SPWS under certain conditions may be inherently constrained by its own principles, such as in the case of partially coherent fields.

    In order to alleviate the impacts of the aforementioned issues on SPWS, further enhance the system's practicality, and expand its range of applications, improvements could be sought in the following ways.
    (1) Employing a VR array that matches the subapertures of the MLA to replace the existing VR, thereby improving issues such as low detection accuracy and insufficient consistency in the edge regions.
    (2) Adopting devices with high refresh rates (such as the DMD) instead of the two-dimensional translation stage, while fully leveraging the bandwidth redundancy of SDs, to enhance the wavefront sensing frame rate of SPWS.
    (3) Upon the introduction of a DMD, its wavefront splitting capability could be utilized to replace the MLA. Simultaneously, the ability to arbitrarily define the subaperture range presents further possibilities for SPWS to achieve a more flexible spatial frequency.

    References

    [1] R. Davies, M. Kasper. Adaptive optics for astronomy. Annu. Rev. Astron. Astrophys., 50, 305(2012).

    [2] P. Hickson. Atmospheric and adaptive optics. Astron. Astrophys. Rev., 22, 76(2014).

    [3] R. K. Tyson, B. W. Frazier. Principles of Adaptive Optics(2022).

    [4] P. S. Salter, M. J. Booth. Adaptive optics in laser processing. Light Sci. Appl., 8, 110(2019).

    [5] S. Marcos, J. S. Werner, S. A. Burns, W. H. Merigan, P. Artal, D. A. Atchison, K. M. Hampson, R. Legras, L. Lundstrom, G. Yoon, J. Carroll, S. S. Choi, N. Doble, A. M. Dubis, A. Dubra, A. Elsner, R. Jonnal, D. T. Miller, M. Paques, H. E. Smithson, L. K. Young, Y. Zhang, M. Campbell, J. Hunter, A. Metha, G. Palczewska, J. Schallek, L. C. Sincich. Vision science and adaptive optics, the state of the field. Vis. Res., 132, 3(2017).

    [6] J. A. Kubby. Adaptive Optics for Biological Imaging(2013).

    [7] J. Schwiegerling, D. R. Neal. Historical development of the Shack-Hartmann wavefront sensor. Robert Shannon and Roland Shack: Legends in Applied Optics, 132(2005).

    [8] T. Y. Chew, R. M. Clare, R. G. Lane. A comparison of the Shack–Hartmann and pyramid wavefront sensors. Opt. Commun., 268, 189(2006).

    [9] V. Akondi, A. Dubra. Shack-Hartmann wavefront sensor optical dynamic range. Opt. Express, 29, 8417(2021).

    [10] B. C. Platt, R. Shack. History and principles of Shack-Hartmann wavefront sensing. J. Refract. Surg., 17, 573(2013).

    [11] R. Ragazzoni. Dark wavefront sensing. Adaptive Optics for Extremely Large Telescopes 4–Conference Proceedings(2015).

    [12] Z.-Y. Zhu, D.-Y. Li, L.-F. Hu, Q.-Q. Mu, C.-L. Yang, Z.-L. Cao, L. Xuan. High signal-to-noise ratio sensing with Shack–Hartmann wavefront sensor based on auto gain control of electron multiplying CCD. Chin. Phys. B, 25, 090702(2016).

    [13] T. Sun, F. Xing, J. Bao, H. Zhan, Y. Han, G. Wang, S. Fu. Centroid determination based on energy flow information for moving dim point targets. Acta Astronaut., 192, 424(2022).

    [14] Z. Li, X. Li. Centroid computation for Shack-Hartmann wavefront sensor in extreme situations based on artificial neural networks. Opt. Express, 26, 31675(2018).

    [15] T. B. DuBose, D. F. Gardner, A. T. Watnik. Intensity-enhanced deep network wavefront reconstruction in Shack–Hartmann sensors. Opt. Lett., 45, 1699(2020).

    [16] C. Wang, L. Hu, H. Xu, Y. Wang, D. Li, S. Wang, Q. Mu, C. Yang, Z. Cao, X. Lu, L. Xuan. Wavefront detection method of a single-sensor based adaptive optics system. Opt. Express, 23, 21403(2015).

    [17] L. Hu, S. Hu, W. Gong, K. Si. Deep learning assisted Shack–Hartmann wavefront sensor for direct wavefront detection. Opt. Lett., 45, 3741(2020).

    [18] Z. Xu, S. Wang, M. Zhao, W. Zhao, L. Dong, X. He, P. Yang, B. Xu. Wavefront reconstruction of a Shack–Hartmann sensor with insufficient lenslets based on an extreme learning machine. Appl. Opt., 59, 4768(2020).

    [19] B. R. M. Norris, J. Wei, C. H. Betters, A. Wong, S. G. Leon-Saval. An all-photonic focal-plane wavefront sensor. Nat. Commun., 11, 5335(2020).

    [20] J.-L. Gach, P. Feautrier, E. Stadler, T. Greffe, F. Clop, S. Lemarchand, T. Carmignani, D. Boutolleau, I. Baker. C-RED one: ultra-high speed wavefront sensing in the infrared made possible. Proc. SPIE, 9909, 990913(2016).

    [21] C. Baranec, D. Atkinson, R. Riddle, D. Hall, S. Jacobson, N. M. Law, M. Chun. High-speed imaging and wavefront sensing with an infrared avalanche photodiode array. Astrophys. J., 809, 70(2015).

    [22] S. Kim, B. Cense, C. Joo. Single-pixel, single-input-state polarization-sensitive wavefront imaging. Opt. Lett., 45, 3965(2020).

    [23] A. D. Rodríguez, P. Clemente, E. Tajahuerce, J. Lancis. Dual-mode optical microscope based on single-pixel imaging. Opt. Lasers Eng., 82, 87(2016).

    [24] N. Radwell, K. J. Mitchell, G. M. Gibson, M. P. Edgar, R. Bowman, M. J. Padgett. Single-pixel infrared and visible microscope. Optica, 1, 285(2014).

    [25] L. Bian, J. Suo, G. Situ, Z. Li, J. Fan, F. Chen, Q. Dai. Multispectral imaging using a single bucket detector. Sci. Rep., 6, 24752(2016).

    [26] V. Studer, J. Bobin, M. Chahid, H. S. Mousavi, E. Candes, M. Dahan. Compressive fluorescence microscopy for biological and hyperspectral imaging. Proc. Natl. Acad. Sci., 109, E1679(2012).

    [27] F. Soldevila, E. Irles, V. Durán, P. Clemente, M. Fernández-Alonso, E. Tajahuerce, J. Lancis. Single-pixel polarimetric imaging spectrometer by compressive sensing. Appl. Phys. B, 113, 551(2013).

    [28] Y. Chen, K. Yin, D. Shi, W. Yang, J. Huang, Z. Guo, K. Yuan, Y. Wang. Detection and imaging of distant targets by near-infrared polarization single-pixel lidar. Appl. Opt., 61, 6905(2022).

    [29] K. L. C. Seow, P. Török, M. R. Foreman. Single pixel polarimetric imaging through scattering media. Opt. Lett., 45, 5740(2020).

    [30] S. S. Welsh, M. P. Edgar, R. Bowman, B. Sun, M. J. Padgett. Near video-rate linear Stokes imaging with single-pixel detectors. J. Opt., 17, 025705(2015).

    [31] M.-J. Sun, M. P. Edgar, G. M. Gibson, B. Sun, N. Radwell, R. Lamb, M. J. Padgett. Single-pixel three-dimensional imaging with time-based depth resolution. Nat. Commun., 7, 12010(2016).

    [32] P. Kilcullen, C. Jiang, T. Ozaki, J. Liang. Camera-free three-dimensional dual photography. Opt. Express, 28, 29377(2020).

    [33] J. Teng, Q. Guo, M. Chen, S. Yang, H. Chen. Time-encoded single-pixel 3D imaging. APL Photonics, 5, 020801(2020).

    [34] N. Huynh, E. Zhang, M. Betcke, S. Arridge, P. Beard, B. Cox. Single-pixel optical camera for video rate ultrasonic imaging. Optica, 3, 26(2016).

    [35] N. Huynh, E. Zhang, M. Betcke, S. Arridge, P. Beard, B. Cox. A real-time ultrasonic field mapping system using a Fabry Pérot single pixel camera for 3D photoacoustic imaging. Proc. SPIE, 9323, 932310(2015).

    [36] R. Liu, S. Zhao, P. Zhang, H. Gao, F. Li. Complex wavefront reconstruction with single-pixel detector. Appl. Phys. Lett., 114, 161901(2019).

    [37] M. A. Cox, E. Toninelli, L. Cheng, M. J. Padgett, A. Forbes. A high-speed, wavelength invariant, single-pixel wavefront sensor with a digital micromirror device. IEEE Access, 7, 85860(2019).

    [38] S. Sun, W. Zhao, A. Zhai, D. Wang. DCT single-pixel detecting for wavefront measurement. Opt. Laser Technol., 163, 109326(2023).

    [39] M. Rani, S. B. Dhok, R. B. Deshmukh. A systematic review of compressive sensing: concepts, implementations and applications. IEEE Access, 6, 4875(2018).

    [40] M. Fornasier, H. Rauhut. Compressive sensing. Handbook of Mathematical Methods in Imaging, 187(2015).

    [41] L. Gao, W. Zhao, A. Zhai, D. Wang. OAM-basis wavefront single-pixel imaging via compressed sensing. J. Lightwave Technol., 41, 2131(2023).

    [42] S. C. McEldowney, D. M. Shemo, R. A. Chipman. Vortex retarders produced from photo-aligned liquid crystal polymers. Opt. Express, 16, 7295(2008).

    [43] S. Huang, S. Luo, Y. Yang, T. Li, Y. Wu, Q. Zeng, H. Huang. Determination of optical rotation based on liquid crystal polymer vortex retarder and digital image processing. IEEE Access, 10, 8219(2022).

    [44] J. Chen, C. Wan, Q. Zhan. Vectorial optical fields: recent advances and future prospects. Sci. Bull., 63, 54(2018).

    [45] N. Radwell, R. Hawley, J. Götte, S. Franke-Arnold. Achromatic vector vortex beams from a glass cone. Nat. Commun., 7, 10564(2016).

    [46] W. Zhang, X. Zhang, Y. Cao, H. Liu, Z. Liu. Robust sky light polarization detection with an S-wave plate in a light field camera. Appl. Opt., 55, 3518(2016).

    [47] A. Perea, J. Castellano, L. Alday, A. Hernández-Mendo. Analysis of behaviour in sports through polar coordinate analysis with MATLAB®. Qual. Quant., 46, 1249(2012).

    [48] W. Zhang, Y. Cao, X. Zhang, Y. Yang, Y. Ning. Angle of sky light polarization derived from digital images of the sky under various conditions. Appl. Opt., 56, 587(2017).
