• Photonics Research
  • Vol. 13, Issue 5, 1148 (2025)
Shiyun Zhou1,2,3, Lang Li1,2,3, Yishu Wang1,2,3, Liliang Gao1,2,3, Zhichao Zhang1,2,3, Chunqing Gao1,2,3,4, and Shiyao Fu1,2,3,4,*
Author Affiliations
  • 1School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
  • 2Key Laboratory of Photoelectronic Imaging Technology and System, Ministry of Education, Beijing 100081, China
  • 3Key Laboratory of Information Photonics Technology, Ministry of Industry and Information Technology, Beijing 100081, China
  • 4National Key Laboratory on Near-surface Detection, Beijing 100072, China
    DOI: 10.1364/PRJ.550470
    Shiyun Zhou, Lang Li, Yishu Wang, Liliang Gao, Zhichao Zhang, Chunqing Gao, Shiyao Fu, "Intelligent tailoring of a broadband orbital angular momentum comb towards efficient optical convolution," Photonics Res. 13, 1148 (2025)

    Abstract

    Due to the high-dimensional characteristics of photon orbital angular momentum (OAM), a beam can carry multiple OAMs simultaneously, forming an OAM comb, which has been shown to hold significant potential in both classical and quantum photonics. Tailoring broadband OAM combs on demand in a fast and accurate manner is a crucial basis for their application in advanced scenarios. However, obtaining phase-only gratings for the generation of arbitrary desired OAM combs still poses challenges. In this paper, we propose a multi-scale fusion learning U-shaped neural network that encodes a phase-only hologram for tailoring broadband OAM combs on demand. Proof-of-principle experiments demonstrate that our scheme achieves fast computational speed, high modulation precision, and high manipulation dimensionality, with a mode range of −75 to +75, an average root mean square error of 0.0037, and a fidelity of 85.01%, all achieved in about 30 ms. Furthermore, we utilize the tailored broadband OAM combs to conduct optical convolution, enabling vector convolution of arbitrary discrete functions and showcasing the extended capability of our proposal. This work offers, to our knowledge, new insight for on-demand tailoring of broadband OAM combs, paving the way for further advancements in high-dimensional OAM-based applications.

    1. INTRODUCTION

    Vortex beams, endowed with orbital angular momentum (OAM), have captivated researchers for several decades. Pioneering work by Allen et al. in 1992 [1] demonstrated that these beams are characterized by an azimuthal phase term exp(ilφ), with l the topological charge and φ the azimuthal angle, inspiring plenty of applications across diverse domains in both classical and quantum physics [2–8]. Capitalizing on the inherent high-dimensional orthogonality among OAM modes, a beam can carry a series of discrete OAM modes with selectable intervals and power weights, termed an OAM comb [9,10]. Such OAM combs offer significant potential for transmitting vast amounts of information [11–14] and serve as a flexible key for photonic computing [15–18].

    Previous research has introduced various schemes for generating OAM modes, where a single element, such as a spiral phase plate or a spatial light modulator, typically provides only one OAM mode, limiting scalability [19–21]. As the number of multiplexed OAM modes increases, the cost and complexity of the resulting multi-element systems grow rapidly [22–24]. It is therefore highly desirable to generate a large number of OAM modes in a simple, scalable, and cost-efficient way, leading to a consensus on the need for phase-only modulation for on-demand tailoring of OAM combs [11,25,26]. Different from generating single-mode OAM beams directly, creating OAM combs solely by superposing spiral phases is not feasible due to the inevitable mode intensity loss in phase-only modulation [27]. Schemes such as mode iteration [27], genetic algorithms [28], pattern-search strategies [29], and adaptive modification [30] have been employed to address this challenge. However, they still suffer from initial-set dependency, long iteration times, and uncertain convergence.

    Inspired by deep neural networks with their powerful ability to extract high-dimensional features [31–33], there is significant potential to establish an intelligent, data-driven framework for optimizing phase designs [34–36]. In this paper, we propose a multi-scale fusion learning U-shaped neural network (MSUNet) for on-demand tailoring of broadband OAM combs within a phase-only hologram. Our approach extracts high-dimensional features from the target OAM comb and compensates for mode intensity loss through multi-scale learning in the latent feature space, generating a high-dimensional OAM hologram. Notably, no ground-truth holograms are employed in training; instead, scalar diffraction is incorporated into the network design to calculate the modulated optical field, enabling an analysis of the OAM spectrum that forms the loss constraint. By this means, the lack of ground truth in supervised training is addressed, enhancing network interpretability. Proof-of-principle experiments demonstrate that our proposal achieves fast computational speed, high modulation precision, and high manipulation dimensionality, with a mode range of −75 to +75, an average root mean square error (RMSE) of 0.0037, and a fidelity of 85.01%, all achieved in about 30 ms. Furthermore, we utilize broadband OAM combs in optical convolution, enabling vector convolution of arbitrary discrete functions with an accuracy of 0.0138 in RMSE and 82.82% in fidelity, showcasing the extended capability of our proposal. This study introduces a simple, fast, accurate, and scalable approach for on-demand broadband OAM comb tailoring, contributing to a wide range of advanced OAM-based technologies and applications.

    2. CONCEPT OVERVIEW

    Figure 1 presents a comprehensive conceptual framework for the intelligent tailoring of a broadband OAM comb and its application in optical convolution. This framework enables on-demand customization of OAM comb settings, including mode range, interval, and distribution, to form a target OAM spectrum. The corresponding target intensity and phase patterns of the target OAM comb serve as input data for the neural network training. The proposed MSUNet architecture consists of two main processes: feature extraction and feature fusion. In the feature extraction process, the input data patterns pass through a feature extractor, transforming into high-dimensional features, resulting in a feature map. Subsequently, feature fusion is performed using a multi-scale scheme. By weighting the high-dimensional intensity and phase features at multiple scales, the mode intensity loss caused by phase-only modulation can be compensated for, thereby refining the superposed hologram. Since there are no standard holograms available as ground-truth for training, we incorporate the physical process of scalar diffraction to calculate the optical field modulated by the output refined phase-only hologram, enabling an analysis of the OAM spectrum through a pre-trained deep residual network (DRN) [37]. The difference between this output OAM comb and the target is used as a constraint for back-propagation, addressing the lack of ground-truth in supervised training, enhancing the network interpretability, and achieving reliable network training.


    Figure 1. Concept of intelligent tailoring of OAM combs. The on-demand customization of OAM comb settings includes mode range, interval, and distribution, forming a target OAM comb. The corresponding complex-amplitude patterns serve as input data for the neural network training. The overall workflow of the proposed intelligent tailoring scheme consists of a feature extraction structure and a feature fusion structure, enabling the generation of a phase-only hologram and thus tailoring the target OAM comb. The difference between the output and target OAM spectra is used as a constraint for loss back-propagation in network training. This proposal can also be employed to conduct optical convolution. For any arbitrary OAM combs F(l) and G(l), their convolution result can be easily obtained by solely detecting the OAM spectrum of the diffraction field produced by their superposed phase-only holograms.

    Furthermore, such straightforward generation of OAM combs enables application to optical convolution. The harmonic coefficients of an OAM comb, i.e., its OAM spectrum, can be treated as a discrete function. Analogous to the Fourier convolution theorem, the vector convolution of any two discrete functions in the OAM domain is equivalent to the spectrum of the product of their optical fields in the spatial domain. Therefore, the convolution of any two discrete functions [F(l) and G(l)] can be obtained by measuring the spectrum of their optical-field product. Thanks to the phase-only nature of the holograms, this product field can be achieved by simply adding the respective phase-only holograms of the two OAM combs. The convolution result [F(l)*G(l)] can then be obtained by detecting the OAM spectrum of the final diffraction field, eliminating the need for complex and expensive convolution calculations; notably, the computation proceeds at the speed of light. In other words, the proposed OAM comb tailoring scheme provides an efficient path for optical convolution.

    3. MSUNet TOWARDS BROADBAND OAM COMB TAILORING

    Figure 2(a) illustrates the feature extraction workflow, which mainly involves a feature extractor that transforms the input target intensity and phase patterns into easily separable high-dimensional features in the latent feature space. Figure 2(b) details the structure of the U-shaped neural network [38] that serves as the feature extractor. The architecture consists of four sets of 2×2 max-pooling layers and four sets of 2×2 up-sampling layers. Each set of pooling and up-sampling layers is preceded by two 3×3 convolutional layers with a rectified linear unit (ReLU) activation function, which introduces nonlinear transformations into the feature extraction process. Copy-and-crop operations are employed between sampling layers of the same spatial dimensions to mitigate the gradient explosion that can arise from deep feature representations. Finally, a 1×1 convolutional layer adjusts the channel dimensions. After feature extraction, a feature map is obtained that represents the overall high-dimensional features of the input target OAM comb. To compensate for the mode intensity loss resulting from phase-only modulation, we introduce a multi-scale scheme that combines the target intensity and phase feature maps by weighting the high-dimensional features at multiple scales; the superposed phases can thus be refined into a phase-only hologram for the target OAM comb. The multi-scale scheme [39] is expressed as

    $$\phi_{\mathrm{holo}} = \alpha \sum_{i=1}^{m} p_i \cdot f_{\mathrm{unet}}(I) + \beta \sum_{i=1}^{m} q_i \cdot f_{\mathrm{unet}}(P),$$

    where $\phi_{\mathrm{holo}}$ represents the phase-only hologram; $\alpha$ and $\beta$ are hyper-parameters initially set to $\alpha=0.2$ and $\beta=0.8$, respectively. This choice of initial values reflects a strategy that prioritizes phase reconstruction while still considering intensity information; the values are further optimized during network training. $p_i$ and $q_i$ denote scale factors, $m$ is the number of scaling layers, $f_{\mathrm{unet}}(\cdot)$ refers to the feature extractor, and $I$ and $P$ represent the input intensity and phase pattern data, respectively.
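The weighted fusion above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the function name `multiscale_fuse` and the representation of the per-scale feature maps as a list of arrays are assumptions, and the final wrap into $[-\pi, \pi)$ stands in for the hologram refinement step.

```python
import numpy as np

def multiscale_fuse(feat_I, feat_P, p, q, alpha=0.2, beta=0.8):
    """Weighted multi-scale fusion of intensity and phase feature maps.

    feat_I, feat_P: lists of m feature maps (one per scale), standing in for
    the outputs of the shared U-shaped extractor f_unet on the intensity and
    phase inputs. p, q: per-scale weights; alpha, beta: the global
    hyper-parameters (initialized to 0.2 and 0.8 in the paper).
    """
    fused = alpha * sum(pi * fi for pi, fi in zip(p, feat_I)) \
          + beta * sum(qi * fi for qi, fi in zip(q, feat_P))
    # Wrap the fused map into a phase-only hologram in [-pi, pi).
    return np.angle(np.exp(1j * fused))
```

In the actual network the weights $p_i$, $q_i$ are learned; here they are passed in explicitly to keep the sketch self-contained.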


    Figure 2. Structure of MSUNet. (a) The feature extraction workflow. (b) The U-shaped neural network structure, which serves as the feature extractor for the feature extraction process. (c) Details of the feature fusion process for generating the phase-only hologram, which incorporates angular spectrum transmission and a pre-trained DRN model to analyze the OAM spectrum for loss back-propagation. (d) The multi-scale scheme, which plays a crucial role in feature fusion for generating the phase-only hologram.

    Figure 2(c) displays the fusion of the intensity and phase high-dimensional features. Through a weighted combination of different-sized convolutionally sampled feature maps at varying scales, a phase-only hologram that includes intensity-information compensation can be obtained. Specifically, as illustrated in Fig. 2(d), the multi-scale scheme includes three 2×2 up-sampling layers, three 2×2 down-sampling layers, and four 1×1 convolutional layers. This module performs multi-scale feature extraction, using the up-sampling layers to construct a feature pyramid, followed by spatial dimension transformation through the down-sampling layers. The final step fuses features across multiple scales using the 1×1 convolutional layers, which combine the features through weighted summation into a fusion output. Since no standard holograms are available as ground truth for training, we introduce the physical process of scalar diffraction, namely angular spectrum transmission, to obtain the modulated optical field, and then use a pre-trained DRN to analyze its OAM spectrum as a loss constraint. This process reads

    $$\{|a_l|^2\} = f_{\mathrm{DRN}}\left(f_{\mathrm{AS}}(\phi_{\mathrm{holo}})\right),$$

    where $a_l$ corresponds to the amplitude of OAM mode $|l\rangle$, so that the set $\{|a_l|^2\}$ forms the OAM spectrum. The function $f_{\mathrm{DRN}}(\cdot)$ refers to the pre-trained DRN model and $f_{\mathrm{AS}}(\cdot)$ represents the angular spectrum transmission. The difference between this output and the target OAM comb is used as the constraint for back-propagation, addressing the lack of ground truth in supervised training, enhancing network interpretability, and achieving reliable network training. The RMSE is employed as the loss function to evaluate this difference:

    $$\mathrm{Loss} = L_{\mathrm{RMSE}}\left(\{|a_l|^2\}, \widehat{\{|a_l|^2\}}\right) = \sqrt{\frac{1}{2n+1} \sum_{l=-n}^{+n} \left(\{|a_l|^2\} - \widehat{\{|a_l|^2\}}\right)^2},$$

    with $n$ the OAM mode range (see more details in Appendix A).
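The RMSE loss over the $2n+1$ OAM channels is straightforward to compute; a minimal NumPy sketch follows. The function name `oam_rmse_loss` is illustrative, and the spectra are assumed to be passed as arrays indexed over $l = -n, \dots, +n$.

```python
import numpy as np

def oam_rmse_loss(spectrum_out, spectrum_target):
    """RMSE between the output and target OAM power spectra {|a_l|^2},
    each given as an array over the 2n+1 channels l = -n..+n."""
    spectrum_out = np.asarray(spectrum_out, dtype=float)
    spectrum_target = np.asarray(spectrum_target, dtype=float)
    assert spectrum_out.shape == spectrum_target.shape
    # Mean over the 2n+1 channels, then the square root: the RMSE.
    return np.sqrt(np.mean((spectrum_out - spectrum_target) ** 2))
```

In training this quantity would be computed on tensors so gradients can flow back through the DRN and the angular-spectrum layer; the NumPy version here only shows the arithmetic.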

    4. RESULTS AND DISCUSSION

    A. Experiments of OAM Comb Intelligent Tailoring

    Proof-of-principle experiments are carried out to evaluate the performance of the proposed MSUNet for on-demand broadband OAM comb tailoring and its application to optical convolution. The experimental setup is sketched in Fig. 3(a). A 1617 nm Gaussian beam emitted from a distributed feedback laser diode passes through a half-wave plate (HWP) and a polarized beam splitter (PBS) in sequence to obtain horizontal linear polarization, matching the phase-only modulation requirement of the spatial light modulator (SLM, Holoeye PLUTO-TELCO-013-C). The phase-only hologram obtained from MSUNet is encoded onto the SLM to generate the target OAM comb. After passing through a plano-convex lens with a focal length of 200 mm, the diffraction field is collected by an infrared CCD camera for subsequent OAM spectrum analysis.


    Figure 3. Experimental OAM comb intelligent tailoring. (a) The experimental setup. DFB, a 1617 nm distributed feedback laser diode; SMF, single-mode fiber; Col., collimator; HWP, half-wave plate; PBS, polarized beam splitter; SLM, liquid-crystal spatial light modulator; L, plano-convex lens with focal length 200 mm; CCD, infrared CCD camera. (b), (c) Phase-only holograms of broadband OAM combs with a minimum mode interval of 1 and a mode range of −75 to +75, respectively, along with visualizations of the simulated and experimental intensity distributions of the optical fields modulated by the phase-only holograms. (d), (f) OAM spectrum measurement results corresponding to the OAM combs in (b) and (c), with the target and experimental intensity distributions represented by blue and red bars, respectively. (e), (g) Density matrix difference between the target OAM comb and the experimental OAM comb from (b) and (c). Values close to zero indicate the high fidelities of the experimentally generated OAM combs.

    Specifically, for tailoring OAM combs, we first input the target OAM comb intensity and phase patterns into the trained MSUNet on a computer to obtain the encoded phase-only hologram. This hologram is then encoded onto the SLM, and the intensity distribution of modulated optical fields can be observed at the CCD. For OAM spectrum measurement, due to the phase-only property, we can directly superpose the anti-spiral phase onto the hologram, resulting in a series of holograms with anti-spiral phases. By sequentially switching these holograms on the SLM, OAM spectrum measurement can be conducted (see more details in Appendix B). Similarly, for the efficient optical convolution, we can directly superpose the holograms from MSUNet for F(l) and G(l) to obtain the product optical field of f(φ) and g(φ). The same approach applies to OAM spectrum measurement. By sequentially superposing anti-spiral phases onto the superposed hologram and switching them in sequence on the SLM, the OAM spectra can be measured.

    Since broadband OAM combs involve a large number of superposed OAM modes, the normalized relative intensity of each mode is small in absolute value. The RMSE may therefore appear low simply because the amplitudes themselves are small; this does not affect its effectiveness in training or in reflecting results, but it is not particularly intuitive when presenting them. We therefore introduce an additional metric to provide a more straightforward evaluation of performance, while still reporting the RMSE.

    Given that evaluating multi-mode OAM beams through mode purity is not feasible, we introduce the concept of a density matrix, analogous to that in quantum optics, to assess the quality of the output OAM comb. An OAM comb $|\Psi\rangle$ can simultaneously contain multiple OAM states,

    $$|\Psi\rangle = \sum_{l=-n}^{+n} a_l |l\rangle,$$

    where $|l\rangle$ represents the OAM state. The density matrix $\rho$ of the OAM comb is then defined as $\rho = |\Psi\rangle\langle\Psi|$. By measuring the density-matrix difference between the target and experimental OAM combs, the similarity between the two can be assessed: a smaller numerical difference indicates higher modulation quality. For quantitative evaluation, the fidelity $F$, which quantifies the similarity between the experimental result and the corresponding target OAM comb, is exploited here; the higher the fidelity, the better the alignment between the two states. The fidelity is calculated through the density matrix as

    $$F = \langle \Psi_{\mathrm{tar}} | \rho_{\mathrm{exp}} | \Psi_{\mathrm{tar}} \rangle.$$
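The density matrix and fidelity defined above are easy to reproduce numerically; here is a minimal NumPy sketch. The function names and the choice to normalize the amplitude vectors are assumptions made to keep the example self-contained.

```python
import numpy as np

def density_matrix(a):
    """rho = |Psi><Psi| for an OAM comb with complex amplitudes a_l."""
    psi = np.asarray(a, dtype=complex)
    psi = psi / np.linalg.norm(psi)          # normalize the state
    return np.outer(psi, psi.conj())

def fidelity(a_target, a_exp):
    """F = <Psi_tar| rho_exp |Psi_tar> between target and measured combs."""
    psi_t = np.asarray(a_target, dtype=complex)
    psi_t = psi_t / np.linalg.norm(psi_t)
    rho_e = density_matrix(a_exp)
    return float(np.real(psi_t.conj() @ rho_e @ psi_t))
```

For identical states the fidelity is 1, and for orthogonal combs it is 0, matching the interpretation used in the paper.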

    As an illustrative example, Fig. 3(b) shows the phase-only hologram and corresponding visualizations of the simulated and experimental optical field patterns for a 17-mode OAM comb with a minimum mode interval of one. Its OAM spectrum is measured and provided in Fig. 3(d), which displays the target OAM spectrum (blue bars) and the experimentally generated OAM spectrum (red bars). The corresponding density matrix difference is shown in Fig. 3(e). The MSUNet accurately tailors the OAM comb, achieving an RMSE of 0.0029 and a fidelity of 84.53%, within a testing time of 29 ms. Similarly, Fig. 3(c) presents the phase-only hologram and visualizations for a 36-mode OAM comb spanning a mode range of −75 to +75. The measured OAM spectrum is shown in Fig. 3(f) and the corresponding density matrix difference is given in Fig. 3(g). For this case, the MSUNet achieves an RMSE of 0.0028 and a fidelity of 83.63%, within a testing time of 33 ms. All the experimental results align closely with simulations, demonstrating the accuracy of the MSUNet in tailoring broadband OAM combs. These numerical results strongly support the effectiveness of our proposal (see more details in Appendix C).

    B. Optical Convolution

    Since our proposal can tailor broadband OAM combs representing arbitrary discrete functions, we can further utilize OAM combs for optical convolution. Convolution is a fundamental mathematical operation widely used in signal and image processing: it produces a new function from two original functions, representing how one function is modified by the other. Just as a product of functions in the spatial domain corresponds to the convolution of their Fourier coefficients, an OAM comb, with its rotational symmetry, can be expanded by a Fourier transform over azimuth into helical harmonics. In spherical coordinates, this expansion is a superposition of multiple helical harmonic functions, forming the OAM spectrum [40]. Consequently, the product of the optical fields of two OAM combs in the spatial domain corresponds to the convolution of their harmonic coefficients in the helical harmonic domain:

    $$F(l) * G(l) = H\left(f(\varphi) \cdot g(\varphi)\right),$$

    where $H(\cdot)$ denotes the helical harmonic transformation, $F(l)$ and $G(l)$ are the OAM spectra of the OAM combs $f(\varphi)$ and $g(\varphi)$, respectively, and $H(f(\varphi) \cdot g(\varphi))$ represents the OAM spectrum of the optical-field product $f(\varphi) \cdot g(\varphi)$ (see more details in Appendix D).

    As illustrated in Fig. 4(a), for arbitrary functions (OAM combs) F(l) and G(l), the MSUNet generates the corresponding holograms ϕf and ϕg accurately. When a Gaussian beam passes through these two holograms in sequence, the resulting diffraction field is the spatial-domain product f(φ)·g(φ), which is itself a new OAM comb. According to the OAM convolution theory above, the convolution of F(l) and G(l) is the helical harmonic transformation of f(φ)·g(φ), namely, the OAM spectrum of this new comb.
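The spatial-product/spectral-convolution correspondence can be checked numerically: sample two combs on an azimuth grid, multiply the fields, project the product onto helical harmonics, and compare with a direct discrete convolution of the spectra. This is a numerical verification sketch, not part of the experiment; the grid size and random spectra are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5                                   # mode range l = -n..+n
l = np.arange(-n, n + 1)
F = rng.random(2 * n + 1)               # spectrum of the first OAM comb
G = rng.random(2 * n + 1)               # spectrum of the second OAM comb

# Sample the two combs on a uniform azimuth grid.
phi = np.linspace(0, 2 * np.pi, 4096, endpoint=False)
f = (F[:, None] * np.exp(1j * l[:, None] * phi)).sum(axis=0)
g = (G[:, None] * np.exp(1j * l[:, None] * phi)).sum(axis=0)
u = f * g                               # product field f(phi)*g(phi)

# Helical-harmonic coefficients of the product, l' = -2n..+2n.
l2 = np.arange(-2 * n, 2 * n + 1)
H = np.array([(u * np.exp(-1j * k * phi)).mean() for k in l2])

# Direct discrete convolution F(l)*G(l) over the same index range.
direct = np.convolve(F, G)
assert np.allclose(H, direct)
```

On a uniform grid the azimuthal projection is exact for these mode orders, so the two results agree to machine precision, illustrating why reading the OAM spectrum of the diffracted product field yields the convolution directly.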


    Figure 4.Highly efficient optical convolution employing OAM combs. (a) Overview of the OAM-comb-based optical convolution. (b) OAM comb F(l). (c) OAM comb G(l). (d) Results of the convolution calculation of the two OAM combs, F(l)*G(l).

    Proof-of-principle experiments confirm the practical operability, with example results shown in Figs. 4(b)–4(d). Figures 4(b) and 4(c) depict the original functions, OAM combs F(l) and G(l), respectively. Instead of performing a complex mathematical convolution, our scheme obtains the result more efficiently by simply measuring the OAM spectrum of a diffracted field modulated by the superposed holograms of the two original OAM combs. Figure 4(d) shows the experimentally measured diffracted field and the vector convolution result of the two OAM combs [F(l)*G(l)], with an RMSE of 0.0138 and a fidelity of 82.82%. These promising results prove that our approach significantly reduces the complexity and cost of convolution operations, highlighting its potential for extension to advanced OAM-based applications.

    5. CONCLUSION

    This work presents an intelligent and efficient approach for on-demand tailoring of broadband OAM combs within a phase-only hologram, and demonstrates it in highly efficient optical convolution. Our scheme overcomes key limitations such as mode intensity loss in phase-only modulation of multiplexed OAM beams, enabling on-demand tailoring of broader combs with fast computational speed, high modulation precision, and high manipulation dimensionality. Proof-of-principle experiments, covering a variety of mode intervals and numbers of OAM modes over the range −75 to +75, validate the effectiveness of our proposal, reaching an average RMSE of 0.0037 and a fidelity of 85.01%; the whole process takes about 30 ms. Moreover, we utilize the broadband OAM combs in optical convolution operations, obtaining an accuracy of 0.0138 in RMSE and a fidelity of 82.82%, demonstrating the extended capability of our proposal. The proposed approach, characterized by its simplicity, speed, accuracy, and scalability, opens new avenues for advanced applications of high-dimensional OAM beams in optical computing, laser manipulation, and other photonics-based technologies, paving the way for future innovations in the field.

    APPENDIX A: MSUNet TRAINING DETAILS

    The training dataset for MSUNet consists of simulated data whose parameters match the experiment, with OAM modes ranging from −75 to +75 and variable mode intervals and amplitude distributions. The data resolution matches that of the liquid-crystal spatial light modulator used in the experiment, configured at 1080×1080 pixels over a physical size of 10 mm×10 mm, i.e., a pixel density of 108 pixels per millimeter. Each training pair includes a target OAM spectrum and the corresponding intensity and phase patterns, totaling 20,086 pairs; the data are split into training and testing sets in an 8:2 ratio. During training, the Adam optimizer is used to optimize gradients and momentum, with a learning rate of 0.001 and a batch size of 8, over 800 epochs; an early-stopping callback is also employed to monitor the training process and prevent overfitting. The MSUNet was trained and tested within the PyTorch framework in Python 3.8 on a workstation with four NVIDIA RTX A6000 graphics processing units (GPUs), an Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20 GHz, and 192 GB of RAM, running the Ubuntu 20.04.6 LTS operating system. The training process takes over 130 h. Several factors may influence the prediction accuracy of the trained model. First, data resolution plays a crucial role: higher-resolution data capture finer details, enhancing the model's ability to handle complex scenarios and improving prediction accuracy, at the cost of increased computational power and longer training time. Second, the mode interval and the number of modes used in the dataset can significantly impact the model's generalization ability. Last, the dataset size is also critical: a larger dataset enables the model to learn more robust features and generalize better, but again requires more computational resources and longer training times. Balancing these factors is essential for achieving optimal model performance while managing the trade-offs in computational cost and training efficiency.
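The early-stopping callback mentioned above can be sketched framework-agnostically. The class name, `patience`, and `min_delta` values below are illustrative assumptions; the paper does not specify its stopping criterion beyond monitoring the training process.

```python
class EarlyStopping:
    """Stop training when the monitored validation loss fails to improve
    by at least min_delta for `patience` consecutive epochs."""

    def __init__(self, patience=20, min_delta=1e-5):
        self.patience = patience
        self.min_delta = min_delta
        self.best = float("inf")
        self.wait = 0

    def step(self, val_loss):
        """Call once per epoch; returns True when training should stop."""
        if val_loss < self.best - self.min_delta:
            self.best = val_loss      # improvement: record and reset
            self.wait = 0
        else:
            self.wait += 1            # no improvement this epoch
        return self.wait >= self.patience
```

In a PyTorch training loop this would be checked after each validation pass, breaking out of the epoch loop when `step` returns True.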

    APPENDIX B: OAM SPECTRUM ANALYSIS

    The OAM spectra are analyzed through OAM back-conversion, in which a series of spiral phases (l₁, l₂, …, l_N), also known as holographic anti-SPPs, are encoded in sequence on the SLM. The intensity of the central bright spot is taken to represent the relative intensity of the corresponding OAM channel among l₁, l₂, …, l_N [41]. The principle of back conversion is as follows: if an l-th order anti-spiral phase is encoded, the OAM state l in the comb is converted to l − l = 0, producing a bright spot at the beam center, whereas any other OAM state l_o is converted to l_o − l ≠ 0 and does not concentrate at the center. Therefore, by measuring the intensity of the central bright spot in each back-converted pattern, the OAM spectrum can be determined.

    Specifically, we first generate an OAM comb through the MSUNet output hologram. Owing to the phase-only modulation, this hologram can be directly superposed with anti-spiral phases over the set mode range, here −75 to +75, which are then encoded in turn onto the SLM in the aforementioned optical setup, with an infrared CCD camera capturing the corresponding optical fields. Figure 5 shows some of the experimentally captured back-converted patterns for the generated OAM comb, for the OAM channels l = [−71, −64, −57, −50, −36, −29, −15, −8, 1, 13, 20, 27, 41, 48, 55, 62, 69], with the orders of the back-converting spiral phases labeled. In each inset, the orange dashed circle indicates the sampling area, whose enclosed intensity is taken as that back-converted OAM channel. The central intensity is nonzero for the back-converted patterns corresponding to l = 1 and l = −8, while the others are almost zero, indicating that the measured OAM comb contains l = 1 and l = −8.
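The back-conversion readout can be illustrated on a sampled azimuthal field: multiplying by the anti-spiral phase shifts the l-th component to zero order, whose (DC) power stands in for the central bright spot measured on the CCD. This is a simplified one-dimensional sketch, ignoring radial structure and lens focusing; the function name and grid size are illustrative.

```python
import numpy as np

def back_convert_spectrum(field, phi, l_range):
    """Estimate the OAM spectrum of a sampled azimuthal field by
    back-conversion: superposing the anti-spiral phase exp(-i*l*phi)
    maps the l-th component to order l - l = 0, i.e. to the on-axis
    term, whose power is read out as that channel's intensity."""
    powers = []
    for l in l_range:
        dc = (field * np.exp(-1j * l * phi)).mean()   # "central bright spot"
        powers.append(abs(dc) ** 2)
    p = np.array(powers)
    return p / p.sum()                                # normalized spectrum

phi = np.linspace(0, 2 * np.pi, 1024, endpoint=False)
field = np.exp(1j * 1 * phi) + np.exp(1j * 8 * phi)   # comb with l = 1 and 8
spec = back_convert_spectrum(field, phi, range(-10, 11))
```

For this two-mode example the recovered spectrum places half the power in the l = 1 channel and half in l = 8, with all other channels empty, mirroring the bright-spot readout described above.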


    Figure 5.Experimentally captured back-converted patterns for the generated OAM combs. The orders of the back-converting spiral phases are labeled at the top left corner of each inset. The orange dashed circle represents the sampling area, where intensities inside it are regarded as the back-converted OAM channel.


    Figure 6.Results of the spectrum measurement before and after calibration. The target OAM comb, and the experimental OAM spectrum before and after refining by the calibration curve, are represented by blue, red, and green bars, respectively. The calibration curve, measured using a standard spiral phase pair, is shown in yellow.

    APPENDIX C: EXTENDED EXPERIMENTAL RESULTS

    Our proposal for intelligent tailoring of OAM combs supports OAM comb settings with modes ranging from −75 to +75, varying mode intervals, and mode numbers from 2 to 40. Extended experimental results are shown in Fig. 7, which evaluates the quality of OAM combs with mode numbers of 5, 10, 15, 20, 25, 30, 35, and 40.

    Figure 7.Extended experimental results of various OAM mode numbers. (a) 5 modes (RMSE = 0.0041, fidelity = 91.33%), (b) 10 modes (RMSE = 0.0033, fidelity = 93.44%), (c) 15 modes (RMSE = 0.0034, fidelity = 84.80%), (d) 20 modes (RMSE = 0.0025, fidelity = 84.43%), (e) 25 modes (RMSE = 0.0030, fidelity = 81.79%), (f) 30 modes (RMSE = 0.0038, fidelity = 81.60%), (g) 35 modes (RMSE = 0.0058, fidelity = 81.19%), and (h) 40 modes (RMSE = 0.0037, fidelity = 81.53%).

    Figure 8.Variation of RMSE and fidelity with increasing number of modes in an OAM comb. The blue line represents the variation of RMSE with the increasing number of OAM modes, where a lower RMSE indicates better performance. The orange line illustrates the variation of fidelity with the increasing number of OAM modes, where higher fidelity reflects better accuracy.
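The two quality metrics used throughout (RMSE and fidelity) can be computed directly from the target and measured OAM power spectra. A minimal sketch, assuming the common Bhattacharyya-overlap definition of spectral fidelity on normalized spectra; the paper's exact definition may differ:

```python
import numpy as np

def rmse(target, measured):
    """Root mean square error between two OAM power spectra."""
    target = np.asarray(target, dtype=float)
    measured = np.asarray(measured, dtype=float)
    return float(np.sqrt(np.mean((target - measured)**2)))

def fidelity(target, measured):
    """Overlap of two normalized power spectra, F = (sum_l sqrt(P_t*P_m))^2.
    This is an assumed (common) definition, not necessarily the paper's."""
    t = np.asarray(target, dtype=float)
    m = np.asarray(measured, dtype=float)
    t, m = t / t.sum(), m / m.sum()
    return float(np.sum(np.sqrt(t * m))**2)
```

Identical spectra give RMSE = 0 and fidelity = 1; fully disjoint spectra give fidelity = 0.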

    APPENDIX D: DERIVATION OF OPTICAL CONVOLUTION FOR OAM COMBS

    Consider two OAM combs in the spatial domain, $f(\varphi)$ and $g(\varphi)$, which can be expanded into helical harmonics as
    $$f(\varphi)=\sum_{l_1=-\infty}^{\infty}F(l_1)\exp(il_1\varphi),\tag{D1}$$
    $$g(\varphi)=\sum_{l_2=-\infty}^{\infty}G(l_2)\exp(il_2\varphi),\tag{D2}$$
    where $F(l_1)$ and $G(l_2)$ are the harmonic coefficients, representing the OAM spectra of the OAM combs $f(\varphi)$ and $g(\varphi)$. The product of these optical fields, $u(\varphi)=f(\varphi)\cdot g(\varphi)$, can be expressed as the sum of the products of multiple spiral harmonic functions:
    $$u(\varphi)=\left(\sum_{l_1=-\infty}^{\infty}F(l_1)\exp(il_1\varphi)\right)\cdot\left(\sum_{l_2=-\infty}^{\infty}G(l_2)\exp(il_2\varphi)\right)=\sum_{l_1=-\infty}^{\infty}\sum_{l_2=-\infty}^{\infty}F(l_1)G(l_2)\exp(il_1\varphi)\exp(il_2\varphi).\tag{D3}$$

    Due to the orthogonality of the helical harmonic functions, the product of two helical harmonic functions can be represented as a linear combination of other harmonics:
    $$\exp(il_1\varphi)\cdot\exp(il_2\varphi)=\sum_{l=-\infty}^{\infty}C\exp(il\varphi),\tag{D4}$$
    where the coefficient $C$ is nonzero only for $l=l_1+l_2$, i.e., $C=\delta_{l,\,l_1+l_2}$.

    Substituting Eq. (D4) into Eq. (D3), we obtain
    $$u(\varphi)=\sum_{l=-\infty}^{\infty}\left(\sum_{l_1=-\infty}^{\infty}\sum_{l_2=-\infty}^{\infty}F(l_1)G(l_2)\,C\right)\exp(il\varphi)=\sum_{l=-\infty}^{\infty}U(l)\exp(il\varphi).\tag{D5}$$

    Equation (D5) shows that the helical harmonic coefficients $U(l)$ of the product field $u(\varphi)$ are the convolution of $F(l_1)$ and $G(l_2)$, i.e., $U(l)=\sum_{l_1}F(l_1)G(l-l_1)$. Thus the convolution of two OAM spectra equals the OAM spectrum of the product field:
    $$F(l)*G(l)=\mathcal{H}\{f(\varphi)\cdot g(\varphi)\},\tag{D6}$$
    where $\mathcal{H}$ represents the helical harmonic transformation.
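The identity in Eq. (D6) can be verified numerically: the azimuthal Fourier (helical harmonic) spectrum of the product field equals the discrete convolution of the two input spectra. A minimal sketch with two small, hypothetical combs (the weights below are illustrative, not from the experiment):

```python
import numpy as np

def oam_spectrum(field, l_values, phi):
    """Helical harmonic transform: project the field onto exp(i*l*phi)."""
    return np.array([np.mean(field * np.exp(-1j * l * phi)) for l in l_values])

phi = np.linspace(0, 2 * np.pi, 1024, endpoint=False)

# Two OAM combs with known (hypothetical) spectra F and G
F = {0: 1.0, 2: 0.5}   # f carries l = 0 and l = 2
G = {1: 0.8, 3: 0.2}   # g carries l = 1 and l = 3
f = sum(c * np.exp(1j * l * phi) for l, c in F.items())
g = sum(c * np.exp(1j * l * phi) for l, c in G.items())

# Spectrum of the product field u = f * g ...
l_axis = np.arange(-5, 10)
U = oam_spectrum(f * g, l_axis, phi)
# ... equals the convolution of F and G:
# U(1) = 1.0*0.8, U(3) = 1.0*0.2 + 0.5*0.8, U(5) = 0.5*0.2
```

This is exactly the mechanism exploited in the optical convolution experiment: multiplying two combs in the spatial domain and measuring the OAM spectrum of the product yields the convolution of their spectra.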

