• Photonics Research
  • Vol. 9, Issue 4, B128 (2021)
Albert Ryou1,*, James Whitehead1, Maksym Zhelyeznyakov1, Paul Anderson2,3, Cem Keskin4, Michal Bajcsy3,5, and Arka Majumdar1,6
Author Affiliations
  • 1Department of Electrical and Computer Engineering, University of Washington, Seattle, Washington 98195, USA
  • 2Department of Physics and Astronomy, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada
  • 3Institute for Quantum Computing, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada
  • 4Google, Mountain View, California 94043, USA
  • 5Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada
  • 6Department of Physics, University of Washington, Seattle, Washington 98195, USA
    DOI: 10.1364/PRJ.415964
    Albert Ryou, James Whitehead, Maksym Zhelyeznyakov, Paul Anderson, Cem Keskin, Michal Bajcsy, Arka Majumdar. Free-space optical neural network based on thermal atomic nonlinearity[J]. Photonics Research, 2021, 9(4): B128
    Fig. 1. Trained optical neural network (ONN). (a) The detector layer determines the location where the light from the individual digits should be focused. The layout of the layer is a hyperparameter in our training. Here, each label corresponds to one bright circle (radius = 100 μm) located 1 mm from the center of the image. The “0” label is on the positive x axis, and the rest of the labels are located sequentially counterclockwise on a circle. (b) Trained phase mask; (c) sample input image; (d) output of the neural network for the sample input shown in (c). For training, the neural network calculates the intensity at each label location and returns the highest-intensity label as its prediction. All images have dimensions of 600×600 pixels, which correspond to 4.8×4.8 mm.
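    As a rough illustration of the readout described in Fig. 1, the following sketch (Python/NumPy; the function names and orientation convention are ours, not the authors' code) lays out ten detector circles from the quoted geometry (600×600 pixels spanning 4.8×4.8 mm, i.e., 8 μm per pixel; ring radius 1 mm; circle radius 100 μm) and returns the label whose circle collects the most intensity.

        import numpy as np

        # Geometry from the Fig. 1 caption: 600 x 600 px spanning 4.8 x 4.8 mm (8 um/px);
        # ten detector circles (radius 100 um) on a ring 1 mm from the image center.
        N_PX = 600
        UM_PER_PX = 4800.0 / N_PX              # 8 um per pixel
        RING_RADIUS_PX = 1000.0 / UM_PER_PX    # 1 mm -> 125 px
        CIRCLE_RADIUS_PX = 100.0 / UM_PER_PX   # 100 um -> 12.5 px
        N_LABELS = 10

        def label_masks():
            """Boolean masks for the ten detector circles; label 0 sits on the +x axis,
            and the remaining labels are spaced at equal angles around the ring."""
            yy, xx = np.mgrid[:N_PX, :N_PX]
            cx = cy = (N_PX - 1) / 2.0
            masks = []
            for k in range(N_LABELS):
                theta = 2.0 * np.pi * k / N_LABELS
                lx = cx + RING_RADIUS_PX * np.cos(theta)
                ly = cy + RING_RADIUS_PX * np.sin(theta)
                masks.append((xx - lx) ** 2 + (yy - ly) ** 2 <= CIRCLE_RADIUS_PX ** 2)
            return np.stack(masks)

        def predict(output_intensity, masks=label_masks()):
            """Return the label whose detector circle collects the most intensity."""
            scores = np.array([output_intensity[m].sum() for m in masks])
            return int(np.argmax(scores))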
    Fig. 2. Accuracy versus epoch for the linear model (blue dot) and the nonlinear model (red cross).
    Fig. 3. Experimental setup. (a) Cartoon layout of the setup. The focal lengths of the lenses are: L1, 50 mm; L2, 300 mm; L3, 150 mm; L4, 150 mm; L5, 100 mm. M indicates a flat mirror. (b) Photograph of the experiment.
    Fig. 4. Nonlinear function showing the input–output curve for the incident intensity. The x axis is proportional to the input power, or the average pixel value on the CCD camera without the vapor cell. The y axis is proportional to the output power, or the average pixel value on the CCD camera with the vapor cell in place. Inset: zoomed-in plot showing the curve fit.
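    The caption does not state the functional form used for the curve fit in Fig. 4; the sketch below assumes a saturable-absorber transmission model purely for illustration and fits it with SciPy. The arrays p_in and p_out are synthetic placeholders standing in for the measured average pixel values recorded without and with the vapor cell.

        import numpy as np
        from scipy.optimize import curve_fit

        def saturable_transmission(p_in, od0, p_sat, scale):
            """Assumed model: transmission through the vapor rises as the absorption saturates."""
            return scale * p_in * np.exp(-od0 / (1.0 + p_in / p_sat))

        # Placeholder sweep; replace with the measured average CCD pixel values
        # recorded without (p_in) and with (p_out) the vapor cell in place.
        p_in = np.linspace(1.0, 255.0, 50)
        p_out = saturable_transmission(p_in, od0=2.0, p_sat=80.0, scale=1.0)

        popt, _ = curve_fit(saturable_transmission, p_in, p_out, p0=(1.0, 50.0, 1.0))
        print("fitted od0, p_sat, scale:", popt)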
                                                 Linear Network    Nonlinear Network
    Simulation with ideal parameters                  74.2               84.2
    Simulation with experimental parameters           66.4               66.6
    Experiment without phase mask                      14.7               14.2
    Experiment with phase mask                         26.7               33.0
    Table 1. Summary of ONN Accuracy in Percentage