• Photonics Research
  • Vol. 9, Issue 7, B253 (2021)
Baurzhan Muminov, Altai Perry, Rakib Hyder, M. Salman Asif, and Luat T. Vuong*
Author Affiliations
  • University of California Riverside, Riverside, California 92521, USA
    DOI: 10.1364/PRJ.416614
    Baurzhan Muminov, Altai Perry, Rakib Hyder, M. Salman Asif, Luat T. Vuong. Toward simple, generalizable neural networks with universal training for low-SWaP hybrid vision[J]. Photonics Research, 2021, 9(7): B253

    Abstract

    Speed, generalizability, and robustness are fundamental issues for building lightweight computational cameras. Here we demonstrate generalizable image reconstruction with the simplest of hybrid machine vision systems: linear optical preprocessors combined with no-hidden-layer, “small-brain” neural networks. Surprisingly, such simple neural networks are capable of learning the image reconstruction from a range of coded diffraction patterns using two masks. We investigate the possibility of generalized or “universal training” with these small brains. Neural networks trained with sinusoidal or random patterns uniformly distribute errors around a reconstructed image, whereas models trained with a combination of sharp and curved shapes (the phase pattern of optical vortices) reconstruct edges more boldly. We illustrate the variable convergence of these simple neural networks and relate the learnability of an image to its singular value decomposition (SVD) entropy. We also provide heuristic experimental results. With thresholding, we achieve robust reconstruction of various disjoint datasets. Our work is favorable for future real-time, low size, weight, and power (SWaP) hybrid vision: we reconstruct images on a 15 W laptop CPU at 15,000 frames per second, a factor of 3 faster than previously reported results and 3 orders of magnitude faster than convolutional neural networks.
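
    As a rough, non-authoritative sketch of the “small-brain” architecture described above (not the authors’ code), a no-hidden-layer network reduces to a single linear map W from the flattened diffraction-pattern intensities to the flattened image; one simple way to fit such a map is regularized least squares. The array shapes, the ridge parameter, and the random stand-in data below are illustrative assumptions, not values from the paper.

    # Minimal sketch of a no-hidden-layer ("small-brain") reconstructor:
    # a single linear map W from flattened diffraction intensities Y to images X.
    import numpy as np

    def fit_small_brain(Y, X, ridge=1e-3):
        """Y: (num_samples, num_measured_pixels), X: (num_samples, num_image_pixels)."""
        # Closed-form ridge regression: W = (Y^T Y + lambda*I)^-1 Y^T X
        A = Y.T @ Y + ridge * np.eye(Y.shape[1])
        return np.linalg.solve(A, Y.T @ X)      # (num_measured_pixels, num_image_pixels)

    def reconstruct(W, y):
        return y @ W                            # one matrix-vector product per frame

    # Example with random stand-in data (real data would be coded diffraction patterns).
    rng = np.random.default_rng(0)
    Y_train = rng.random((1000, 32 * 32))
    X_train = rng.random((1000, 32 * 32))
    W = fit_small_brain(Y_train, X_train)
    x_hat = reconstruct(W, Y_train[0])

    Because inference is a single matrix-vector product per frame, a model of this size is what makes kilohertz-rate reconstruction on a laptop CPU plausible.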

    Equations

    $F(u,v)=\mathcal{F}[M(x,y)F(x,y)]$.

    $F(x,y)M(x,y)=e^{i\alpha X}G(x,y)M(x,y)$,

    $Y=H(X)+N$,

    $Y=\left|\mathcal{F}\!\left[e^{i\alpha X}G(x,y)M(x,y)\right]\right|^{2}+N$,

    $M(x,y)=e^{-(x^{2}+y^{2})\left(\frac{i}{f\lambda}+\frac{1}{w^{2}}\right)}e^{im\phi}$,
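
    A minimal numerical sketch of the forward model and mask defined above, assuming a square sampling grid and illustrative values for α, f, λ, w, and m: the Gaussian-apertured vortex-lens mask modulates the phase object, and the detector records the squared magnitude of the far-field (Fourier) pattern plus additive noise.

    # Illustrative simulation of the coded-diffraction forward model
    # Y = |FFT[exp(i*alpha*X) * G(x,y) * M(x,y)]|^2 + N  (grid and parameters assumed).
    import numpy as np

    N_PIX = 128
    x = np.linspace(-1, 1, N_PIX)
    X_, Y_ = np.meshgrid(x, x)
    r2 = X_**2 + Y_**2
    phi = np.arctan2(Y_, X_)

    def vortex_lens_mask(f=0.5, lam=0.5e-3, w=0.7, m=1):
        # M(x,y) = exp[-(x^2+y^2)(i/(f*lam) + 1/w^2)] * exp(i*m*phi)
        return np.exp(-r2 * (1j / (f * lam) + 1.0 / w**2)) * np.exp(1j * m * phi)

    def coded_diffraction(X_img, M, alpha=np.pi, w_illum=0.8, noise=1e-3):
        G = np.exp(-r2 / w_illum**2)                        # Gaussian illumination
        field = np.exp(1j * alpha * X_img) * G * M          # phase object behind the mask
        Y = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2  # far-field intensity
        return Y + noise * np.random.randn(*Y.shape)        # additive detector noise

    X_img = np.zeros((N_PIX, N_PIX)); X_img[40:90, 40:90] = 1.0   # toy phase object
    Y_meas = coded_diffraction(X_img, vortex_lens_mask(m=1))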

    $X_{F(s_j,s_k,\phi_l,n)}(x,y)=\left[e^{i(xs_j+ys_k+\phi_l)}\right]G_n$,

    $G_n(x,y)=e^{-[(x-x_n)^{2}+(y-y_n)^{2}]/w_n^{2}}$,
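
    One plausible reading of the sinusoidal training patterns defined by the two equations above is sketched below; the spatial-frequency grid, phase offsets, and window width are assumptions, and the real part of the complex exponential is taken to produce a real-valued pattern.

    # Sketch of the sinusoidal ("Fourier") universal-training patterns: plane-wave
    # phase ramps with offsets phi_l under a Gaussian window G_n (parameters assumed).
    import numpy as np

    N_PIX = 128
    x = np.linspace(-1, 1, N_PIX)
    X_, Y_ = np.meshgrid(x, x)

    def gaussian_window(xn=0.0, yn=0.0, wn=0.6):
        # G_n(x,y) = exp(-[(x-x_n)^2 + (y-y_n)^2] / w_n^2)
        return np.exp(-((X_ - xn)**2 + (Y_ - yn)**2) / wn**2)

    def sinusoidal_pattern(sj, sk, phil, window):
        # X_F = Re{exp[i(x*s_j + y*s_k + phi_l)]} * G_n   (real part is an assumption)
        return np.real(np.exp(1j * (X_ * sj + Y_ * sk + phil))) * window

    patterns = [sinusoidal_pattern(sj, sk, phil, gaussian_window())
                for sj in np.arange(0, 40, 8)
                for sk in np.arange(0, 40, 8)
                for phil in (0, np.pi / 2)]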

    $X_{V(x_j,y_k,\phi_j,n,l)}(x,y)=e^{i\{m_l\arctan[(y-y_k)/(x-x_j)]+\phi_k\}}G_{j,k,n}$.

    $G_{j,k,n}(x,y)=e^{-[(x-x_j)^{2}+(y-y_k)^{2}]/w_n^{2}}$.
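
    Under the same assumptions, the vortex training patterns defined by the two equations above can be sketched as spiral phase profiles of charge m_l centered at (x_j, y_k) under a Gaussian window; the grids of centers, charges, offsets, and widths below are illustrative, and taking the real part of the complex exponential is again only one plausible reading.

    # Sketch of the vortex-phase training patterns (parameters assumed, not from the paper).
    import numpy as np

    N_PIX = 128
    x = np.linspace(-1, 1, N_PIX)
    X_, Y_ = np.meshgrid(x, x)

    def vortex_pattern(xj, yk, ml, phik, wn=0.5):
        G = np.exp(-((X_ - xj)**2 + (Y_ - yk)**2) / wn**2)     # Gaussian window G_{j,k,n}
        spiral = ml * np.arctan2(Y_ - yk, X_ - xj) + phik       # m_l*arctan(...) + phi_k
        return np.real(np.exp(1j * spiral)) * G                 # real-valued pattern (assumption)

    patterns = [vortex_pattern(xj, yk, ml, 0.0)
                for xj in (-0.5, 0.0, 0.5)
                for yk in (-0.5, 0.0, 0.5)
                for ml in (1, 2, 3)]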

    $E_{\mathrm{SVD}}=-\sum_{i=1}^{K}\bar{\sigma}_i\log_2(\bar{\sigma}_i)$,

    $\bar{\sigma}_i=\sigma_i\Big/\sum_{i=1}^{K}\sigma_i \quad\text{and}\quad \sum_i\bar{\sigma}_i=1$,
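
    The two equations above amount to a Shannon-style entropy over the normalized singular values of an image; a minimal, non-authoritative sketch (toy image, numpy SVD) is:

    # SVD entropy of an image: singular values are normalized to sum to one and
    # fed into a base-2 entropy. Lower entropy means a few dominant singular values.
    import numpy as np

    def svd_entropy(image):
        sigma = np.linalg.svd(image, compute_uv=False)    # singular values sigma_i
        sigma_bar = sigma / sigma.sum()                   # normalized so they sum to 1
        sigma_bar = sigma_bar[sigma_bar > 0]              # avoid log2(0)
        return -np.sum(sigma_bar * np.log2(sigma_bar))    # E_SVD

    img = np.outer(np.hanning(64), np.hanning(64))        # toy low-rank image (assumption)
    print(svd_entropy(img))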
