Opto-Electronic Advances, Vol. 6, Issue 5, 220135 (2023)
Kexuan Liu, Jiachen Wu, Zehao He, and Liangcai Cao*
Author Affiliations
  • State Key Laboratory of Precision Measurement Technology and Instruments, Department of Precision Instruments, Tsinghua University, Beijing 100084, China
DOI: 10.29026/oea.2023.220135
Citation: Kexuan Liu, Jiachen Wu, Zehao He, Liangcai Cao. 4K-DMDNet: diffraction model-driven network for 4K computer-generated holography[J]. Opto-Electronic Advances, 2023, 6(5): 220135