• Acta Optica Sinica
  • Vol. 38, Issue 7, 0710004 (2018)
Zhe An*, Xiping Xu, Jinhua Yang, Yang Qiao, and Yang Liu
Author Affiliations
  • School of Optoelectronic Engineering, Changchun University of Science and Technology, Changchun, Jilin 130022, China
    DOI: 10.3788/AOS201838.0710004
    Zhe An, Xiping Xu, Jinhua Yang, Yang Qiao, Yang Liu. Design of Augmented Reality Head-up Display System Based on Image Semantic Segmentation[J]. Acta Optica Sinica, 2018, 38(7): 0710004

    Abstract

    To improve driving safety, an augmented reality head-up display (AR-HUD) system is designed based on image semantic segmentation. First, we propose an improved single shot multibox detector (SSD) network for semantic segmentation of road scene images. The front end of the network uses VGG-16 to extract image features, and the back end upsamples the feature maps so that the image can be segmented. After training, the network produces pixel-level classification results for scene objects, i.e., the semantic content of the environment. Then, by analyzing the relationship among the real scene, the optical display system, and the driver, computer-generated virtual information is superimposed on the real scene, registering this content into the driver's view to improve driving safety. Experimental results show that the semantic segmentation algorithm reaches an accuracy of 77.8% and processes each frame in 45 ms, i.e., about 22 frame·s⁻¹.
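    The abstract describes an encoder-decoder segmentation architecture: a VGG-16 front end for feature extraction and a back end that upsamples the feature maps to per-pixel class scores. The PyTorch sketch below illustrates that general structure only; it is not the authors' improved SSD network, and the class count, layer choices, and input size are illustrative assumptions.

    # Minimal sketch (assumptions noted above), not the paper's exact network:
    # VGG-16 convolutional encoder + 1x1 classifier + bilinear upsampling head.
    import torch
    import torch.nn as nn
    from torchvision.models import vgg16

    class SegmentationSketch(nn.Module):
        def __init__(self, num_classes=21):  # 21 classes is an assumption
            super().__init__()
            # Front end: VGG-16 convolutional layers extract image features
            # (output stride 32, 512 channels for the standard feature extractor).
            self.encoder = vgg16(weights=None).features
            # Back end: 1x1 convolution maps features to per-class scores.
            self.classifier = nn.Conv2d(512, num_classes, kernel_size=1)

        def forward(self, x):
            h, w = x.shape[-2:]
            feats = self.encoder(x)              # B x 512 x H/32 x W/32
            scores = self.classifier(feats)      # B x C x H/32 x W/32
            # Upsample scores back to the input resolution to obtain a
            # pixel-level semantic map of the road scene.
            return nn.functional.interpolate(
                scores, size=(h, w), mode="bilinear", align_corners=False)

    # Usage: per-pixel labels for one RGB frame (frame size is illustrative).
    if __name__ == "__main__":
        model = SegmentationSketch(num_classes=21).eval()
        frame = torch.randn(1, 3, 300, 300)
        with torch.no_grad():
            labels = model(frame).argmax(dim=1)  # 1 x 300 x 300 class map
        print(labels.shape)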