• Laser & Optoelectronics Progress
  • Vol. 61, Issue 10, 1011007 (2024)
Zixiong Peng1, Zhenping Xia1,2,*, Yueyuan Zhang1, Chaochao Li2, and Yuanshen Zhang1
Author Affiliations
  • 1College of Electronics and Information Engineering, Suzhou University of Science and Technology, Suzhou 215009, Jiangsu, China
  • 2College of Physical Science and Technology, Suzhou University of Science and Technology, Suzhou 215009, Jiangsu, China
    DOI: 10.3788/LOP232351
    Zixiong Peng, Zhenping Xia, Yueyuan Zhang, Chaochao Li, Yuanshen Zhang. Quantitative Model for Dynamic Spatial Distortion in Virtual Environments[J]. Laser & Optoelectronics Progress, 2024, 61(10): 1011007

    Abstract

    Three-dimensional (3D) imaging technology is widely used in augmented, virtual, and mixed reality. Dynamic spatial distortion of the virtual space is an important factor affecting visual comfort. To quantify this distortion accurately, this study analyzes the processes of 3D image acquisition, display, and human-eye perception, and simulates the different spatial distortions that may occur at each stage. The point-cloud data of an object in the virtual space before and after distortion are compared using a divide-then-aggregate strategy, establishing a quantitative model of static geometric distortion. Combining this static model with the object's motion attributes yields a quantification model for dynamic geometric distortion. The effectiveness of the proposed method is verified by simulating 10 degrees of geometric distortion on six groups of point clouds and comparing the subjective-objective consistency of the proposed and classical methods through subjective evaluation experiments. The results show that the proposed method achieves the best performance in quantifying virtual-space geometric distortion, with a Pearson's linear correlation coefficient of 0.93, accurately reflecting the geometric distortion perceived by the test subjects. This work provides a theoretical reference for geometric distortion optimization and visual comfort improvement in 3D displays.
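The abstract's divide-then-aggregate comparison and PLCC-based validation can be illustrated with a minimal sketch. This is not the paper's actual model: the block partition, per-block error metric, and aggregation below are simplified assumptions (the paper's specific division scheme is not given in the abstract), and the clouds are assumed to be in point-to-point correspondence.

```python
import numpy as np

def blockwise_distortion(ref, dist, n_blocks=8):
    """Hypothetical divide-then-aggregate comparison: partition the
    reference point cloud (N x 3) into spatial blocks, compute the mean
    point-to-point displacement per block against the distorted cloud,
    then aggregate by averaging over blocks."""
    # Simplified partition: slice into equal bins along the longest axis.
    axis = int(np.argmax(ref.max(axis=0) - ref.min(axis=0)))
    edges = np.linspace(ref[:, axis].min(), ref[:, axis].max(), n_blocks + 1)
    idx = np.clip(np.searchsorted(edges, ref[:, axis], side="right") - 1,
                  0, n_blocks - 1)
    # Mean Euclidean displacement within each non-empty block.
    block_err = [np.linalg.norm(ref[idx == b] - dist[idx == b], axis=1).mean()
                 for b in range(n_blocks) if np.any(idx == b)]
    return float(np.mean(block_err))

def plcc(objective, subjective):
    """Pearson's linear correlation coefficient between objective model
    scores and subjective (perceived) distortion ratings."""
    return float(np.corrcoef(np.asarray(objective, float),
                             np.asarray(subjective, float))[0, 1])
```

A PLCC near 1 (the paper reports 0.93) indicates that the objective distortion scores track the subjects' perceived distortion almost linearly.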