• Opto-Electronic Engineering
  • Vol. 45, Issue 6, 170742 (2018)
Chen Tianshi*, Tie Yun, Qi Lin, and Chen Enqing
Author Affiliations
  • [in Chinese]
DOI: 10.12086/oee.2018.170742
Chen Tianshi, Tie Yun, Qi Lin, Chen Enqing. An improved method to render the sound of VR scene[J]. Opto-Electronic Engineering, 2018, 45(6): 170742

    Abstract

In virtual scenes containing hundreds of movable sound sources, the high computational cost of the clustering stage means that traditional spatial sound rendering schemes consume excessive computing resources, which has become a bottleneck in the development of VR audio rendering technology. In this paper, we use the fractional Fourier transform (FRFT) as a tool in sound sampling to reduce quantization noise during the ADC conversion stage. Moreover, we improve the processing speed of sound rendering and the operating efficiency of the entire system by adding an average angular deviation threshold to the clustering step. In addition, we design and implement a perceptual user experiment, which validates the notion that people are more susceptible to spatial errors for certain types of sound sources, especially when the source is visible. Based on this conclusion, this paper proposes an improved sound clustering method that reduces the likelihood of clustering different types of sound sources together.
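The angular-deviation-threshold clustering described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the greedy assignment strategy, the 3D direction vectors relative to the listener, and the function and parameter names (`cluster_sources`, `max_avg_dev`) are all hypothetical, chosen only to show how an average angular deviation bound can gate cluster growth.

```python
import math

def angle_between(u, v):
    # Angle in radians between two 3D direction vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))

def cluster_sources(directions, max_avg_dev):
    """Greedily cluster sound-source directions (unit-ish 3D vectors from
    the listener). A source joins an existing cluster only if the cluster's
    average angular deviation from its mean direction would stay below
    max_avg_dev (radians); otherwise it seeds a new cluster."""
    clusters = []  # each cluster is a list of direction vectors
    for d in directions:
        placed = False
        for c in clusters:
            candidate = c + [d]
            # Mean (unnormalized) direction of the candidate cluster.
            mean = [sum(v[i] for v in candidate) / len(candidate)
                    for i in range(3)]
            avg_dev = (sum(angle_between(v, mean) for v in candidate)
                       / len(candidate))
            if avg_dev <= max_avg_dev:
                c.append(d)
                placed = True
                break
        if not placed:
            clusters.append([d])
    return clusters
```

For example, two sources about 6° apart merge under a 0.2 rad (~11.5°) threshold, while a source 90° away seeds its own cluster; a real renderer would then spatialize one representative per cluster instead of every source.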