• Opto-Electronic Engineering
  • Vol. 43, Issue 12, 154 (2016)
WANG Xiaohua1,*, HOU Dengyong1, HU Min1, REN Fuji1,2, and WANG Jiayong1
Author Affiliations
  • 1[in Chinese]
  • 2[in Chinese]
    DOI: 10.3969/j.issn.1003-501x.2016.12.024
    WANG Xiaohua, HOU Dengyong, HU Min, REN Fuji, WANG Jiayong. Dual-modality Emotion Recognition Model Based on Temporal-spatial LBP Moment and Dempster-Shafer Evidence Fusion[J]. Opto-Electronic Engineering, 2016, 43(12): 154

    Abstract

    To overcome the high computational complexity of video emotion recognition, we propose a novel Temporal-Spatial Local Binary Pattern Moment (TSLBPM) method for feature extraction in dual-modality emotion recognition. Firstly, preprocessing is applied to obtain the facial expression and posture sequences. Secondly, TSLBPM is used to extract features from the facial expression and posture sequences. The minimum Euclidean distances between the features of the testing sequences and those of the labeled emotion training sets are selected and used as independent evidence to build the Basic Probability Assignment (BPA). Finally, according to the combination rule of Dempster-Shafer evidence theory, the expression recognition result is obtained from the fused BPA. Experimental results on the FABO expression and posture dual-modality emotion database show that the TSLBPM features of video images can be extracted quickly and that the emotional state in video can be effectively identified. Moreover, comparisons with other methods verify the superiority of the fusion approach.
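The final fusion step combines the evidence from the expression and posture channels with Dempster's rule of combination. The following is a minimal sketch of that rule (not the authors' code); the function name, emotion labels, and mass values are illustrative assumptions, with BPAs represented as mappings from hypothesis sets to mass.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic probability assignments (BPAs) with Dempster's
    rule: multiply masses of intersecting hypotheses and renormalize by
    the non-conflicting mass (1 - K)."""
    combined = {}
    conflict = 0.0  # K: total mass assigned to the empty intersection
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mb * mc
        else:
            conflict += mb * mc
    if conflict >= 1.0:
        raise ValueError("Total conflict: BPAs cannot be combined")
    norm = 1.0 - conflict
    return {h: m / norm for h, m in combined.items()}

# Hypothetical example: fuse expression-channel and posture-channel
# evidence over two emotion classes (labels are illustrative only).
happy, angry = frozenset({"happy"}), frozenset({"angry"})
theta = happy | angry  # frame of discernment (residual uncertainty)
m_face = {happy: 0.6, angry: 0.2, theta: 0.2}
m_pose = {happy: 0.5, angry: 0.3, theta: 0.2}
fused = dempster_combine(m_face, m_pose)
decision = max(fused, key=fused.get)  # pick the highest-mass hypothesis
```

Here both channels favor "happy", so fusion reinforces that hypothesis while the conflicting mass (face says happy, pose says angry, and vice versa) is discarded and the remainder renormalized.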