• Laser & Optoelectronics Progress
  • Vol. 59, Issue 18, 1810013 (2022)
Yuan Deng, Yiping Shi*, Jie Liu, Yueying Jiang, Yamei Zhu, and Jin Liu
Author Affiliations
  • School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai 201620, China
DOI: 10.3788/LOP202259.1810013
Yuan Deng, Yiping Shi, Jie Liu, Yueying Jiang, Yamei Zhu, Jin Liu. Multi-Angle Facial Expression Recognition Algorithm Combined with Dual-Channel WGAN-GP[J]. Laser & Optoelectronics Progress, 2022, 59(18): 1810013
Fig. 1. Overall structure of proposed model
Fig. 2. Dual-channel generator network structure
Fig. 3. Discriminator network structure
Fig. 4. Generation effect of frontal expression images on KDEF dataset
Fig. 5. Generation effect of frontal expression images on Multi-PIE dataset
Fig. 6. Confusion matrix of expression recognition under different angles on KDEF dataset. (a) 0° rotation; (b) 45° rotation; (c) 90° rotation
Fig. 7. Partial error samples on KDEF dataset
Fig. 8. Confusion matrix of expression recognition under different angles on Multi-PIE dataset. (a) 0° rotation; (b) 30° rotation; (c) 60° rotation; (d) 90° rotation
Fig. 9. Partial error samples on Multi-PIE dataset
Fig. 10. Comparison of network loss function changes. (a) TP-GAN discriminant loss; (b) TP-GAN adversarial loss; (c) proposed model discriminant loss; (d) proposed model adversarial loss
Fig. 11. Comparison of facial expression images generation effect after model ablation. (a) Profile; (b) without local path; (c) without Wasserstein; (d) without GP; (e) proposed method; (f) real frontal
Fig. 12. Comparison of facial expression recognition rate under different ablation results
Fig. 13. Comparison of frontalization generation effect. (a) Real profile; (b) proposed method; (c) TP-GAN[11]; (d) CAPG-GAN[9]; (e) FI-GAN[23]; (f) MTDNN[24]; (g) 3DMM[25]; (h) real frontal
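The ablations in Fig. 11 ("without Wasserstein", "without GP") refer to the WGAN-GP objective named in the title: the discriminator is trained with a Wasserstein critic loss plus a gradient penalty (GP) evaluated on interpolations between real and generated images. The sketch below is a minimal PyTorch illustration of such a critic loss, for orientation only; it is not the authors' code, and the tensor names and the penalty weight lambda_gp = 10 are assumptions.

```python
import torch

def gradient_penalty(critic, real, fake, device="cpu"):
    """WGAN-GP term: penalize deviations of the critic's gradient norm
    from 1 at random interpolations between real and generated samples."""
    batch_size = real.size(0)
    # Random mixing coefficients, broadcast over channel and spatial dims
    eps = torch.rand(batch_size, 1, 1, 1, device=device)
    interpolated = (eps * real + (1.0 - eps) * fake).detach().requires_grad_(True)

    scores = critic(interpolated)
    grads = torch.autograd.grad(
        outputs=scores,
        inputs=interpolated,
        grad_outputs=torch.ones_like(scores),
        create_graph=True,
        retain_graph=True,
    )[0]
    grads = grads.view(batch_size, -1)
    return ((grads.norm(2, dim=1) - 1.0) ** 2).mean()

def critic_loss(critic, real, fake, lambda_gp=10.0, device="cpu"):
    """Wasserstein critic loss with gradient penalty (lambda_gp assumed)."""
    # Critic should score real images higher than generated ones
    wasserstein = critic(fake.detach()).mean() - critic(real).mean()
    return wasserstein + lambda_gp * gradient_penalty(critic, real, fake.detach(), device)
```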
Input | Operator | Channel | SE | Stride | Repeat
128²×3 | Conv 3×3 | 16 |  | 2 |
64²×16 | Bottleneck 3×3 | 16 |  | 2 |
32²×16 | Bottleneck 3×3 | 24 |  | 2 |
16²×24 | Bottleneck 5×5 | 40 |  | 2 |
8²×40 | Bottleneck 5×5 | 40 |  | 1 | ×2
8²×40 | Bottleneck 5×5 | 48 |  | 1 | ×2
8²×48 | Bottleneck 5×5 | 96 |  | 2 |
4²×96 | Bottleneck 3×3 | 96 |  | 1 | ×2
4²×96 | Conv 1×1 | 256 |  | 1 |
4²×256 | Pooling 4×4 |  |  | 1 |
1²×256 | Conv 1×1 | 512 |  | 1 |
1²×512 | Conv 1×1 | 7 |  | 1 |
Table 1. Structure of improved MobileNetV3
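Table 1 lists the improved MobileNetV3 backbone used for expression classification, taking a 128×128×3 face image down to 7 expression logits. Below is a hedged PyTorch sketch that follows the layer sequence of Table 1; the expansion ratio, activation functions, and SE-block placement are not recoverable from the table, so the values used here are placeholders, not the authors' settings.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: channel-wise re-weighting (placement assumed)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Hardsigmoid(),
        )

    def forward(self, x):
        return x * self.fc(x)

class Bottleneck(nn.Module):
    """MobileNetV3-style inverted residual: 1x1 expand -> depthwise -> 1x1 project."""
    def __init__(self, in_ch, out_ch, kernel, stride, expand=4, use_se=True):
        super().__init__()
        hidden = in_ch * expand  # expansion ratio is an assumption
        self.use_residual = stride == 1 and in_ch == out_ch
        layers = [
            nn.Conv2d(in_ch, hidden, 1, bias=False),
            nn.BatchNorm2d(hidden),
            nn.Hardswish(inplace=True),
            nn.Conv2d(hidden, hidden, kernel, stride, kernel // 2,
                      groups=hidden, bias=False),  # depthwise convolution
            nn.BatchNorm2d(hidden),
            nn.Hardswish(inplace=True),
        ]
        if use_se:
            layers.append(SEBlock(hidden))
        layers += [nn.Conv2d(hidden, out_ch, 1, bias=False), nn.BatchNorm2d(out_ch)]
        self.block = nn.Sequential(*layers)

    def forward(self, x):
        out = self.block(x)
        return x + out if self.use_residual else out

class ImprovedMobileNetV3(nn.Module):
    """Illustrative reconstruction of Table 1: 128^2 x 3 input to 7 logits."""
    def __init__(self, num_classes=7):
        super().__init__()
        cfg = [  # (kernel, out_channels, stride, repeat), per Table 1
            (3, 16, 2, 1), (3, 24, 2, 1), (5, 40, 2, 1),
            (5, 40, 1, 2), (5, 48, 1, 2), (5, 96, 2, 1), (3, 96, 1, 2),
        ]
        layers = [nn.Conv2d(3, 16, 3, 2, 1, bias=False),
                  nn.BatchNorm2d(16), nn.Hardswish(inplace=True)]
        in_ch = 16
        for kernel, out_ch, stride, repeat in cfg:
            for i in range(repeat):
                layers.append(Bottleneck(in_ch, out_ch, kernel,
                                         stride if i == 0 else 1))
                in_ch = out_ch
        layers += [nn.Conv2d(in_ch, 256, 1, bias=False), nn.BatchNorm2d(256),
                   nn.Hardswish(inplace=True), nn.AvgPool2d(4),
                   nn.Conv2d(256, 512, 1), nn.Hardswish(inplace=True),
                   nn.Conv2d(512, num_classes, 1)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):  # x: (N, 3, 128, 128)
        return self.net(x).flatten(1)  # (N, num_classes)

# Usage: logits = ImprovedMobileNetV3()(torch.randn(1, 3, 128, 128))
```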
Angle /° | -90 | -45 | 0 | 45 | 90
Accuracy /% | 83.88 | 87.76 | 93.27 | 88.37 | 82.65
Table 2. Multi-angle facial expression recognition rate on KDEF dataset
Angle /° | -90 | -60 | -30 | 0 | 30 | 60 | 90
Accuracy /% | 81.83 | 86.00 | 88.67 | 92.17 | 89.17 | 85.50 | 81.33
Table 3. Multi-angle facial expression recognition rate on Multi-PIE dataset
Method | 30° | 60° | 90°
Without Lsym (k3 = 0) | 87.00 | 81.83 | 76.17
Without Lip (k4 = 0) | 82.67 | 73.33 | 62.50
Proposed method | 89.17 | 85.50 | 81.33
Table 4. Expression recognition rate under different loss functions on Multi-PIE dataset
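In Table 4, "Without Lsym" and "Without Lip" ablate the symmetry loss and the identity-preserving loss by setting their weights k3 and k4 to zero. As a hedged reading only, with the remaining terms written as TP-GAN-style placeholders (k1, k2, Lpixel, and Ladv are illustrative names, not taken from this page), the synthesis objective has the form of a weighted sum:

```latex
L_{\mathrm{syn}} = k_{1} L_{\mathrm{pixel}} + k_{2} L_{\mathrm{adv}} + k_{3} L_{\mathrm{sym}} + k_{4} L_{\mathrm{ip}}
```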
Method | Accuracy /% | Parameter size /MB
VGG-19 [20] | 82.15 | 70.45
ResNet-50 [21] | 84.94 | 23.60
Xception [22] | 90.55 | 20.88
MobileNetV1 [14] | 90.24 | 3.24
Proposed method | 92.17 | 0.95
Table 5. Recognition results of frontal expressions under different models
Method | -90° | -45° | 0° | 45° | 90°
Reference [5] | 74.50 | 72.00 | 70.50 | 77.40 | 79.50
Reference [6] | 80.39, 84.07, 77.03 (reported at three of the five angles)
Reference [7] | 86.67, 83.81, 76.67 (reported at three of the five angles)
Proposed method | 83.88 | 87.76 | 93.27 | 88.37 | 82.65
Table 6. Comparison of multi-angle facial expression recognition rate with existing methods on KDEF dataset
Method | -90° | -60° | -30° | 0° | 30° | 60° | 90°
Reference [26] | 93.10, 92.80, 92.20 (reported at three of the seven angles)
Reference [27] | 92.58, 89.65, 94.51 (reported at three of the seven angles)
Reference [28] | 79.00 | 83.00 | 80.00 | 86.00 | 83.50 | 82.00 | 76.00
Reference [29] | 88.20, 87.30, 83.80, 78.80 (reported at four of the seven angles)
Reference [30] | 83.30, 85.80, 81.60, 92.50 (reported at four of the seven angles)
Proposed method | 81.83 | 86.00 | 88.67 | 92.17 | 89.17 | 85.50 | 81.33
Table 7. Comparison of multi-angle facial expression recognition rate with existing methods on Multi-PIE dataset