• Advanced Photonics Nexus
  • Vol. 3, Issue 2, 026009 (2024)
Si-Ao Li1, Yuanpeng Liu1, Yiwen Zhang1, Wenqian Zhao1, Tongying Shi1, Xiao Han2, Ivan B. Djordjevic2, Changjing Bao3, Zhongqi Pan4, and Yang Yue5,*
Author Affiliations
  • 1Nankai University, Institute of Modern Optics, Tianjin, China
  • 2University of Arizona, Department of Electrical and Computer Engineering, Tucson, Arizona, United States
  • 3University of Southern California, Department of Electrical Engineering, Los Angeles, California, United States
  • 4University of Louisiana at Lafayette, Department of Electrical and Computer Engineering, Lafayette, Louisiana, United States
  • 5Xi’an Jiaotong University, School of Information and Communications Engineering, Xi’an, China
DOI: 10.1117/1.APN.3.2.026009
Si-Ao Li, Yuanpeng Liu, Yiwen Zhang, Wenqian Zhao, Tongying Shi, Xiao Han, Ivan B. Djordjevic, Changjing Bao, Zhongqi Pan, Yang Yue. Multiparameter performance monitoring of pulse amplitude modulation channels using convolutional neural networks[J]. Advanced Photonics Nexus, 2024, 3(2): 026009
    Fig. 1. Conceptual diagram of multiparameter performance monitoring of PAM signals in intra- and inter-data center systems. DAC, digital-to-analog converter; IM, intensity modulator; PD, photodiode; ADC, analog-to-digital converter; TBPF, tunable bandpass filter; SDN, software-defined networking; ROF, roll-off factor; OSNR, optical signal-to-noise ratio; CD, chromatic dispersion.
    Fig. 2. (a) Experimental setup used to collect eye diagrams. ASE, amplified spontaneous emission; TBPF, tunable bandpass filter; EDFA, erbium-doped fiber amplifier; VOA, variable optical attenuator; DSP, digital signal processing; DAC, digital-to-analog converter; TDCM, tunable dispersion compensation module; PD, photodiode; OSA, optical spectrum analyzer; DCA, digital communication analyzer. (b) The structure of the VGG-based CNN model for classification. Conv, convolutional; BN, batch normalization; MP, max pooling; FC, fully connected.
    Fig. 3. Eye diagrams of PAM signals with different MFs, BRs, PS, ROFs, OSNR, and CD.
    Fig. 4. Features and parameters used in traditional ML methods (KNN, SVM, DT, and GBDT).
    Fig. 5. Typical algorithm architectures applied in the VGG-based model, ResNet-18, MobileNetV3, and EfficientNetV2. PW, point-wise; DW, depth-wise.
    Fig. 6. Confusion matrices of DT and GBDT for OSNR, CD, ROF, and BR classification tasks.
    Fig. 7. Accuracy of joint monitoring parameters with different ML methods for (a) digital signal parameters and (b) optical link parameters. (c) Accuracy for all the 432 classes for each MF with different five-parameter combinations.
    Fig. 8. Accuracy for all 1728 classes with different six-parameter combinations using DT, GBDT, KNN, SVM, and VGG-based CNN.
    Fig. 9. (a) Accuracy curves and (b) distributions of VGG-based model, ResNet-18, MobileNetV3-S, and EfficientNetV2-S.
    Fig. 10. Structure of MTL model combined with MobileNetV3-Small.
    Fig. 11. Accuracy of different monitoring tasks using MTL and VGG-based CNN.
| Method | Input image size | HOG orientations | HOG pixels per cell | HOG cells per block | Histogram bins | Histogram range | Histogram channels | Feature length |
|---|---|---|---|---|---|---|---|---|
| KNN, SVM | 320 × 320 × 3 | 9 | 16 × 16 | 2 × 2 | 256 | 0 to 255 | 3 (RGB) | 13,764 |
| DT, GBDT | 320 × 320 × 3 | 9 | 64 × 64 | 2 × 2 | 256 | 0 to 255 | 3 (RGB) | 1344 |
Table 1. Parameters used in the HOG and color histograms.
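For readers who want to reproduce the feature pipeline, a minimal sketch in Python using scikit-image and NumPy is given below. The paper does not specify its implementation, and applying HOG to a grayscale conversion of the RGB eye diagram is an assumption here; the resulting feature lengths nevertheless match Table 1.

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import hog

def extract_features(image, pixels_per_cell):
    """Concatenate HOG and per-channel color-histogram features.

    image: RGB eye diagram, shape (320, 320, 3), uint8.
    pixels_per_cell: (16, 16) for KNN/SVM, (64, 64) for DT/GBDT (Table 1).
    """
    # HOG on the grayscale image: 9 orientations, 2 x 2 cells per block.
    hog_vec = hog(
        rgb2gray(image),
        orientations=9,
        pixels_per_cell=pixels_per_cell,
        cells_per_block=(2, 2),
    )
    # 256-bin histogram over [0, 255] for each of the three RGB channels.
    hist_vec = np.concatenate([
        np.histogram(image[..., c], bins=256, range=(0, 256))[0]
        for c in range(3)
    ])
    return np.concatenate([hog_vec, hist_vec.astype(float)])

# Feature lengths agree with Table 1:
#   (16, 16): 19*19*2*2*9 + 3*256 = 12,996 + 768 = 13,764
#   (64, 64):  4* 4*2*2*9 + 3*256 =    576 + 768 =  1,344
```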
| Input size | Filter size | Layer | Output size |
|---|---|---|---|
| 320 × 320 × 3 | 3 × 3 × 3 × 30 | Conv.1 | 320 × 320 × 30 |
| 320 × 320 × 30 | 2 × 2 × 30 | MP.1 | 160 × 160 × 30 |
| 160 × 160 × 30 | 3 × 3 × 30 × 60 | Conv.2 | 160 × 160 × 60 |
| 160 × 160 × 60 | 2 × 2 × 60 | MP.2 | 80 × 80 × 60 |
| 80 × 80 × 60 | 3 × 3 × 60 × 80 | Conv.3 | 80 × 80 × 80 |
| 80 × 80 × 80 | 2 × 2 × 80 | MP.3 | 40 × 40 × 80 |
| 40 × 40 × 80 | 3 × 3 × 80 × 120 | Conv.4 | 40 × 40 × 120 |
| 40 × 40 × 120 | 2 × 2 × 120 | MP.4 | 20 × 20 × 120 |
| 48,000 | 48,000 × 4096 | FC.1 | 4096 |
| 4096 | 4096 × 4096 | FC.2 | 4096 |
| 4096 | 4096 × N | FC.3 | N |
    Table 2. Structure of the VGG-based model.
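As a concrete rendering of Table 2, a minimal PyTorch sketch follows. Padding of 1 keeps the 3 × 3 convolutions size-preserving, matching the table's input/output sizes; placing a ReLU after each BN layer is an assumption (Fig. 2 specifies Conv, BN, MP, and FC layers but not the activation).

```python
import torch.nn as nn

def vgg_based(num_classes: int) -> nn.Module:
    """VGG-based CNN per Table 2: four Conv(3x3)+BN blocks, each followed
    by 2x2 max pooling, then three fully connected layers."""
    return nn.Sequential(
        nn.Conv2d(3, 30, 3, padding=1), nn.BatchNorm2d(30), nn.ReLU(),
        nn.MaxPool2d(2),                      # 320 -> 160
        nn.Conv2d(30, 60, 3, padding=1), nn.BatchNorm2d(60), nn.ReLU(),
        nn.MaxPool2d(2),                      # 160 -> 80
        nn.Conv2d(60, 80, 3, padding=1), nn.BatchNorm2d(80), nn.ReLU(),
        nn.MaxPool2d(2),                      # 80 -> 40
        nn.Conv2d(80, 120, 3, padding=1), nn.BatchNorm2d(120), nn.ReLU(),
        nn.MaxPool2d(2),                      # 40 -> 20
        nn.Flatten(),                         # 20 * 20 * 120 = 48,000
        nn.Linear(48_000, 4096), nn.ReLU(),   # FC.1
        nn.Linear(4096, 4096), nn.ReLU(),     # FC.2
        nn.Linear(4096, num_classes),         # FC.3, N output classes
    )
```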
| Method | OSNR (%) | CD (%) | ROF (%) | BR (%) | PS (%) | MF (%) |
|---|---|---|---|---|---|---|
| KNN | 99.44 | 99.92 | 95.58 | 95.41 | 99.96 | 100 |
| DT | 92.03 | 92.48 | 78.88 | 61.71 | 99.54 | 95.81 |
| SVM | 99.32 | 99.78 | 97.02 | 97.31 | 99.92 | 100 |
| GBDT | 99.61 | 98.82 | 95.41 | 88.37 | 99.98 | 99.81 |
| VGG-based CNN | 99.01 | 100 | 98.7 | 99.21 | 100 | 100 |
    Table 3. Accuracy of single-parameter classifications of different ML methods.
| Method | PAM3 | PAM4 | PAM6 | PAM8 | All classes |
|---|---|---|---|---|---|
| KNN | k = 3 | k = 3 | k = 4 | k = 3 | k = 5 |
| DT | MD = 400, MF = 300 | MD = 400, MF = 400 | MD = 400, MF = 300 | MD = 400, MF = 900 | MD = 400, MF = 600 |
| SVM | C = 10 | C = 8 | C = 8 | C = 10 | C = 10 |
| GBDT | LR = 0.1, MD = 6, iter = 350 | LR = 0.1, MD = 6, iter = 400 | LR = 0.1, MD = 6, iter = 370 | LR = 0.1, MD = 7, iter = 420 | LR = 0.1, MD = 7, iter = 500 |
Table 4. Parameters selected for the traditional ML methods.
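The hyperparameters in Table 4 map naturally onto scikit-learn estimators. In the sketch below, MD, MF, LR, and iter are read as maximum depth, maximum number of features, learning rate, and number of boosting iterations; this mapping, and the estimator classes themselves, are assumptions rather than the authors' stated implementation.

```python
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# "All classes" column of Table 4.
models = {
    "KNN": KNeighborsClassifier(n_neighbors=5),              # k = 5
    "DT": DecisionTreeClassifier(max_depth=400, max_features=600),
    "SVM": SVC(C=10),
    "GBDT": GradientBoostingClassifier(
        learning_rate=0.1, max_depth=7, n_estimators=500),
}

# X_*: feature matrices from Table 1 (e.g., 1344-dim for DT/GBDT); y_*: labels.
# for name, clf in models.items():
#     clf.fit(X_train, y_train)
#     print(name, clf.score(X_test, y_test))
```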
| Method | OSNR (%) | CD (%) | ROF (%) | BR (%) | PS (%) | MF (%) | OSNR and CD (%) | ROF, PS, and BR (%) | All classes (%) |
|---|---|---|---|---|---|---|---|---|---|
| CNN | 99.01 | 100 | 98.7 | 99.21 | 100 | 100 | 99.32 | 99.12 | 97.61 |
| CNN + GBDT | 99.16 | 99.61 | 98.81 | 98.1 | 100 | 100 | 99.53 | 98.11 | 83.13 |
| GBDT | 99.61 | 98.82 | 95.41 | 88.37 | 99.98 | 99.81 | 99.03 | 91.47 | 56.69 |
    Table 5. Accuracy of classifications of GBDT, VGG-based CNN, and VGG-based CNN + GBDT.
| Model name | Input size | FLOPs | Parameters | Memory |
|---|---|---|---|---|
| MobileNetV3-S | 224 × 224 × 3 | 64.36 M | 3.28 M | 18.44 MB |
| VGG-based | 224 × 224 × 3 | 573.13 M | 120 M | 15.00 MB |
| ResNet-18 | 224 × 224 × 3 | 1.82 G | 12.06 M | 28.53 MB |
| EfficientNetV2-S | 224 × 224 × 3 | 2.9 G | 23.41 M | 139.00 MB |
    Table 6. Computational cost per image of the modern CNN models.
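A quick way to sanity-check the parameter counts in Table 6 is to sum tensor sizes directly in PyTorch, as sketched below. Off-the-shelf torchvision models carry ImageNet classifier heads, so their counts differ somewhat from the paper's variants; FLOP counting additionally requires a profiler (e.g., fvcore or thop) and is omitted.

```python
from torchvision.models import mobilenet_v3_small, resnet18

# Count parameters by summing tensor element counts; compare to Table 6.
for name, model in [("MobileNetV3-S", mobilenet_v3_small()),
                    ("ResNet-18", resnet18())]:
    n_params = sum(p.numel() for p in model.parameters())
    print(f"{name}: {n_params / 1e6:.2f} M parameters")
```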
| Input | Operator | Out channel | SE | NL | Stride |
|---|---|---|---|---|---|
| 320 × 320 × 3 | Conv 3 × 3 | 16 | – | HS | 2 |
| 160 × 160 × 16 | MBConv 3 × 3 | 16 | Yes | RE | 2 |
| 80 × 80 × 16 | MBConv 3 × 3 | 24 | – | RE | 2 |
| 40 × 40 × 24 | MBConv 3 × 3 | 24 | – | RE | 1 |
| 40 × 40 × 24 | MBConv 5 × 5 | 40 | Yes | HS | 2 |
| 20 × 20 × 40 | MBConv 5 × 5 | 40 | Yes | HS | 1 |
| 20 × 20 × 40 | MBConv 5 × 5 | 40 | Yes | HS | 1 |
| 20 × 20 × 40 | MBConv 5 × 5 | 48 | Yes | HS | 1 |
| 20 × 20 × 48 | MBConv 5 × 5 | 48 | Yes | HS | 1 |
| 20 × 20 × 48 | MBConv 5 × 5 | 96 | Yes | HS | 2 |
| 10 × 10 × 96 | MBConv 5 × 5 | 96 | Yes | HS | 1 |
| 10 × 10 × 96 | MBConv 5 × 5 | 96 | Yes | HS | 1 |
| 10 × 10 × 96 | Conv 1 × 1 | 576 | Yes | HS | 1 |
| 10 × 10 × 576 | Pooling 7 × 7 | 576 | – | – | 1 |
| 1 × 1 × 576 | Conv 1 × 1, NBN | 1280 | – | HS | 1 |
    Table 7. Structure of MobileNetV3-Small.
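Table 7 follows the standard MobileNetV3-Small layout, so a practical starting point is torchvision's implementation with the classifier head swapped for the monitoring task, as sketched below. Note that torchvision's head widens to 1024 rather than the 1280 listed in the table, so this is an approximation, not the paper's exact network.

```python
import torch.nn as nn
from torchvision.models import mobilenet_v3_small

def build_backbone(num_classes: int) -> nn.Module:
    """MobileNetV3-Small with the final linear layer resized to num_classes."""
    model = mobilenet_v3_small()
    in_features = model.classifier[3].in_features  # last Linear of the head
    model.classifier[3] = nn.Linear(in_features, num_classes)
    return model
```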
| Task | BR | MF | ROF | PS | OSNR | CD |
|---|---|---|---|---|---|---|
| Weight | 1.01 | 0.81 | 0.99 | 0.78 | 0.98 | 0.79 |
    Table 8. Weight of the tasks in the loss function.
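One way to apply the Table 8 weights is as a weighted sum of per-task cross-entropy losses over the six classification heads. The sketch below shows this formulation; it is an assumption about how the weights enter the loss, since the table gives only the weights themselves.

```python
import torch.nn as nn

# Task weights from Table 8.
TASK_WEIGHTS = {"BR": 1.01, "MF": 0.81, "ROF": 0.99,
                "PS": 0.78, "OSNR": 0.98, "CD": 0.79}
ce = nn.CrossEntropyLoss()

def mtl_loss(logits, labels):
    """Weighted sum of cross-entropies; logits/labels are dicts keyed by task."""
    return sum(w * ce(logits[t], labels[t]) for t, w in TASK_WEIGHTS.items())
```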