Author Affiliations
1. Nankai University, Institute of Modern Optics, Tianjin, China
2. University of Arizona, Department of Electrical and Computer Engineering, Tucson, Arizona, United States
3. University of Southern California, Department of Electrical Engineering, Los Angeles, California, United States
4. University of Louisiana at Lafayette, Department of Electrical and Computer Engineering, Lafayette, Louisiana, United States
5. Xi’an Jiaotong University, School of Information and Communications Engineering, Xi’an, China
Fig. 1. Conceptual diagram of multiparameter performance monitoring of PAM signals in intra- and inter-data center systems. DAC, digital-to-analog converter; IM, intensity modulator; PD, photodiode; ADC, analog-to-digital converter; TBPF, tunable bandpass filter; SDN, software-defined networking; ROF, roll-off factor; OSNR, optical signal-to-noise ratio; CD, chromatic dispersion.
Fig. 2. (a) Experimental setup used to collect eye diagrams. ASE, amplified spontaneous emission; TBPF, tunable bandpass filter; EDFA, erbium-doped fiber amplifier; VOA, variable optical attenuator; DSP, digital signal processing; DAC, digital-to-analog converter; TDCM, tunable dispersion compensation module; PD, photodiode; OSA, optical spectrum analyzer; DCA, digital communication analyzer. (b) The structure of the VGG-based CNN model for classification. Conv, convolutional; BN, batch normalization; MP, max pooling; FC, fully connected.
Fig. 3. Eye diagrams of PAM signals with different MF, BR, PS, ROF, OSNR, and CD values.
Fig. 4. Features and parameters used in traditional ML methods (KNN, SVM, DT, and GBDT).
Fig. 5. Typical algorithm architectures applied in the VGG-based model, ResNet-18, MobileNetV3, and EfficientNetV2. PW, point-wise; DW, depth-wise.
Fig. 6. Confusion matrices of DT and GBDT for OSNR, CD, ROF, and BR classification tasks.
Fig. 7. Accuracy of joint monitoring parameters with different ML methods for (a) digital signal parameters and (b) optical link parameters. (c) Accuracy over all 432 classes for each MF with different five-parameter combinations.
Fig. 8. Accuracy for all 1728 classes with different six-parameter combinations using DT, GBDT, KNN, SVM, and VGG-based CNN.
Fig. 9. (a) Accuracy curves and (b) distributions of VGG-based model, ResNet-18, MobileNetV3-S, and EfficientNetV2-S.
Fig. 10. Structure of MTL model combined with MobileNetV3-Small.
Fig. 11. Accuracy of different monitoring tasks using MTL and VGG-based CNN.
| Method | Input image size | Orientation (HOG) | Pixels per cell (HOG) | Cells per block (HOG) | Bin (histogram) | Range (histogram) | Channel (histogram) | Feature length |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| KNN, SVM | 320 × 320 × 3 | 9 | 16 × 16 | 2 × 2 | 256 | 0 to 255 | 3 (RGB) | 13,764 |
| DT, GBDT | 320 × 320 × 3 | 9 | 64 × 64 | 2 × 2 | 256 | 0 to 255 | 3 (RGB) | 1344 |

Table 1. Parameters used in HOG and color histograms.
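The feature lengths in Table 1 can be reproduced from the HOG and histogram settings. A minimal sketch, assuming the usual overlapping-block HOG layout (as in `skimage.feature.hog`); the function names are illustrative:

```python
def hog_length(img_size, cell_size, cells_per_block=2, orientations=9):
    """HOG descriptor length for a square image with overlapping blocks."""
    cells = img_size // cell_size
    blocks = cells - cells_per_block + 1  # blocks slide one cell at a time
    return blocks * blocks * cells_per_block * cells_per_block * orientations

def hist_length(bins=256, channels=3):
    """Length of a per-channel color histogram."""
    return bins * channels

# KNN/SVM: 16x16 cells -> 12,996 HOG values + 768 histogram bins = 13,764
print(hog_length(320, 16) + hist_length())  # 13764
# DT/GBDT: 64x64 cells -> 576 HOG values + 768 histogram bins = 1344
print(hog_length(320, 64) + hist_length())  # 1344
```

The coarser 64 × 64 cells shrink the HOG part by roughly 20×, which is why the tree-based methods (DT, GBDT) work with a much shorter feature vector.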
| Input size | Filter size | Layer | Output size |
| --- | --- | --- | --- |
| 320 × 320 × 3 | 3 × 3 × 3 × 30 | Conv.1 | 320 × 320 × 30 |
| 320 × 320 × 30 | 2 × 2 × 30 | MP.1 | 160 × 160 × 30 |
| 160 × 160 × 30 | 3 × 3 × 30 × 60 | Conv.2 | 160 × 160 × 60 |
| 160 × 160 × 60 | 2 × 2 × 60 | MP.2 | 80 × 80 × 60 |
| 80 × 80 × 60 | 3 × 3 × 60 × 80 | Conv.3 | 80 × 80 × 80 |
| 80 × 80 × 80 | 2 × 2 × 80 | MP.3 | 40 × 40 × 80 |
| 40 × 40 × 80 | 3 × 3 × 80 × 120 | Conv.4 | 40 × 40 × 120 |
| 40 × 40 × 120 | 2 × 2 × 120 | MP.4 | 20 × 20 × 120 |
| 48,000 | 48,000 × 4096 | FC.1 | 4096 |
| 4096 | 4096 × 4096 | FC.2 | 4096 |
| 4096 | | FC.3 | |
Table 2. Structure of the VGG-based model.
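The shapes in Table 2 follow from four conv/pool stages (3 × 3 convolutions with padding that preserves spatial size, each followed by 2 × 2 max pooling). A quick shape-propagation check, with illustrative function names:

```python
def vgg_shapes(size=320, channels=(30, 60, 80, 120)):
    """Propagate spatial size through the four conv + max-pool stages."""
    shapes = []
    for c in channels:
        shapes.append((size, size, c))  # 3x3 conv (padded) keeps spatial size
        size //= 2                      # 2x2 max pooling halves it
        shapes.append((size, size, c))
    return shapes, size * size * channels[-1]

shapes, flat = vgg_shapes()
print(flat)  # 48000 -> the FC.1 input length in Table 2
```

The flattened 20 × 20 × 120 tensor gives the 48,000-dimensional input to FC.1.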
| Method | OSNR (%) | CD (%) | ROF (%) | BR (%) | PS (%) | MF (%) |
| --- | --- | --- | --- | --- | --- | --- |
| KNN | 99.44 | 99.92 | 95.58 | 95.41 | 99.96 | 100 |
| DT | 92.03 | 92.48 | 78.88 | 61.71 | 99.54 | 95.81 |
| SVM | 99.32 | 99.78 | 97.02 | 97.31 | 99.92 | 100 |
| GBDT | 99.61 | 98.82 | 95.41 | 88.37 | 99.98 | 99.81 |
| VGG-based CNN | 99.01 | 100 | 98.7 | 99.21 | 100 | 100 |
Table 3. Accuracy of single-parameter classifications of different ML methods.
| Method | PAM3 | PAM4 | PAM6 | PAM8 | All classes |
| --- | --- | --- | --- | --- | --- |
| KNN | | | | | |
| DT | MD = 400, MF = 300 | MD = 400, MF = 400 | MD = 400, MF = 300 | MD = 400, MF = 900 | MD = 400, MF = 600 |
| SVM | | | | | |
| GBDT | LR = 0.1, MD = 6, iter = 350 | LR = 0.1, MD = 6, iter = 400 | LR = 0.1, MD = 6, iter = 370 | LR = 0.1, MD = 7, iter = 420 | LR = 0.1, MD = 7, iter = 500 |

Table 4. Parameters selected for the traditional ML methods.
| Method | OSNR (%) | CD (%) | ROF (%) | BR (%) | PS (%) | MF (%) | OSNR and CD (%) | ROF, PS, and BR (%) | All classes (%) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CNN | 99.01 | 100 | 98.7 | 99.21 | 100 | 100 | 99.32 | 99.12 | 97.61 |
| CNN + GBDT | 99.16 | 99.61 | 98.81 | 98.1 | 100 | 100 | 99.53 | 98.11 | 83.13 |
| GBDT | 99.61 | 98.82 | 95.41 | 88.37 | 99.98 | 99.81 | 99.03 | 91.47 | 56.69 |
Table 5. Accuracy of classifications of GBDT, VGG-based CNN, and VGG-based CNN + GBDT.
| Model name | Input size | FLOPs | Parameters | Memory |
| --- | --- | --- | --- | --- |
| MobileNetV3-S | 224 × 224 × 3 | 64.36 M | 3.28 M | 18.44 MB |
| VGG-based | 224 × 224 × 3 | 573.13 M | 120 M | 15.00 MB |
| ResNet-18 | 224 × 224 × 3 | 1.82 G | 12.06 M | 28.53 MB |
| EfficientNetV2-S | 224 × 224 × 3 | 2.9 G | 23.41 M | 139.00 MB |
Table 6. Computational cost per image of the modern CNN models.
| Input | Operator | Out channel | SE | NL | Stride |
| --- | --- | --- | --- | --- | --- |
| 320 × 320 × 3 | Conv 3 × 3 | 16 | — | HS | 2 |
| 160 × 160 × 16 | MBConv 3 × 3 | 16 | Yes | RE | 2 |
| 80 × 80 × 16 | MBConv 3 × 3 | 24 | — | RE | 2 |
| 40 × 40 × 24 | MBConv 3 × 3 | 24 | — | RE | 1 |
| 40 × 40 × 24 | MBConv 5 × 5 | 40 | Yes | HS | 2 |
| 20 × 20 × 40 | MBConv 5 × 5 | 40 | Yes | HS | 1 |
| 20 × 20 × 40 | MBConv 5 × 5 | 40 | Yes | HS | 1 |
| 20 × 20 × 40 | MBConv 5 × 5 | 48 | Yes | HS | 1 |
| 20 × 20 × 48 | MBConv 5 × 5 | 48 | Yes | HS | 1 |
| 20 × 20 × 48 | MBConv 5 × 5 | 96 | Yes | HS | 2 |
| 10 × 10 × 96 | MBConv 5 × 5 | 96 | Yes | HS | 1 |
| 10 × 10 × 96 | MBConv 5 × 5 | 96 | Yes | HS | 1 |
| 10 × 10 × 96 | Conv 1 × 1 | 576 | Yes | HS | 1 |
| 10 × 10 × 576 | Pooling 7 × 7 | 576 | — | — | 1 |
| 1 × 1 × 576 | Conv 1 × 1, NBN | 1280 | — | HS | 1 |
Table 7. Structure of MobileNetV3-Small.
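The stride column in Table 7 fixes the spatial downsampling of the backbone. A quick check (assuming each stride-2 layer halves the feature map and stride-1 layers preserve it) recovers the 10 × 10 feature map that enters the 7 × 7 pooling:

```python
# Per-layer strides from Table 7, up to (not including) the 7x7 pooling
strides = [2, 2, 2, 1, 2, 1, 1, 1, 1, 2, 1, 1, 1]

size = 320  # input spatial resolution
for s in strides:
    size //= s  # stride-2 layers halve the feature map; stride-1 keep it
print(size)  # 10
```

Five stride-2 layers give an overall downsampling factor of 32, i.e., 320 / 32 = 10.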
| Task | BR | MF | ROF | PS | OSNR | CD |
| --- | --- | --- | --- | --- | --- | --- |
| Weight | 1.01 | 0.81 | 0.99 | 0.78 | 0.98 | 0.79 |

Table 8. Weights of the tasks in the loss function.
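In the MTL model, the overall objective combines the six per-task losses using the weights of Table 8. A minimal sketch, assuming per-task classification losses (e.g., cross-entropy); the function name is illustrative:

```python
# Task weights from Table 8
weights = {"BR": 1.01, "MF": 0.81, "ROF": 0.99, "PS": 0.78, "OSNR": 0.98, "CD": 0.79}

def total_loss(task_losses):
    """Weighted sum of per-task losses, as in the MTL objective."""
    return sum(weights[task] * loss for task, loss in task_losses.items())

# Example: if every task currently has unit loss, the total is the weight sum.
print(round(total_loss({t: 1.0 for t in weights}), 2))  # 5.36
```

The harder digital-signal tasks (BR, ROF, OSNR) carry weights near 1, while the easier tasks (MF, PS, CD) are down-weighted to roughly 0.8.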