• Laser & Optoelectronics Progress
  • Vol. 58, Issue 13, 1306003 (2021)
Huijuan Wu1,*, Xinyu Liu1, and Yunjiang Rao1,2,**
Author Affiliations
  • 1Key Laboratory of Fiber Optic Sensing and Communication, Ministry of Education, University of Electronic Science and Technology of China, Chengdu, Sichuan 611731, China
  • 2Fiber Optic Sensing Research Center, Zhijiang Laboratory, Hangzhou, Zhejiang 310000, China
    DOI: 10.3788/LOP202158.1306003
    Huijuan Wu, Xinyu Liu, Yunjiang Rao. Processing and Application of Fiber Optic Distributed Sensing Signal Based on Φ-OTDR[J]. Laser & Optoelectronics Progress, 2021, 58(13): 1306003
    Fig. 1. Principle of the DVS/DAS based on Φ-OTDR
    Fig. 2. Spatio-temporal structure of the Φ-OTDR signal[44]
    Fig. 3. De-noising and anomaly detection results based on STFT. (a) Original differential trace; (b) local energy distribution along the trace; (c) local energy distribution after the background subtraction; (d) intrusion detection and location in the energy trace; (e) intrusion detection and location in the original differential trace[45]
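    The detection scheme of Fig. 3 computes the local energy of the differential trace from its short-time Fourier transform, subtracts the slowly varying background energy, and thresholds the residual to detect and locate the intrusion. A minimal Python sketch is given below; the window length, the use of a reference background trace, and the 3-sigma threshold are illustrative assumptions rather than the parameters of Ref. [45].

```python
import numpy as np
from scipy.signal import stft

def stft_local_energy(trace, fs, nperseg=128):
    """Local energy along a differential trace, from its short-time Fourier transform."""
    _, _, Z = stft(trace, fs=fs, nperseg=nperseg)
    return np.sum(np.abs(Z) ** 2, axis=0)        # energy per local window (Fig. 3(b))

def detect_intrusion(trace, background, fs, k=3.0):
    """Flag windows whose background-subtracted energy exceeds mean + k*std (Fig. 3(c)-(e))."""
    residual = stft_local_energy(trace, fs) - stft_local_energy(background, fs)
    threshold = residual.mean() + k * residual.std()
    return np.where(residual > threshold)[0], residual
```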
    Fig. 4. Signal-noise separation method based on multi-scale wavelet decomposition[44]
    Fig. 5. Signal-noise separation results based on multi-scale wavelet decomposition. (a) Original temporal signal; (b) combined component of a6 and d6; (c) combined component of d3 and d4; (d) combined component of d1 and d2[44]
    Fig. 6. Signal-noise separation results based on multi-scale wavelet decomposition. (a) Before the signal-noise separation; (b) after the signal-noise separation[45]
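    Figs. 4-6 separate signal from noise by decomposing the temporal signal into multi-scale wavelet components and regrouping selected levels. A minimal sketch with PyWavelets follows; the 6-level decomposition and the (a6, d6), (d3, d4), (d1, d2) groupings come from Fig. 5, while the "db4" wavelet is an assumed choice, and which group is kept as the event signal depends on the bandwidth of the disturbance of interest.

```python
import numpy as np
import pywt

def wavelet_band_components(signal, wavelet="db4", level=6):
    """Split a temporal signal into the regrouped wavelet components of Fig. 5."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)   # [a6, d6, d5, d4, d3, d2, d1]

    def rebuild(keep):
        kept = [c if i in keep else np.zeros_like(c) for i, c in enumerate(coeffs)]
        return pywt.waverec(kept, wavelet)[: len(signal)]

    return {
        "a6+d6": rebuild({0, 1}),   # lowest-frequency group (approximation + coarsest detail)
        "d3+d4": rebuild({3, 4}),   # mid-frequency detail group
        "d1+d2": rebuild({5, 6}),   # finest-scale detail group
    }
```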
    Fig. 7. Mining and recognition processing flow of sequential information based on HMM[36]
    Fig. 8. State transition relationship between short-term SU features[36]
    Fig. 9. Common typical event signals. (a) Background noise; (b) manual digging signal; (c) machine excavation signal; (d) traffic interference; (e) forging plant noise; (f) fabricating plant noise[36]
    Fig. 10. Hidden state sequence mined by HMM. (a) Background noise; (b) manual digging signal; (c) machine excavation signal; (d) traffic interference; (e) forging plant noise; (f) fabricating plant noise[36]
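    In the flow of Figs. 7-10, each short-term SU feature sequence is modeled by an HMM so that the hidden state sequence (Fig. 10) exposes the temporal structure of each event class. A sketch using hmmlearn is shown below; the per-class training scheme, four hidden states, and diagonal Gaussian emissions are placeholder choices rather than the configuration of Ref. [36].

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

def fit_event_hmm(feature_sequences, n_states=4):
    """Fit one HMM per event class from a list of (frames, n_features) SU feature arrays."""
    X = np.vstack(feature_sequences)
    lengths = [len(s) for s in feature_sequences]          # frames per training sample
    model = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=100)
    model.fit(X, lengths)
    return model

def decode_states(model, feature_sequence):
    """Hidden state sequence (cf. Fig. 10) and log-likelihood of one test sequence."""
    return model.predict(feature_sequence), model.score(feature_sequence)
```

    In such a scheme, an unknown sequence would be scored against every class-specific HMM and assigned to the best-scoring class.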
    Fig. 11. Training losses of different CNNs[33]
    Fig. 12. Classification results of different CNNs[33]
    Fig. 13. Classification results of 1D-CNN combined with different models[33]
    Fig. 14. Ten-fold cross classification results of 1D-CNN combined with different models[33]
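    Figs. 14 and 17 report ten-fold cross-validated accuracies. The evaluation protocol itself is standard; a generic sklearn version is sketched below, with a random-forest classifier standing in for the 1D-CNN front end and back-end classifiers compared in Ref. [33].

```python
from sklearn.ensemble import RandomForestClassifier        # placeholder classifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

def ten_fold_accuracy(features, labels, seed=0):
    """Stratified 10-fold cross-validation accuracy on extracted event features."""
    clf = RandomForestClassifier(n_estimators=200, random_state=seed)
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
    scores = cross_val_score(clf, features, labels, cv=cv, scoring="accuracy")
    return scores.mean(), scores.std()
```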
    Fig. 15. Spatio-temporal feature extraction process based on CNN-BiLSTM[42]
    Fig. 16. Visualization results of different features. (a) Artificial features; (b) 2D-CNN features; (c) BiLSTM features; (d) CNN-BiLSTM features[42]
    Fig. 17. Ten-fold cross-validation results of different models[42]
    Fig. 18. Recognition time of a single sample[42]
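    The extractor of Fig. 15 couples a CNN, which encodes the short-term waveform of each fiber channel, with a BiLSTM that aggregates the resulting embeddings along the second (spatial or sequential) axis. A hypothetical PyTorch sketch is given below; the layer sizes and the choice to run the BiLSTM over fiber channels are illustrative and do not reproduce the exact architecture of Ref. [42].

```python
from torch import nn

class CNNBiLSTM(nn.Module):
    """Hypothetical spatio-temporal extractor: a 1-D CNN encodes each channel's time
    series, then a BiLSTM aggregates the per-channel embeddings (cf. Fig. 15)."""

    def __init__(self, n_classes, cnn_dim=64, lstm_dim=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, cnn_dim, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                      # one embedding per channel
        )
        self.bilstm = nn.LSTM(cnn_dim, lstm_dim, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * lstm_dim, n_classes)

    def forward(self, x):                                 # x: (batch, channels, time)
        b, c, t = x.shape
        feats = self.cnn(x.reshape(b * c, 1, t)).squeeze(-1).reshape(b, c, -1)
        out, _ = self.bilstm(feats)                       # sequence over the channel axis
        return self.head(out[:, -1])                      # classify from the final state
```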
    Fig. 19. Spatial energy distribution characteristics with different vertical distances. (a) 6 m; (b) 14 m[46]
    Fig. 20. Vertical distance estimation method based on the spatial energy distribution and an integrated learning model[46]
    Fig. 21. Test signal of the mechanical knocking. (a) Knocking scene; (b) time-domain signal[46]
    Fig. 22. Spatial energy attenuation curves of the mechanical knocking signals. (a) Group 1; (b) group 2[46]
    Fig. 23. Test signal of the mechanical excavation. (a) Excavation scene; (b) time-domain signal[46]
    Fig. 24. Spatial energy attenuation curves of the excavation signals[46]
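    Figs. 19-24 estimate the vertical distance of a knocking or excavation source from the shape of the spatial energy distribution along the fiber. A sketch of this idea is given below; the Gaussian-shaped attenuation model, the handcrafted curve features, and the gradient-boosting regressor are assumptions standing in for the integrated learning model of Ref. [46].

```python
import numpy as np
from scipy.optimize import curve_fit
from sklearn.ensemble import GradientBoostingRegressor

def spatial_energy_profile(block):
    """Energy per fiber position for a (time, position) signal block (cf. Fig. 19)."""
    return np.sum(block ** 2, axis=0)

def attenuation_features(profile):
    """Fit a Gaussian-like attenuation curve and return its parameters as features."""
    x = np.arange(profile.size, dtype=float)
    model = lambda x, a, x0, w, b: a * np.exp(-((x - x0) ** 2) / (2 * w ** 2)) + b
    p0 = [profile.max(), float(np.argmax(profile)), 5.0, float(profile.min())]
    (a, x0, w, b), _ = curve_fit(model, x, profile, p0=p0, maxfev=5000)
    return [a, w, b, profile.max() / (profile.mean() + 1e-12)]

def train_distance_model(profiles, distances):
    """Ensemble regressor mapping attenuation-curve features to vertical distance / m."""
    X = np.array([attenuation_features(p) for p in profiles])
    return GradientBoostingRegressor(n_estimators=200).fit(X, np.asarray(distances))
```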
    Fig. 25. Principles of border control and security technology[7]
    Fig. 26. Laying method of the optical cable and the monitoring signal before and after denoising. (a) Laying method of the optical cable; (b) monitoring signal before denoising; (c) monitoring signal after denoising[7]
    Fig. 27. Monitoring site for excavation prevention of long-distance oil pipelines. (a) Monitoring equipment; (b) gas station; (c) on-site test environment[49]
    Fig. 28. Characteristic radar charts of typical events in an oil pipeline. (a) Background noise; (b) manual excavation; (c) mechanical excavation; (d) traffic disturbance; (e) factory interference
    Fig. 29. Principle of the pipeline optical cable anti-theft and operation-and-maintenance monitoring system
    Fig. 30. Interface of online monitoring and inspection. (a) Online positioning and inspection based on Baidu map; (b) statistical results of optical cable information
    Fig. 31. Project site of submarine cable safety monitoring. (a) Monitored marine area; (b) monitoring center; (c) monitoring setup
    Fig. 32. Monitoring site and test equipment of overhead transmission cables. (a) Monitoring center; (b) monitoring setup[49]
    Fig. 33. Frequency and spatial distribution of wind-induced cable galloping. (a) 1:00-2:00; (b) 14:00-15:00[50]
    Fig. 34. Installation wiring diagram of outdoor optical cable. (a) Sectional view; (b) top view; (c) installation process[4]
    Fig. 35. Leakage response signals of the DVS/DAS system. (a) Leakage response when the valve is not opened; (b) leakage response when the valve is opened[4]
    | Institution | Feature extraction dimension | Recognition network or model | Attention mechanism | End-to-end network | Ref. |
    | Beijing Jiaotong University | temporal | XGBoost | no | no | 31 |
    | Beijing Jiaotong University | temporal | F-ELM | no | no | 32 |
    | University of Electronic Science and Technology of China | temporal | 1D-CNN | no | yes | 33 |
    | San Pablo CEU University | temporal | GMMs | no | no | 34 |
    | San Pablo CEU University | temporal and contextual sequence | GMMs+HMM | no | no | 35 |
    | University of Electronic Science and Technology of China | temporal structure and contextual sequence | HMM | no | no | 36 |
    | Tianjin University | multiscale temporal | MS-CNN+CPL | no | yes | 37 |
    | Anhui University | multiscale temporal | MS-CNN | no | yes | 38 |
    | Transportation, Security, Energy & Automation Systems Business Sector | time-frequency | 2D-CNN | no | no | 28 |
    | Beijing Institute of Technology | time-frequency | 2D-CNN | no | no | 29 |
    | Zhejiang University | time-frequency | 2D-CNN+SVM | no | no | 30 |
    | Shanghai Maritime University | time-frequency | PNN | no | no | 39 |
    | University of Cologne | time-frequency | ALSTM | yes | no | 40 |
    | Tianjin University | spatial-temporal | 2D-CNN | no | no | 41 |
    | University of Electronic Science and Technology of China | spatial-temporal | 1D-CNN+BiLSTM | no | yes | 42 |
    | Sichuan University | spatial-temporal | 2D-CNN+LSTM | no | no | 43 |
    Table 1. DVS/DAS signal detection and recognition methods combined with machine learning models
    | Detection method | Energy threshold detection method | Wavelet transform modulus maxima method | STFT-based method |
    | PD (probability of detection) /% | 76.73 | 95.65 | 98.76 |
    | NAR (nuisance alarm rate, per 24 h) | 287 | 161 | 2 |
    Table 2. Actual detection results of different methods
    | Feature type | Feature name |
    | Time domain | main impact strength, short-time average magnitude, short-time average energy |
    | Frequency domain | frequency-band variance of PSD, frequency-band information entropy of PSD, mean amplitude of PSD, Procrustes mean shape of PSD, amplitude standard deviation of PSD, shape standard deviation of PSD, amplitude skewness of PSD, shape skewness of PSD, amplitude kurtosis of PSD, shape kurtosis of PSD |
    | Transformation domain | wavelet packet energy spectrum, information entropy of wavelet packet, MFCC |
    | Model features | autoregressive model coefficients, linear prediction model coefficients |
    Table 3. Local structural features of the short-term SU
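    A few of the Table 3 features can be computed as below for one short-term signal unit (SU) frame; the exact definitions in Ref. [36] (e.g., of "main impact strength") are not reproduced here, and the MFCC, autoregressive, and linear-prediction features are omitted, so this is only an illustrative sketch.

```python
import numpy as np
import pywt
from scipy.signal import welch

def su_features(frame, fs):
    """Illustrative subset of the Table 3 features for one SU frame (1-D numpy array)."""
    feats = {}
    # Time domain
    feats["main_impact_strength"] = np.max(np.abs(frame))      # placeholder definition
    feats["short_time_avg_magnitude"] = np.mean(np.abs(frame))
    feats["short_time_avg_energy"] = np.mean(frame ** 2)
    # Frequency domain: statistics of the power spectral density (PSD)
    _, psd = welch(frame, fs=fs, nperseg=min(256, len(frame)))
    p = psd / (psd.sum() + 1e-12)
    feats["psd_band_variance"] = np.var(psd)
    feats["psd_band_entropy"] = -np.sum(p * np.log2(p + 1e-12))
    feats["psd_amplitude_mean"] = np.mean(psd)
    # Transformation domain: wavelet packet energy spectrum
    wp = pywt.WaveletPacket(frame, "db4", maxlevel=3)
    feats["wp_energy_spectrum"] = [np.sum(node.data ** 2) for node in wp.get_level(3)]
    return feats
```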
    | Model | Average recognition rate | Type | Precision | Recall | F-value |
    | HMM | 0.982 | 1 | 1.0000 | 1.0000 | 1.0000 |
    | | | 2 | 1.0000 | 1.0000 | 1.0000 |
    | | | 3 | 1.0000 | 1.0000 | 1.0000 |
    | | | 4 | 0.9524 | 1.0000 | 0.9756 |
    | | | 5 | 1.0000 | 0.9130 | 0.9545 |
    | SVM | 0.919 | 1 | 1.0000 | 1.0000 | 1.0000 |
    | | | 2 | 0.7500 | 1.0000 | 0.8571 |
    | | | 3 | 1.0000 | 1.0000 | 1.0000 |
    | | | 4 | 0.8974 | 0.8750 | 0.8861 |
    | | | 5 | 1.0000 | 0.8261 | 0.9048 |
    | RF | 0.928 | 1 | 1.0000 | 0.9524 | 0.9756 |
    | | | 2 | 0.8667 | 0.8667 | 0.8667 |
    | | | 3 | 0.9231 | 1.0000 | 0.9600 |
    | | | 4 | 0.9000 | 0.9000 | 0.9000 |
    | | | 5 | 0.9130 | 0.9130 | 0.9130 |
    | XGB | 0.937 | 1 | 0.9524 | 1.0000 | 0.9756 |
    | | | 2 | 0.8667 | 1.0000 | 0.9286 |
    | | | 3 | 1.0000 | 1.0000 | 1.0000 |
    | | | 4 | 0.9750 | 0.8667 | 0.9176 |
    | | | 5 | 0.8696 | 0.9524 | 0.9091 |
    | DT | 0.892 | 1 | 1.0000 | 0.9524 | 0.9756 |
    | | | 2 | 0.8125 | 0.8667 | 0.8387 |
    | | | 3 | 1.0000 | 1.0000 | 1.0000 |
    | | | 4 | 0.8611 | 0.7750 | 0.8158 |
    | | | 5 | 0.7407 | 0.8696 | 0.8000 |
    | BN | 0.783 | 1 | 0.9524 | 1.0000 | 0.9756 |
    | | | 2 | 0.6667 | 0.4545 | 0.5405 |
    | | | 3 | 1.0000 | 0.9231 | 0.9600 |
    | | | 4 | 0.5750 | 0.7931 | 0.6667 |
    | | | 5 | 0.9565 | 0.8148 | 0.8800 |
    Table 4. Classification performances of different models
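    The per-class precision, recall, and F-values of Table 4 follow the usual definitions and can be reproduced for any of the compared classifiers with sklearn, as sketched below; the HMM, XGBoost, and Bayesian-network models are omitted here, and the hold-out split is an illustrative stand-in for the evaluation protocol of Ref. [36].

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

def compare_models(X, y, seed=0):
    """Average recognition rate and per-class precision/recall/F-value (cf. Table 4)."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=seed)
    models = {"SVM": SVC(), "RF": RandomForestClassifier(random_state=seed),
              "DT": DecisionTreeClassifier(random_state=seed)}
    results = {}
    for name, clf in models.items():
        pred = clf.fit(X_tr, y_tr).predict(X_te)
        p, r, f, _ = precision_recall_fscore_support(y_te, pred, average=None)
        results[name] = {"avg_recognition_rate": accuracy_score(y_te, pred),
                         "precision": p, "recall": r, "f_value": f}
    return results
```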
    | Layer | 1D-CNN | 2D-CNN (T-F matrix) | 2D-CNN (RGB image) |
    | C1 | (1×5+1)×32=192 | (5×5+1)×32=832 | (5×5+1)×32=832 |
    | C2 | (1×5+1)×64=384 | (5×5+1)×64=1664 | (5×5+1)×64=1664 |
    | C3 | (1×5+1)×96=576 | (5×5+1)×96=2496 | (5×5+1)×96=2496 |
    | FC1 | 64×96×1000=6144000 | 2×16×96×1000=3072000 | 13×16×96×1000=19968000 |
    | FC2 | 1000×1000=1000000 | 1000×1000=1000000 | 1000×1000=1000000 |
    | Total number of parameters | about 16000 | about 40800 | about 20000 |
    Table 5. Parameters of CNNs with different dimensions
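    The C1-C3 rows of Table 5 use a simplified per-filter bookkeeping of (kernel size + 1) × number of filters, and the first fully connected layer FC1 contributes by far the largest individual entries. A quick way to check such counts for a concrete network is to build the convolution stages in a framework and sum the parameter tensors, as sketched below in PyTorch; note that a framework count also includes the input-channel multiplicity of C2/C3, so it will exceed the simplified figures in the table, and the fully connected layers are omitted here.

```python
from torch import nn

def conv_stack_1d():
    """1-D variant: kernel size 5 with 32/64/96 filters, as in the C1-C3 rows."""
    return nn.Sequential(
        nn.Conv1d(1, 32, 5), nn.ReLU(), nn.MaxPool1d(2),
        nn.Conv1d(32, 64, 5), nn.ReLU(), nn.MaxPool1d(2),
        nn.Conv1d(64, 96, 5), nn.ReLU(),
    )

def conv_stack_2d(in_channels=1):
    """2-D variant (T-F matrix input; use in_channels=3 for an RGB image)."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 32, 5), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(32, 64, 5), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(64, 96, 5), nn.ReLU(),
    )

def count_parameters(module):
    return sum(p.numel() for p in module.parameters())

print(count_parameters(conv_stack_1d()), count_parameters(conv_stack_2d()))
```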
    | Distance /m | Error accuracy (±1 m) /% | Error accuracy (±2 m) /% | Threat level | Accuracy rate /% |
    | 0 | 100 | 100 | | 100 |
    | 1 | 100 | 100 | | |
    | 2 | 100 | 100 | | |
    | 3 | 100 | 100 | | |
    | 4 | 100 | 100 | | |
    | 5 | 100 | 100 | | 90.8 |
    | 6 | 100 | 100 | | |
    | 7 | 100 | 100 | | |
    | 8 | 71 | 71 | | |
    | 9 | 87 | 100 | | |
    | 10 | 83 | 100 | | |
    | 11 | 100 | 100 | | 100 |
    | 12 | 100 | 100 | | |
    | 13 | 100 | 100 | | |
    | 14 | 100 | 100 | | |
    | 15 | 89 | 100 | | |
    Table 6. Model recognition results of the mechanical knock events[46]
    | Distance /m | Error accuracy (±1 m) /% | Error accuracy (±2 m) /% | Threat level | Accuracy rate /% |
    | 2 | 86 | 86 | | 95.3 |
    | 3 | 100 | 100 | | |
    | 4 | 100 | 100 | | |
    | 6 | 33 | 66 | | 65.8 |
    | 8 | 100 | 100 | | |
    | 11 | 80 | 80 | | 85 |
    | 13 | 100 | 100 | | |
    | 15 | 80 | 100 | | |
    | 17 | 60 | 60 | | |
    Table 7. Location results of the mechanical excavation[46]