• Optoelectronics Letters
  • Vol. 19, Issue 2, 105 (2023)
Lingyu XIONG and Guijin TANG*
Author Affiliations
  • School of Communications and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing 210003, China
    DOI: 10.1007/s11801-023-2070-9
    XIONG Lingyu, TANG Guijin. Multi-object tracking based on deep associated features for UAV applications[J]. Optoelectronics Letters, 2023, 19(2): 105.
    References

    [1] CIAPARRONE G, SANCHEZ F L, TABIK S, et al. Deep learning in video multi-object tracking: a survey[J]. Neurocomputing, 2020, 381: 61-88.

    [2] KAMAL R, CHEMMANAM A J, JOSE B A, et al. Construction safety surveillance using machine learning[C]//2020 International Symposium on Networks, Computers and Communications (ISNCC), October 20-22, 2020, Montreal, QC, Canada. New York: IEEE, 2020: 1-6.

    [3] XU Y, OSEP A, BAN Y, et al. How to train your deep multi-object tracker[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, June 13-19, 2020, Seattle, WA, USA. New York: IEEE, 2020: 6787-6796.

    [4] BEHRENDT K, NOVAK L, BOTROS R. A deep learning approach to traffic lights: detection, tracking, and classification[C]//2017 IEEE International Conference on Robotics and Automation (ICRA), May 29-June 3, 2017, Singapore. New York: IEEE, 2017: 1370-1377.

    [5] PEREIRA R, GARROTE L, BARROS T, et al. A deep learning-based indoor scene classification approach enhanced with inter-object distance semantic features[C]//2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), September 27-October 1, 2021, Prague, Czech Republic. New York: IEEE, 2021: 32-38.

    [6] WU J, CAO J, SONG L, et al. Track to detect and segment: an online multi-object tracker[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, June 20-25, 2021, Nashville, TN, USA. New York: IEEE, 2021: 12352-12361.

    [7] LIU Y, LI X, BAI T, et al. Multi-object tracking with hard-soft attention network and group-based cost minimization[J]. Neurocomputing, 2021, 447: 80-91.

    [8] LIU Q, CHU Q, LIU B, et al. GSM: graph similarity model for multi-object tracking[C]//International Joint Conference on Artificial Intelligence (IJCAI), January 7-15, 2021, Yokohama, Japan. California: IJCAI, 2020: 530-536.

    [9] ZHOU X, KOLTUN V, KRAHENBUHL P. Tracking objects as points[C]//European Conference on Computer Vision, August 23-28, 2020, Virtual. Cham: Springer, 2020: 474-490.

    [10] ZHANG Y, WANG C, WANG X, et al. FairMOT: on the fairness of detection and re-identification in multiple object tracking[J]. International Journal of Computer Vision, 2021, 129(11): 3069-3087.

    [11] BRASÓ G, LEAL-TAIXÉ L. Learning a neural solver for multiple object tracking[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, June 13-19, 2020, Seattle, WA, USA. New York: IEEE, 2020: 6247-6257.

    [12] HORNAKOVA A, HENSCHEL R, ROSENHAHN B, et al. Lifted disjoint paths with application in multiple object tracking[C]//International Conference on Machine Learning, July 12-18, 2020, Virtual. IMLS, 2020: 4364-4375.

    [13] KIM C, FUXIN L, ALOTAIBI M, et al. Discriminative appearance modeling with multi-track pooling for real-time multi-object tracking[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, June 20-25, 2021, Nashville, TN, USA. New York: IEEE, 2021: 9553-9562.

    [14] WOJKE N, BEWLEY A, PAULUS D. Simple online and realtime tracking with a deep association metric[C]//2017 IEEE International Conference on Image Processing (ICIP), September 17-20, 2017, Beijing, China. New York: IEEE, 2017: 3645-3649.

    [15] SANDLER M, HOWARD A, ZHU M, et al. MobileNetV2: inverted residuals and linear bottlenecks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, June 18-23, 2018, Salt Lake City, UT, USA. New York: IEEE, 2018: 4510-4520.

    [16] HU J, SHEN L, SUN G. Squeeze-and-excitation networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, June 18-23, 2018, Salt Lake City, UT, USA. New York: IEEE, 2018: 7132-7141.

    [17] XIE S, GIRSHICK R, DOLLÁR P, et al. Aggregated residual transformations for deep neural networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, July 21-26, 2017, Honolulu, HI, USA. New York: IEEE, 2017: 1492-1500.

    [18] SZEGEDY C, LIU W, JIA Y, et al. Going deeper with convolutions[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, June 7-12, 2015, Boston, MA, USA. New York: IEEE, 2015: 1-9.

    [19] FRANKLE J, CARBIN M. The lottery ticket hypothesis: finding sparse, trainable neural networks[EB/OL]. (2018-03-09) [2022-03-13]. https://arxiv.org/abs/1803.03635v5.

    [20] HINTON G, VINYALS O, DEAN J. Distilling the knowledge in a neural network[EB/OL]. (2015-03-09) [2022-03-13]. https://arxiv.org/abs/1503.02531.

    [21] JACOB B, KLIGYS S, CHEN B, et al. Quantization and training of neural networks for efficient integer-arithmetic-only inference[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, June 18-23, 2018, Salt Lake City, UT, USA. New York: IEEE, 2018: 2704-2713.

    [22] HAN K, WANG Y, TIAN Q, et al. GhostNet: more features from cheap operations[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, June 13-19, 2020, Seattle, WA, USA. New York: IEEE, 2020: 1580-1589.

    [23] LIU X, LIU W, MEI T, et al. A deep learning-based approach to progressive vehicle re-identification for urban surveillance[C]//European Conference on Computer Vision, October 8-16, 2016, Amsterdam, The Netherlands. Cham: Springer, 2016: 869-884.

    [24] HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, June 27-30, 2016, Las Vegas, NV, USA. New York: IEEE, 2016: 770-778.

    [25] HOWARD A G, ZHU M, CHEN B, et al. MobileNets: efficient convolutional neural networks for mobile vision applications[EB/OL]. (2017-04-17) [2022-03-13]. https://arxiv.org/abs/1704.04861.

    [26] ZHANG X, ZHOU X, LIN M, et al. ShuffleNet: an extremely efficient convolutional neural network for mobile devices[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, June 18-23, 2018, Salt Lake City, UT, USA. New York: IEEE, 2018: 6848-6856.

    [27] DU D, QI Y, YU H, et al. The unmanned aerial vehicle benchmark: object detection and tracking[C]//Proceedings of the European Conference on Computer Vision, September 8-14, 2018, Munich, Germany. Cham: Springer, 2018: 370-386.

    [28] DAI J, LI Y, HE K, et al. R-FCN: object detection via region-based fully convolutional networks[J]. Advances in Neural Information Processing Systems, 2016, 29.

    [29] LIU W, ANGUELOV D, ERHAN D, et al. SSD: single shot multibox detector[C]//European Conference on Computer Vision, October 8-16, 2016, Amsterdam, The Netherlands. Cham: Springer, 2016: 21-37.

    [30] REN S, HE K, GIRSHICK R, et al. Faster R-CNN: towards real-time object detection with region proposal networks[J]. Advances in Neural Information Processing Systems, 2015, 28: 91-99.
