[1] E ZH ZHAO. Research on video pedestrian tracking based on optical flow method and target tracking network(2022).
[2] M XU, W LU, CH FANG. Target tracking algorithm combining optical flow features and saliency detection. Computer Applications and Software, 41, 164-171, 187(2024).
[3] H Y ZHANG, P Y HE, X W PENG. Multi-object pedestrian tracking method based on improved high-resolution neural network. Opt. Precision Eng., 31, 860-871(2023).
[4] K TERADA. A general framework for multi-human tracking using Kalman filter and fast mean shift algorithms. J. Univers. Comput. Sci., 16, 921-937(2010).
[5] T VOJIR, J NOSKOVA, J MATAS. Robust scale-adaptive mean-shift for tracking. Pattern Recognition Letters, 49, 250-258(2014).
[6] L HU, D G YANG, X WANG. Visible light low-small-slow target tracking algorithm based on improved MEANSHIFT. Journal of Signal Processing, 38, 824-834(2022).
[7] D S BOLME, J R BEVERIDGE, B A DRAPER et al. Visual object tracking using adaptive correlation filters, 2544-2550(2010).
[8] J F HENRIQUES, R CASEIRO, P MARTINS et al. Exploiting the circulant structure of tracking-by-detection with kernels, 702-715(2012).
[9] CH DONG, B ZHENG, B LI. Ship target tracking with improved kernelized correlation filters. Opt. Precision Eng., 27, 911-921(2019).
[10] J F HENRIQUES, R CASEIRO, P MARTINS et al. High-speed tracking with kernelized correlation filters. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37, 583-596(2015).
[11] Y LI, J K ZHU. A scale adaptive kernel correlation filter tracker with feature integration, 254-265(2015).
[12] M DANELLJAN, G HAGER, F S KHAN et al. Discriminative scale space tracking. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39, 1561-1575(2017).
[13] L BERTINETTO, J VALMADRE, S GOLODETZ et al. Staple: complementary learners for real-time tracking, 1401-1409(2016).
[14] J ZHANG, S MA, S SCLAROFF. MEEM: robust tracking via multiple experts using entropy minimization(2014).
[15] S HARE, S GOLODETZ, A SAFFARI et al. Struck: structured output tracking with kernels. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38, 2096-2109(2016).
[16] Z KALAL, K MIKOLAJCZYK, J MATAS. Tracking-learning-detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34, 1409-1422(2012).
[17] H K GALOOGAHI, A FAGG, S LUCEY. Learning background-aware correlation filters for visual tracking, 1144-1152(2017).
[18] J VALMADRE, L BERTINETTO, J HENRIQUES et al. End-to-end representation learning for correlation filter based tracking, 5000-5008(2017).
[19] L BERTINETTO, J VALMADRE, J F HENRIQUES et al. Fully-convolutional Siamese networks for object tracking(2016).
[20] K YANG, H J ZHANG, J Y SHI et al. BANDT: a border-aware network with deformable transformers for visual tracking. IEEE Transactions on Consumer Electronics, 69, 377-390(2023).
[21] L T LIN, H FAN, Y XU et al. SwinTrack: a simple and strong baseline for transformer tracking. Advances in Neural Information Processing Systems, 35, 16743-16754(2022).
[22] O CETINTAS, G BRASÓ, L LEAL-TAIXÉ. Unifying short and long-term tracking with graph hierarchies, 22877-22887(2023).
[23] G MAGGIOLINO, A AHMAD, J K CAO et al. Deep OC-SORT: multi-pedestrian tracking by adaptive re-identification, 3025-3029(2023).
[24] Y H DU, Z C ZHAO, Y SONG et al. StrongSORT: make DeepSORT great again. IEEE Transactions on Multimedia, 25, 8725-8737(2023).
[25] V KIM, G JUNG, S W LEE. AM-SORT: adaptable motion predictor with historical trajectory embedding for multi-object tracking. arXiv preprint(2024).
[26] Y WU, J LIM, M H YANG. Online object tracking: a benchmark, 2411-2418(2013).
[27] G HUA, H JÉGOU, eds. Computer Vision: ECCV 2016 Workshops(2016).
[28] S Y CHENG, B N ZHONG, G R LI et al. Learning to filter: Siamese relation network for robust tracking, 4419-4429(2021).