• Frontiers of Optoelectronics
  • Vol. 12, Issue 4, 413 (2019)
Chao PENG, Danhua CAO*, Yubin WU, and Qun YANG
Author Affiliations
  • School of Optical and Electronic Information, Huazhong University of Science and Technology, Wuhan 430074, China
    DOI: 10.1007/s12200-019-0862-0
    Chao PENG, Danhua CAO, Yubin WU, Qun YANG. Robot visual guide with Fourier-Mellin based visual tracking[J]. Frontiers of Optoelectronics, 2019, 12(4): 413
    References

    [1] Agrawal A, Sun Y, Barnwell J, Raskar R. Vision-guided robot system for picking objects by casting shadows. International Journal of Robotics Research, 2010, 29(2–3): 155–173

    [2] Taryudi, Wang M S. 3D object pose estimation using stereo vision for object manipulation system. In: Proceedings of International Conference on Applied System Innovation. Sapporo: IEEE, 2017, 1532–1535

    [3] Dinham M, Fang G, Zou J J. Experiments on automatic seam detection for a MIG welding robot. In: Proceedings of International Conference on Artificial Intelligence and Computational Intelligence. Berlin: Springer, 2011, 390–397

    [4] Ryberg A, Ericsson M, Christiansson A K, Eriksson K, Nilsson J, Larsson M. Stereo vision for path correction in off-line programmed robot welding. In: Proceedings of IEEE International Conference on Industrial Technology. Vina del Mar: IEEE, 2010, 1700–1705

    [5] Dinham M, Fang G. Autonomous weld seam identification and localisation using eye-in-hand stereo vision for robotic arc welding. Robotics and Computer-integrated Manufacturing, 2013, 29(5): 288–301

    [6] Yun W H, Lee J, Lee J H, Kim J. Object recognition and pose estimation for modular manipulation system: overview and initial results. In: Proceedings of International Conference on Ubiquitous Robots and Ambient Intelligence. Jeju: IEEE, 2017, 198–201

    [7] Dong L, Yu X, Li L, Hoe J K E. HOG based multi-stage object detection and pose recognition for service robot. In: Proceedings of 11th International Conference on Control Automation Robotics & Vision. Singapore: IEEE, 2010, 2495–2500

    [8] Nam H, Han B. Learning multi-domain convolutional neural networks for visual tracking. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas: IEEE, 2016, 4293–4302

    [9] Held D, Thrun S, Savarese S. Learning to track at 100 FPS with deep regression networks. In: Proceedings of European Conference on Computer Vision. Berlin: Springer, 2016, 749–765

    [10] Henriques J F, Caseiro R, Martins P, Batista J. High-speed tracking with kernelized correlation filters. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37(3): 583–596

    [11] Stricker D. Tracking with reference images: a real-time and markerless tracking solution for out-door augmented reality applications. In: Proceedings of Conference on Virtual Reality, Archeology, and Cultural Heritage. Glyfada: ACM, 2001, 77–82

    [12] Wu Y, Lim J, Yang M H. Object tracking benchmark. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37(9): 1834–1848

    [13] Yang B, Yan J, Lei Z, Li S Z. Aggregate channel features for multiview face detection. In: Proceedings of IEEE International Joint Conference on Biometrics. Clearwater: IEEE, 2014, 1–8

    [14] Hirschmuller H. Accurate and efficient stereo processing by semiglobal matching and mutual information. In: Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition. San Diego: IEEE, 2005, 807–814

    [15] Zhang Z. A flexible new technique for camera calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000, 22(11): 1330–1334

    [16] Li J, Zhao H, Jiang T, Zhou X. Development of a 3D high-precise positioning system based on a planar target and two CCD cameras. In: Proceedings of International Conference on Intelligent Robotics and Applications. Berlin: Springer, 2008, 475–484

    [17] Magnusson M, Andreasson H, Nüchter A, Lilienthal A J. Appearance-based loop detection from 3D laser data using the normal distributions transform. Journal of Field Robotics, 2010, 26(11–12): 892–914

    [18] Li Y, Liu G. Learning a scale-and-rotation correlation filter for robust visual tracking. In: Proceedings of IEEE International Conference on Image Processing. Phoenix: IEEE, 2016, 454–458

    [19] Zhang K, Zhang L, Yang M H. Fast compressive tracking. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2014, 36(10): 2002–2015

    [20] Jia X, Lu H, Yang M H. Visual tracking via adaptive structural local sparse appearance model. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Providence: IEEE, 2012, 1822–1829

    [21] Dinh T B, Vo N, Medioni G. Context tracker: exploring supporters and distracters in unconstrained environments. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Colorado Springs: IEEE, 2011, 1177–1184
