• Acta Optica Sinica
  • Vol. 44, Issue 9, 0915003 (2024)
Bo Zhang1, Yinlong Zhang2,3,4,*, Wei Liang2,3,4, Xin Wang1, and Yutuo Yang2,3,4
Author Affiliations
  • 1School of Electrical and Control Engineering, Shenyang Jianzhu University, Shenyang 110168, Liaoning, China
  • 2Key Laboratory of Networked Control System, Chinese Academy of Sciences, Shenyang 110169, Liaoning, China
  • 3Institute of Robotics and Intelligent Manufacturing Innovation, Chinese Academy of Sciences, Shenyang 110169, Liaoning, China
  • 4State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110169, Liaoning, China
    DOI: 10.3788/AOS231613
    Bo Zhang, Yinlong Zhang, Wei Liang, Xin Wang, Yutuo Yang. Warehouse AGV Navigation Based on Multimodal Information Fusion[J]. Acta Optica Sinica, 2024, 44(9): 0915003
    References

    [1] Sun P Z H, You J P, Qiu S Q et al. AGV-based vehicle transportation in automated container terminals: a survey[J]. IEEE Transactions on Intelligent Transportation Systems, 24, 341-356(2023).

    [2] Xuan Y W, Zhang H, Yan F J et al. Gaze control for active visual SLAM via panoramic cost map[J]. IEEE Transactions on Intelligent Vehicles, 8, 1813-1825(2023).

    [3] Xu C, Zhou Y J, Luo C. Visual SLAM method based on optical flow and instance segmentation for dynamic scenes[J]. Acta Optica Sinica, 42, 1415002(2022).

    [4] Zhu Y W, Zheng C R, Yuan C J et al. CamVox: a low-cost and accurate lidar-assisted visual SLAM system[C], 5049-5055(2021).

    [5] Zou Q, Sun Q, Chen L et al. A comparative analysis of LiDAR SLAM-based indoor navigation for autonomous vehicles[J]. IEEE Transactions on Intelligent Transportation Systems, 23, 6907-6921(2022).

    [6] Wang M J, Li L, Yi F et al. Three-dimensional reconstruction and analysis of target laser point cloud data under simulated real water environment[J]. Chinese Journal of Lasers, 49, 0309001(2022).

    [7] Xu Y W, Yan W X, Wu W. Improvement of LiDAR SLAM front-end algorithm based on local map in similar scenes[J]. Robot, 44, 176-185(2022).

    [8] Bhattacharjee A, Bhatt C, Namrata K, Priyadarshi N, Bansal R C et al. Human arm motion capture using IMU sensors[M]. Smart energy and advancement in power technologies. Lecture notes in electrical engineering, 927, 805-817(2023).

    [9] Yi C Z, Rho S, Wei B C et al. Detecting and correcting IMU movements during joint angle estimation[J]. IEEE Transactions on Instrumentation and Measurement, 71, 4004714(2022).

    [10] Liu J X, Gao W, Hu Z Y. Optimization-based visual-inertial SLAM tightly coupled with raw GNSS measurements[C], 11612-11618(2021).

    [11] Golodetz S, Vankadari M, Everitt A et al. Real-time hybrid mapping of populated indoor scenes using a low-cost monocular UAV[C], 325-332(2022).

    [12] Zhou W C, Huang J. Adaptive tightly coupled lidar-visual simultaneous localization and mapping framework[J]. Laser & Optoelectronics Progress, 60, 2028009(2023).

    [13] Zhang J, Singh S. Low-drift and real-time lidar odometry and mapping[J]. Autonomous Robots, 41, 401-416(2017).

    [14] Shan T X, Englot B. LeGO-LOAM: lightweight and ground-optimized lidar odometry and mapping on variable terrain[C], 4758-4765(2018).

    [15] Ganesan P, Priyadarshini R, Felix Mathan M et al. Autonomous guided vehicle for smart warehousing[C], 42-47(2022).

    [16] Ye H Y, Chen Y Y, Liu M. Tightly coupled 3D lidar inertial odometry and mapping[C], 3144-3150(2019).

    [17] Qin C, Ye H Y, Pranata C E et al. LINS: a lidar-inertial state estimator for robust and efficient navigation[C], 8899-8906(2020).

    [18] Shan T X, Englot B, Meyers D et al. LIO-SAM: tightly-coupled lidar inertial odometry via smoothing and mapping[C], 5135-5142(2020).

    [19] Zhang B W, Li S Y, Qiu J T et al. Application and research on improved adaptive Monte Carlo localization algorithm for automatic guided vehicle fusion with QR code navigation[J]. Applied Sciences, 13, 11913(2023).

    [20] Forster C, Carlone L, Dellaert F et al. On-manifold preintegration for real-time visual-inertial odometry[J]. IEEE Transactions on Robotics, 33, 1-21(2017).

    [21] Qin T, Li P L, Shen S J. VINS-Mono: a robust and versatile monocular visual-inertial state estimator[J]. IEEE Transactions on Robotics, 34, 1004-1020(2018).

    [22] Bahnam S, Pfeiffer S, de Croon G C H E. Stereo visual inertial odometry for robots with limited computational resources[C], 9154-9159(2021).

    [23] Xu W, Cai Y X, He D J et al. FAST-LIO2: fast direct LiDAR-inertial odometry[J]. IEEE Transactions on Robotics, 38, 2053-2073(2022).

    [24] Liu Z, Zhang F. BALM: bundle adjustment for lidar mapping[J]. IEEE Robotics and Automation Letters, 6, 3184-3191(2021).

    [25] Barron I R, Sharma G. Optimized modulation and coding for dual modulated QR codes[J]. IEEE Transactions on Image Processing, 32, 2800-2810(2023).

    [26] Li S X, Li G Y, Zhou Y L, Sun J D, Yang C F, Guo S R et al. Real-time dead reckoning and mapping approach based on three-dimensional point cloud[M]. China satellite navigation conference. Lecture notes in electrical engineering, 499, 643-662(2018).

    [27] Cui Y G, Chen X Y, Zhang Y L et al. BoW3D: bag of words for real-time loop closing in 3D LiDAR SLAM[J]. IEEE Robotics and Automation Letters, 8, 2828-2835(2023).

    [28] Zuo X X, Yang Y L, Geneva P et al. LIC-fusion 2.0: LiDAR-inertial-camera odometry with sliding-window plane-feature tracking[C], 5112-5119(2020).