[1] QUAN M X, PIAO S H, LI G. Overview of visual SLAM[J]. CAAI Transactions on Intelligent Systems, 2016, 11(6): 768-776. (in Chinese). doi: 10.11992/tis.201607026
[2] ZHANG Y, ZHANG Y, ZHANG N, et al. Dynamic SLAM of binocular catadioptric panoramic camera based on inverse depth filter[J]. Opt. Precision Eng., 2022, 30(11): 1282-1289. (in Chinese). doi: 10.37188/ope.20223011.1282
[3] ZHAO L Y, JIN R, ZHU Y Q, et al. Stereo visual-inertial SLAM algorithm based on merge of point and line features[J]. Acta Aeronautica et Astronautica Sinica, 2022, 43(3): 355-369. (in Chinese). doi: 10.7527/j.issn.1000-6893.2022.3.hkxb202203029
[4] ZHOU J L, ZHU B, WU ZH L. Camera pose estimation based on 2D image and 3D point cloud fusion[J]. Opt. Precision Eng., 2022, 30(22): 2901-2912. (in Chinese). doi: 10.37188/ope.20223022.2901
[5] JIA X X, ZHAO D Q, ZHANG L T, et al. A visual SLAM algorithm based on adaptive inertial navigation assistant feature matching[J]. Opt. Precision Eng., 2023, 31(5): 621-630. (in Chinese). doi: 10.37188/OPE.20233105.0621
[6] LI H F, HU Z H, CHEN X W. PLP-SLAM: a visual SLAM method based on point-line-plane feature fusion[J]. Robot, 2017, 39(2): 214-220, 229. (in Chinese). doi: 10.13973/j.cnki.robot.2017.0214
[7] MUR-ARTAL R, MONTIEL J M M, TARDÓS J D. ORB-SLAM: a versatile and accurate monocular SLAM system[J]. IEEE Transactions on Robotics, 2015, 31: 1147-1163.
[8] MUR-ARTAL R, TARDÓS J D. ORB-SLAM2: an open-source SLAM system for monocular, stereo, and RGB-D cameras[J]. IEEE Transactions on Robotics, 2017, 33: 1255-1262.
[9] CAMPOS C, ELVIRA R, RODRIGUEZ J J G, et al. ORB-SLAM3: an accurate open-source library for visual, visual-inertial, and multimap SLAM[J]. IEEE Transactions on Robotics, 2021, 37: 1874-1890.
[10] QIN T, LI P L, SHEN S J. VINS-Mono: a robust and versatile monocular visual-inertial state estimator[J]. IEEE Transactions on Robotics, 2018, 34: 1004-1020.
[11] ENGEL J, KOLTUN V, CREMERS D. Direct sparse odometry[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 40: 611-625.
[12] FORSTER C, PIZZOLI M, SCARAMUZZA D. SVO: fast semi-direct monocular visual odometry[C]//2014 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2014: 15-22.
[13] YUNUS R, LI Y Y, TOMBARI F. ManhattanSLAM: robust planar tracking and mapping leveraging mixture of Manhattan frames[C]//2021 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2021: 6687-6693.
[14] WEI H, TANG F L, XU Z W, et al. A point-line VIO system with novel feature hybrids and with novel line predicting-matching[J]. IEEE Robotics and Automation Letters, 2021, 6: 8681-8688.
[15] GOMEZ-OJEDA R, MORENO F A, ZUNIGA-NOEL D, et al. PL-SLAM: a stereo SLAM system through the combination of points and line segments[J]. IEEE Transactions on Robotics, 2019, 35: 734-746.
[16] COMPANY-CORCOLES J P, GARCIA-FIDALGO E, ORTIZ A. MSC-VO: exploiting Manhattan and structural constraints for visual odometry[J]. IEEE Robotics and Automation Letters, 2022, 7: 2803-2810.
[17] GROMPONE VON GIOI R, JAKUBOWICZ J, MOREL J M, et al. LSD: a line segment detector[J]. Image Processing On Line, 2012, 2: 35-55.
[18] HE Y J, ZHAO J, GUO Y, et al. PL-VIO: tightly-coupled monocular visual-inertial odometry using point and line features[J]. Sensors, 2018, 18: 1159.
[19] GOMEZ-OJEDA R, BRIALES J, GONZALEZ-JIMENEZ J. PL-SVO: semi-direct monocular visual odometry by combining points and line segments[C]//2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2016: 4211-4216.
[20] WEI H, TANG F L, ZHANG C F, et al. Highly efficient line segment tracking with an IMU-KLT prediction and a convex geometric distance minimization[C]//2021 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2021: 3999-4005.
[21] ZHANG L L, KOCH R. An efficient and robust line segment matching approach based on LBD descriptor and pairwise geometric consistency[J]. Journal of Visual Communication and Image Representation, 2013, 24: 794-805.
[22] KIM P, COLTIN B, KIM H J. Low-drift visual odometry in structured environments by decoupling rotational and translational motion[C]//2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018: 7247-7253.
[23] ZHANG T, LIU C J, LI J Q, et al. A new visual inertial simultaneous localization and mapping (SLAM) algorithm based on point and line features[J]. Drones, 2022, 6: 23.
[24] BURRI M, NIKOLIC J, GOHL P, et al. The EuRoC micro aerial vehicle datasets[J]. International Journal of Robotics Research, 2016, 35: 1157-1163.
[25] MENZE M, GEIGER A. Object scene flow for autonomous vehicles[C]//2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2015: 3061-3070.