• Optics and Precision Engineering
  • Vol. 31, Issue 5, 667 (2023)
Haihong XIAO1, Qiuxia WU2, Yuqiong LI3, and Wenxiong KANG1,*
Author Affiliations
  • 1School of Automation Science and Engineering, South China University of Technology, Guangzhou 510640, China
  • 2School of Software Engineering, South China University of Technology, Guangzhou 510006, China
  • 3National Microgravity Laboratory, Institute of Mechanics, Chinese Academy of Sciences, Beijing 100190, China
    DOI: 10.37188/OPE.20233105.0667
    Haihong XIAO, Qiuxia WU, Yuqiong LI, Wenxiong KANG. Key techniques for three-dimensional completion: a review[J]. Optics and Precision Engineering, 2023, 31(5): 667
    References

    [1] CHEN J Y, BAI T Y, ZHAO L. 3D object detection based on fusion of point cloud and image by mutual attention[J]. Opt. Precision Eng., 2021, 29(9): 2247-2254. (in Chinese). doi: 10.37188/OPE.20212909.2247

    [2] YANG J, ZHANG M M. Co-segmentation of three-dimensional shape clusters by shape similarity[J]. Opt. Precision Eng., 2021, 29(10): 2504-2516. (in Chinese). doi: 10.37188/OPE.20212910.2504

    [3] LIU D S, CHEN J L, FEI D, et al. Three-dimensional reconstruction of large-scale scene based on depth camera[J]. Opt. Precision Eng., 2020, 28(1): 234-243. (in Chinese). doi: 10.3788/ope.20202801.0234

    [4] T W YIN, X Y ZHOU, P KRÄHENBÜHL. Center-based 3D object detection and tracking, 11779-11788(2021).

    [5] H S CHEN, P C WANG, F WANG et al. EPro-PnP: generalized end-to-end probabilistic perspective-n-points for monocular object pose estimation, 2771-2780(2022).

    [6] Y ASSAEL, T SOMMERSCHIELD, J PRAG. Restoring ancient text using deep learning: a case study on Greek epigraphy(2019).

    [7] Y L XIU, J L YANG, D TZIONAS et al. ICON: implicit clothed humans obtained from normals, 13286-13296(2022).

    [8] S MYSTAKIDIS. Metaverse. Encyclopedia, 2, 486-497(2022).

    [9] L ROLDAO, R DE CHARETTE, A VERROUST-BLONDET. 3D semantic scene completion: a survey. International Journal of Computer Vision, 1-28(2022).

    [10] W T YUAN, T KHOT, D HELD et al. PCN: point completion network, 728-737(2018).

    [11] N J MITRA, L J GUIBAS, M PAULY. Partial and approximate symmetry detection for 3D geometry. ACM Transactions on Graphics, 25, 560-568(2006).

    [12] N J MITRA, M PAULY, M WAND et al. Symmetry in 3D geometry: extraction and applications. Computer Graphics Forum, 32, 1-23(2013).

    [13] M KAZHDAN, M BOLITHO, H HOPPE. Poisson surface reconstruction, 7(2006).

    [14] S LEE, G WOLBERG, S Y SHIN. Scattered data interpolation with multilevel B-splines. IEEE Transactions on Visualization and Computer Graphics, 3, 228-244(1997).

    [15] J R PRICE, M H HAYES. Resampling and reconstruction with fractal interpolation functions. IEEE Signal Processing Letters, 5, 228-230(1998).

    [16] M KAZHDAN, H HOPPE. Screened Poisson surface reconstruction. ACM Transactions on Graphics, 32, 1-13(2013).

    [17] C H SHEN, H FU, K CHEN et al. Structure recovery by part assembly. ACM Transactions on Graphics (TOG), 31, 1-11(2012).

    [18] Y Y LI, A DAI, L GUIBAS et al. Database-assisted object retrieval for real-time 3D reconstruction. Computer Graphics Forum, 34, 435-446(2015).

    [19] J ROCK, T GUPTA, J THORSEN et al. Completing 3D object shape from one depth image, 2484-2493(2015).

    [20] B SUN, V G KIM, N AIGERMAN et al. PatchRD: detail-preserving shape completion by learning patch retrieval and deformation, 503-522(2022).

    [21] P ACHLIOPTAS, O DIAMANTI, I MITLIAGKAS et al. Learning representations and generative models for 3d point clouds, 40-49(2018).

    [22] Y Q YANG, C FENG, Y R SHEN et al. FoldingNet: point cloud auto-encoder via deep grid deformation. arXiv(2017). https://arxiv.org/abs/1712.07262

    [23] L P TCHAPMI, V KOSARAJU, H REZATOFIGHI et al. TopNet: structural point cloud decoder, 383-392(2019).

    [24] Z T HUANG, Y K YU, J W XU et al. PF-net: point fractal network for 3D point cloud completion, 7659-7667(2020).

    [25] A DAI, D RITCHIE, M BOKELOH et al. ScanComplete: large-scale scene completion and semantic segmentation for 3D scans, 4578-4587(2018).

    [26] A MONSZPART, N MELLADO, G J BROSTOW et al. RAPter: rebuilding man-made scenes with regular arrangements of planes. ACM Transactions on Graphics, 1, 12(2015).

    [27] W ZHAO, S M GAO, H W LIN. A robust hole-filling algorithm for triangular mesh. The Visual Computer, 23, 987-997(2007).

    [28] A AVETISYAN, M DAHNERT, A DAI et al. Scan2CAD: learning CAD model alignment in RGB-D scans, 2609-2618(2019).

    [29] A AVETISYAN, A DAI, M NIESSNER. End-to-end CAD model retrieval and 9DoF alignment in 3D scans, 2551-2560(2019).

    [30] M DAHNERT, A DAI, L GUIBAS et al. Joint embedding of 3D scan and CAD objects, 8748-8757(2019).

    [31] M FIRMAN, O M AODHA, S JULIER et al. Structured prediction of unobserved voxels from a single depth image, 5431-5440(2016).

    [32] A DAI, C DILLER, M NIESSNER. SG-NN: sparse generative neural networks for self-supervised scene completion of RGB-D scans, 846-855(2020).

    [33] S R SONG, F YU, A ZENG et al. Semantic scene completion from a single depth image, 190-198(2017).

    [34] I ARMENI, A R ZAMIR et al. Joint 2D-3D-semantic data for indoor scene understanding. arXiv(2017). https://arxiv.org/abs/1702.01105

    [35] J BEHLEY, M GARBADE, A MILIOTO et al. SemanticKITTI: a dataset for semantic scene understanding of LiDAR sequences, 9296-9306(2019).

    [36] D GRIFFITHS, J BOEHM. SynthCity: a large scale synthetic point cloud. arXiv(2019). https://arxiv.org/abs/1907.04758

    [37] Y C PAN, B GAO, J L MEI et al. SemanticPOSS: a point cloud dataset with large quantity of dynamic instances, 687-693(2020).

    [38] Y X GUO, X TONG. View-volume network for semantic scene completion from a single depth image. arXiv(2018). https://arxiv.org/abs/1806.05361

    [39] L ZHANG. Semantic scene completion with dense CRF from a single depth image. Neurocomputing, 318, 182-195(2018).

    [40] J H ZHANG, H ZHAO, A B YAO et al. Efficient semantic scene completion network with spatial group convolution, 733-749(2018).

    [41] P P ZHANG, W LIU, Y J LEI et al. Cascaded context pyramid for full-resolution 3D semantic scene completion, 7800-7809(2019).

    [42] Y L GUO, H Y WANG, Q Y HU et al. Deep learning for 3D point clouds: a survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43, 4338-4364(2021).

    [43] E ARNOLD, O Y AL-JARRAH, M DIANATI et al. A survey on 3D object detection methods for autonomous driving applications. IEEE Transactions on Intelligent Transportation Systems, 20, 3782-3795(2019).

    [44] H Y XIU, P VINAYARAJ, K S KIM et al. 3D semantic segmentation for high-resolution aerial survey derived point clouds using deep learning, 588-591(2018).

    [45] Y W JIN, D Q JIANG, M CAI. 3D reconstruction using deep learning: a survey. Communications in Information and Systems, 20, 389-413(2020).

    [46] G BRESSON, Z ALSAYED, L YU et al. Simultaneous localization and mapping: a survey of current trends in autonomous driving. IEEE Transactions on Intelligent Vehicles, 2, 194-220(2017).

    [47] B FEI, W D YANG, W M CHEN et al. Comprehensive review of deep learning-based 3D point cloud completion processing and analysis. IEEE Transactions on Intelligent Transportation Systems, 23, 22862-22883(2022).

    [48] A X CHANG, T FUNKHOUSER, L GUIBAS et al. ShapeNet: an information-rich 3D model repository. arXiv(2015). https://arxiv.org/abs/1512.03012

    [49] R OSADA, T FUNKHOUSER, B CHAZELLE et al. Matching 3D models with shape distributions, 154-166(2001).

    [50] A GEIGER, P LENZ, C STILLER et al. Vision meets robotics: the KITTI dataset. The International Journal of Robotics Research, 32, 1231-1237(2013).

    [51] A DAI, A X CHANG, M SAVVA et al. ScanNet: richly-annotated 3D reconstructions of indoor scenes, 2432-2443(2017).

    [52] A CHANG, A DAI, T FUNKHOUSER et al. Matterport3D: learning from RGB-D data in indoor environments, 667-676(2017).

    [53] F BOGO, J ROMERO, G PONS-MOLL et al. Dynamic FAUST: registering human bodies in motion, 5573-5582(2017).

    [54] F OFLI, R CHAUDHRY, G KURILLO et al. Berkeley MHAD: a comprehensive multimodal human action database, 53-60(2013).

    [55] W Y WANG, Q G HUANG, S Y YOU et al. Shape inpainting using 3D generative adversarial network and recurrent convolutional networks, 2317-2325(2017).

    [56] O LITANY, A BRONSTEIN, M BRONSTEIN et al. Deformable shape completion with graph convolutional autoencoders, 1886-1895(2018).

    [57] D SHU, S W PARK, J KWON. 3D point cloud generative adversarial network based on tree structured graph convolutions, 3858-3867(2019).

    [58] T WU, L PAN, J ZHANG et al. Density-aware chamfer distance as a comprehensive metric for point cloud completion. arXiv(2021). https://arxiv.org/abs/2111.12702

    [59] Z Q CHEN, H ZHANG. Learning implicit fields for generative shape modeling, 5932-5941(2019).

    [60] J Z ZHANG, X Y CHEN, Z A CAI et al. Unsupervised 3D shape completion through GAN inversion, 1768-1777(2021).

    [61] R Q CHARLES, S HAO, K C MO et al. PointNet: deep learning on point sets for 3D classification and segmentation, 77-85(2017).

    [62] Z R WU, S R SONG, A KHOSLA et al. 3D ShapeNets: a deep representation for volumetric shapes, 1912-1920(2015).

    [63] J MASCI, D BOSCAINI, M M BRONSTEIN et al. Geodesic convolutional neural networks on Riemannian manifolds, 832-840(2015).

    [64] L MESCHEDER, M OECHSLE, M NIEMEYER et al. Occupancy networks: learning 3D reconstruction in function space, 4455-4465(2019).

    [65] J J PARK, P FLORENCE, J STRAUB et al. DeepSDF: learning continuous signed distance functions for shape representation, 165-174(2019).

    [66] B MILDENHALL, P P SRINIVASAN, M TANCIK et al. NeRF: representing scenes as neural radiance fields for view synthesis, 405-421(2020).

    [67] M LIU, L SHENG, S YANG et al. Morphing and sampling network for dense point cloud completion, 34, 11596-11603(2020).

    [68] J S TANG, Z J GONG, R YI et al. LAKe-net: topology-aware point cloud completion by localizing aligned keypoints, 1716-1725(2022).

    [69] A DAI, C R QI, M NIEßNER. Shape completion using 3D-encoder-predictor CNNs and shape synthesis, 6545-6554(2017).

    [70] H Z XIE, H X YAO, S C ZHOU et al. GRNet: gridding residual network for dense point cloud completion. Computer Vision-ECCV 2020, 365-381(2020).

    [71] X G WANG, M H ANG, G H LEE. Voxel-based network for shape completion by leveraging edge generation, 13169-13178(2021).

    [72] S LIU, D D LI, W H HUANG et al. MRAC-net: multi-resolution anisotropic convolutional network for 3D point cloud completion, 403-414(2021).

    [73] Y N ZHANG, D HUANG, Y H WANG. PC-RGNN: point cloud completion and graph neural network for 3D object detection. Proceedings of the AAAI Conference on Artificial Intelligence, 35, 3430-3437(2021).

    [74] L PAN. ECG: edge-aware point cloud completion with graph convolution. IEEE Robotics and Automation Letters, 5, 4392-4398(2020).

    [75] J Q SHI, L Y XU, L HENG et al. Graph-guided deformation for point cloud completion. IEEE Robotics and Automation Letters, 6, 7081-7088(2021).

    [76] Y J CAI, K Y LIN, C ZHANG et al. Learning a structured latent space for unsupervised point cloud completion, 5533-5543(2022).

    [77] M SARMAD, H J LEE, Y M KIM. RL-GAN-net: a reinforcement learning agent controlled GAN network for real-time point cloud shape completion, 5891-5900(2019).

    [78] X G WANG, M H ANG, G H LEE. Cascaded refinement network for point cloud completion, 787-796(2020).

    [79] T HU, Z Z HAN, A SHRIVASTAVA et al. Render4Completion: synthesizing multi-view depth maps for 3D shape completion, 4114-4122(2019).

    [80] C L XIE, C X WANG, B ZHANG et al. Style-based point generator with adversarial rendering for point cloud completion, 4617-4626(2021).

    [81] X WEN, Z Z HAN, Y P CAO et al. Cycle4Completion: unpaired point cloud completion using cycle transformation with missing region coding, 13075-13084(2021).

    [82] X M YU, Y M RAO, Z Y WANG et al. PoinTr: diverse point cloud completion with geometry-aware transformers, 12478-12487(2021).

    [83] W ZHANG, H ZHOU, Z DONG et al. Point cloud completion via skeleton-detail transformer. IEEE Transactions on Visualization and Computer Graphics(2022).

    [84] L PAN, X Y CHEN, Z A CAI et al. Variational relational point completion network, 8520-8529(2021).

    [85] X C ZHANG, Y T FENG, S Q LI et al. View-guided point cloud completion, 15885-15894(2021).

    [86] J KOUSHIK. Understanding convolutional neural networks. arXiv(2016). https://arxiv.org/abs/1605.09081

    [87] Y WANG, Y B SUN, Z W LIU et al. Dynamic graph CNN for learning on point clouds. ACM Transactions on Graphics, 38, 1-12(2019).

    [88] W JIN, L ZHAO, S ZHANG et al. Graph condensation for graph neural networks. arXiv(2021). https://arxiv.org/abs/2110.07580

    [89] M ARJOVSKY, S CHINTALA, L BOTTOU. Wasserstein generative adversarial networks, 214-223(2017).

    [90] K HAN, Y WANG, H CHEN et al. A survey on vision transformer. IEEE Transactions on Pattern Analysis and Machine Intelligence(2022).

    [91] H S ZHAO, L JIANG, J Y JIA et al. Point transformer, 16239-16248(2021).

    [92] M H GUO, J X CAI, Z N LIU et al. PCT: point cloud transformer. Computational Visual Media, 7, 187-199(2021).

    [93] R H LI, X Z LI, P A HENG et al. Point cloud upsampling via disentangled refinement, 344-353(2021).

    [94] A ZENG, S R SONG, M NIEßNER et al. 3DMatch: learning local geometric descriptors from RGB-D reconstructions, 199-208(2017).

    [95] P S WANG, Y LIU, X TONG. Deep octree-based CNNs with output-guided skip connections for 3D shape and scene completion, 1074-1081(2020).

    [96] D AZINOVIĆ, R MARTIN-BRUALLA, D B GOLDMAN et al. Neural RGB-D surface reconstruction, 6280-6291(2022).

    [97] A DAI, Y SIDDIQUI, J THIES et al. SPSG: self-supervised photometric scene generation from RGB-D scans, 1747-1756(2021).

    [98] H X CHEN, J H HUANG, T J MU et al. CIRCLE: convolutional implicit reconstruction and completion for large-scale indoor scene(2022).

    [99] B CURLESS, M LEVOY. A volumetric method for building complex models from range images, 303-312(1996).

    [100] O RONNEBERGER, P FISCHER, T BROX. U-net: convolutional networks for biomedical image segmentation, 234-241(2015).

    [101] X G HAN, Z X ZHANG, D DU et al. Deep reinforcement learning of volume-guided progressive view inpainting for 3D point scene completion from a single depth image, 234-243(2019).

    [102] R A NEWCOMBE, S IZADI, O HILLIGES et al. KinectFusion: Real-time dense surface mapping and tracking, 127-136(2011).

    [103] Y D WANG, D J TAN, N NAVAB et al. ForkNet: multi-branch volumetric semantic completion from a single depth image, 8607-8616(2019).

    [104] X K CHEN, Y J XING, G ZENG. Real-time semantic scene completion via feature aggregation and conditioned prediction, 2830-2834(2020).

    [105] J LI, Y LIU, X YUAN et al. Depth based semantic scene completion with position importance aware loss. IEEE Robotics and Automation Letters, 5, 219-226(2020).

    [106] M GARBADE, Y T CHEN, J SAWATZKY et al. Two stream 3D semantic scene completion, 416-425(2019).

    [107] J LI, Y LIU, D GONG et al. RGBD based dimensional decomposition residual network for 3D semantic scene completion, 7685-7694(2019).

    [108] J LI, K HAN, P WANG et al. Anisotropic convolutional networks for 3D semantic scene completion, 3348-3356(2020).

    [109] Y LIU, J LI, Q YAN et al. 3D gated recurrent fusion for semantic scene completion. arXiv(2020). https://arxiv.org/abs/2002.07269

    [110] Y J CAI, X S CHEN, C ZHANG et al. Semantic scene completion via integrating instances and scene in-the-loop, 324-333(2021).

    [111] S Q LI, C Q ZOU, Y P LI et al. Attention-based multi-modal fusion network for semantic scene completion. Proceedings of the AAAI Conference on Artificial Intelligence, 34, 11402-11409(2020).

    [112] W ZHANG, G L LIU, G H TIAN. HHA-based CNN image features for indoor loop closure detection, 4634-4639(2017).

    [113] S LIU, Y HU, Y ZENG et al. See and think: disentangling semantic scene completion. Advances in Neural Information Processing Systems, 31(2018).

    [114] R CHENG, C AGIA, Y REN et al. S3CNet: a sparse semantic scene completion network for LiDAR point clouds. Conference on Robot Learning, PMLR, 2148-2161(2021).

    [115] X YAN, J T GAO, J LI et al. Sparse single sweep LiDAR point cloud segmentation via learning contextual shape priors from scene completion. Proceedings of the AAAI Conference on Artificial Intelligence, 35, 3101-3109(2021).

    [116] M ZHONG, G ZENG. Semantic point completion network for 3D semantic scene completion, 2824-2831(2020).

    [117] C B RIST, D EMMERICHS, M ENZWEILER et al. Semantic scene completion using local deep implicit functions on LiDAR data. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44, 7205-7218(2022).

    [118] Y D WANG, D J TAN, N NAVAB et al. Adversarial semantic scene completion from a single depth image, 426-434(2018).

    [119] X K CHEN, K Y LIN, C QIAN et al. 3D sketch-aware semantic scene completion via semi-supervised structure prior, 4192-4201(2020).

    [120] A DOURADO, T E DE CAMPOS, H KIM et al. EdgeNet: Semantic scene completion from a single RGB-D image, 503-510(2021).

    [121] A DOURADO, F GUTH, T E DE CAMPOS. Data augmented 3D semantic scene completion with 2D segmentation priors, 687-696(2022).

    [122] K M HE, X Y ZHANG, S Q REN et al. Deep residual learning for image recognition, 770-778(2016).

    [123] J KU, A HARAKEH, S L WASLANDER. In defense of classical image processing: fast depth completion on the CPU, 16-22(2018).

    [124] X WEN, T Y LI, Z Z HAN et al. Point cloud completion by skip-attention network with hierarchical folding, 1936-1945(2020).

    [125] B GRAHAM, M ENGELCKE, L VAN DER MAATEN. 3D semantic segmentation with submanifold sparse convolutional networks, 9224-9232(2018).

    [126] C CHOY, J GWAK, S SAVARESE. 4D spatio-temporal ConvNets: minkowski convolutional neural networks, 3070-3079(2019).

    [127] X L SUN, A HASSANI, Z Y WANG et al. DiSparse: disentangled sparsification for multitask model compression, 12372-12382(2020).
