Simulation-Based Evaluation of LiDAR–Photogrammetry Fusion via NeRF Reconstruction and ICP Registration in Urban Scenes

Authors

R. Maskeliūnas, S. Maqsood, A. Qurthobi, I. Abbas, M. Vaškevičius, and J. Gelšvartas

DOI:

https://doi.org/10.58190/ijamec.2026.162

Keywords:

NeRF, LiDAR–Photogrammetry Fusion, Point-Cloud Registration, Simulation, Urban Scenes

Abstract

Reliable integration of LiDAR and photogrammetric point clouds is essential for high-precision 3D mapping, yet systematic evaluations of fusion accuracy under controlled conditions remain limited. This study presents a simulation-based assessment framework for LiDAR–photogrammetry fusion using a neural radiance field (NeRF) representation and iterative closest point (ICP) registration. Experiments were conducted in the CARLA simulator (Town 10), where a drone-mounted multi-sensor platform (RGB, LiDAR, GPS, IMU) generated spatially aligned datasets. Photogrammetric reconstructions were produced using the Nerfstudio–Nerfacto pipeline with varied architectural and sampling configurations to analyze their impact on cross-modal registration. Quantitative evaluation employed Chamfer and cloud-to-cloud (C2C) distances to assess NeRF reconstruction fidelity, and ICP root-mean-square error (RMSE), inlier fitness, and runtime to evaluate registration performance. Results show that tuning NeRF’s hidden dimensions and sampling levels yields up to 25% lower ICP RMSE and faster convergence across object categories. The proposed framework enables reproducible benchmarking of LiDAR–photogrammetry fusion and provides a foundation for extending NeRF-based methods to real-world urban mapping scenarios.
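The evaluation metrics named in the abstract (Chamfer distance for NeRF reconstruction fidelity, and RMSE after ICP registration) can be illustrated with a minimal sketch. This is not the paper's implementation; it is a self-contained NumPy illustration, with brute-force nearest-neighbour search and a plain point-to-point ICP, assuming small point sets already in a common frame:

```python
import numpy as np

def chamfer_distance(P, Q):
    """Symmetric Chamfer distance between point sets P (N,3) and Q (M,3):
    mean nearest-neighbour distance in each direction, summed."""
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)  # (N, M) pairwise distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def icp_point_to_point(src, dst, iters=20):
    """Minimal point-to-point ICP aligning src (N,3) onto dst (M,3).
    Returns the accumulated 4x4 rigid transform and the final RMSE
    over nearest-neighbour correspondences."""
    cur = src.copy()
    T = np.eye(4)
    for _ in range(iters):
        # 1. Correspondences: nearest neighbour in dst for each point in cur.
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=-1)
        matched = dst[d.argmin(axis=1)]
        # 2. Closed-form rigid alignment (Kabsch / SVD) of cur onto matched.
        mu_s, mu_d = cur.mean(axis=0), matched.mean(axis=0)
        H = (cur - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:  # guard against a reflection
            Vt[-1] *= -1.0
            R = Vt.T @ U.T
        t = mu_d - R @ mu_s
        # 3. Apply the increment and accumulate it into T.
        cur = cur @ R.T + t
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, t
        T = step @ T
    d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=-1)
    rmse = float(np.sqrt((d.min(axis=1) ** 2).mean()))
    return T, rmse
```

In practice a registration library (e.g. Open3D's ICP, which additionally reports inlier fitness as used in the paper) would replace this brute-force version, whose pairwise distance matrix scales as O(N·M) in memory.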


References

[1] T. Ren and H. Jebelli, “Efficient 3D robotic mapping and navigation method in complex construction environments,” Computer‐Aided Civil and Infrastructure Engineering, vol. 40, no. 12, pp. 1580-1605, 2025.

[2] R. Maskeliūnas and S. Maqsood, “Lightweight Attention-Based Framework for Semantic Segmentation and Compression of 3D LiDAR Data,” SETSCI Conference Proceedings, vol. 22, pp. 7-10, 2025.

[3] P. Chen, X. Zhao, L. Zeng, L. Liu, S. Liu, L. Sun, Z. Li, H. Chen, G. Liu, Z. Qiao, Y. Qu, D. Xu, L. Li, and L. Li, “A Review of Research on SLAM Technology Based on the Fusion of LiDAR and Vision,” Sensors, vol. 25, no. 5, 1447, 2025.

[4] R. Maskeliūnas, S. Maqsood, M. Vaškevičius, and J. Gelšvartas, “Fusing LiDAR and photogrammetry for accurate 3D data: A hybrid approach,” Remote Sensing, vol. 17, no. 3, pp. 1-27, 2025.

[5] Y. Ye, J. Shan, L. Bruzzone, and L. Shen, “Robust registration of multimodal remote sensing images based on structural similarity,” IEEE Transactions on Geoscience and Remote Sensing, vol. 55, no. 5, pp. 2941-2958, 2017.

[6] K. Istenič, N. Gracias, A. Arnaubec, J. Escartín, and R. Garcia, “Scale accuracy evaluation of image-based 3D reconstruction strategies using laser photogrammetry,” Remote Sensing, vol. 11, no. 18, 2093, 2019.

[7] R. Maskeliūnas and S. Maqsood, “Hybrid attention-based PTv3-SE model for efficient point cloud segmentation,” Remote Sensing Applications: Society and Environment, vol. 41, 101891, 2026.

[8] M. Gassilloud, B. Koch, and A. Göritz, “Occlusion mapping reveals the impact of flight and sensing parameters on vertical forest structure exploration with cost-effective UAV based laser scanning,” International Journal of Applied Earth Observation and Geoinformation, vol. 139, 104493, 2025.

[9] J. Yang, H. Li, D. Campbell, and Y. Jia, “Go-ICP: A globally optimal solution to 3D ICP point-set registration,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 11, pp. 2241-2254, 2015.

[10] C. H. Lin, W. C. Ma, A. Torralba, and S. Lucey, “BARF: Bundle-adjusting neural radiance fields,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 5741-5751, 2021.

[11] Y. Chen and G. H. Lee, “DReg-NeRF: Deep Registration for Neural Radiance Fields,” in Proc. IEEE/CVF International Conference on Computer Vision (ICCV), 2023.

[12] A. T. D. S. Ferreira, C. H. Grohmann, M. C. H. Ribeiro, M. S. T. Santos, R. C. de Oliveira, and E. Siegle, “Beach surface model construction: A strategy approach with structure from motion-multi-view stereo,” MethodsX, vol. 12, 102694, 2024.

[13] B. Mildenhall, P. P. Srinivasan, M. Tancik, J. T. Barron, R. Ramamoorthi, and R. Ng, “NeRF: Representing scenes as neural radiance fields for view synthesis,” Communications of the ACM, vol. 65, no. 1, pp. 99-106, 2021.

[14] Y. Li and X. Xiao, “Deep Learning-Based Fusion of Optical, Radar, and LiDAR Data for Advancing Land Monitoring,” Sensors, vol. 25, no. 16, 4991, 2025.

[15] S. Xu, Q. Xue, Z. Chen, S. Fei, and H. Gao, “Complementary information-guided interactive fusion network for HSI and LiDAR data joint classification,” Expert Systems with Applications, vol. 298, 129549, 2026.

[16] N. Xu, R. Qin, and S. Song, “Point cloud registration for LiDAR and photogrammetric data: A critical synthesis and performance analysis on classic and deep learning algorithms,” ISPRS Open J. Photogramm. Remote Sens., vol. 8, p. 100032, 2023.

[17] K. Ma, F. Yan, S. Li, G. Huang, X. Jia, F. Wang, and L. Chen, “Low-Overlap Registration of Multi-Source LiDAR Point Clouds in Urban Scenes Through Dual-Stage Feature Pruning and Progressive Hierarchical Methods,” Remote Sensing, vol. 17, no. 17, 2938, 2025.

[18] J. Wang and H. Xu, “Cross-modal deep learning framework for 3D reconstruction and information integration of Zhejiang wood carving heritage,” Scientific Reports, vol. 16, 465, 2025.

[19] M. Ramezani, Y. Wang, M. Camurri, D. Wisth, M. Mattamala, and M. Fallon, “The Newer College Dataset: Handheld LiDAR, Inertial and Vision with Ground Truth,” in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst. (IROS), Las Vegas, NV, USA, Oct. 2020, pp. 4353–4360.

[20] R. Maskeliūnas, S. Maqsood, M. Vaškevičius, and J. Gelšvartas, “Hybrid deep learning and geometric algorithms for individual object detection in urban LiDAR point clouds,” International Journal of Remote Sensing, vol. 46, no. 23, pp. 9118–9156, 2025.

[21] A. Dosovitskiy, G. Ros, F. Codevilla, A. Lopez, and V. Koltun, “CARLA: An open urban driving simulator,” in Proceedings of the Conference on Robot Learning, pp. 1-16, 2017.

[22] T. Roosendaal and the Blender Community, Blender 3.0 User Manual, Blender Foundation, 2021.


Published

31-03-2026

Issue

Section

Research Articles

How to Cite

[1]
R. Maskeliūnas, S. Maqsood, A. Qurthobi, I. Abbas, M. Vaškevičius, and J. Gelšvartas, “Simulation-Based Evaluation of LiDAR–Photogrammetry Fusion via NeRF Reconstruction and ICP Registration in Urban Scenes”, J. Appl. Methods Electron. Comput., vol. 14, no. 1, pp. 14–19, Mar. 2026, doi: 10.58190/ijamec.2026.162.
