000162110 001__ 162110
000162110 005__ 20251017144612.0
000162110 0247_ $$2doi$$a10.1109/TVCG.2025.3562696
000162110 0248_ $$2sideral$$a144709
000162110 037__ $$aART-2025-144709
000162110 041__ $$aeng
000162110 100__ $$aCollado, José A.
000162110 245__ $$aVirtualized Point Cloud Rendering
000162110 260__ $$c2025
000162110 5060_ $$aAccess copy available to the general public$$fUnrestricted
000162110 5203_ $$aRemote sensing technologies, such as LiDAR, produce billions of points that commonly exceed the storage capacity of the GPU, restricting their processing and rendering. Level of detail (LoD) techniques have been widely investigated, but building the LoD structures is also time-consuming. This study proposes a GPU-driven culling system focused on determining the number of points visible in every frame. It can manipulate point clouds of any arbitrary size while maintaining a low memory footprint in both the CPU and GPU. Instead of organizing point clouds into hierarchical data structures, these are split into groups of points sorted using the Hilbert encoding. This alternative alleviates the occurrence of anomalous groups found in Morton curves. Instead of keeping the entire point cloud in the GPU, points are transferred on demand to ensure real-time capability. Accordingly, our solution can manipulate huge point clouds even on commodity hardware with low memory capacities. Moreover, hole filling is implemented to cover the gaps derived from insufficient density and our LoD system. Our proposal was evaluated with point clouds of up to 18 billion points, achieving an average of 80 frames per second (FPS) without perceptible quality loss. Relaxing memory constraints further enhances visual quality while maintaining an interactive frame rate. We assessed our method on real-world data, comparing it against three state-of-the-art methods, demonstrating its ability to handle significantly larger point clouds. The code is available at https://github.com/Krixtalx/Nimbus.
000162110 536__ $$9info:eu-repo/grantAgreement/ES/MICIU/JDC2023-051785-I$$9info:eu-repo/grantAgreement/ES/MICIU/PID2021-126339OB-I00$$9info:eu-repo/grantAgreement/ES/MICIU/PID2022-137938OA-I00$$9info:eu-repo/grantAgreement/ES/MICIU/TED2021-132120B-I00
000162110 540__ $$9info:eu-repo/semantics/openAccess$$aby$$uhttps://creativecommons.org/licenses/by/4.0/deed.es
000162110 655_4 $$ainfo:eu-repo/semantics/article$$vinfo:eu-repo/semantics/publishedVersion
000162110 700__ $$0(orcid)0000-0003-1423-9496$$aLópez, Alfonso$$uUniversidad de Zaragoza
000162110 700__ $$aJurado, Juan M.
000162110 700__ $$aJiménez, J. Roberto
000162110 7102_ $$15007$$2570$$aUniversidad de Zaragoza$$bDpto. Informát.Ingenie.Sistms.$$cÁrea Lenguajes y Sistemas Inf.
000162110 773__ $$g(2025), [14 pp.]$$pIEEE trans. vis. comput. graph.$$tIEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS$$x1077-2626
000162110 8564_ $$s1613225$$uhttps://zaguan.unizar.es/record/162110/files/texto_completo.pdf$$yPublished version
000162110 8564_ $$s3640104$$uhttps://zaguan.unizar.es/record/162110/files/texto_completo.jpg?subformat=icon$$xicon$$yPublished version
000162110 909CO $$ooai:zaguan.unizar.es:162110$$particulos$$pdriver
000162110 951__ $$a2025-10-17-14:17:54
000162110 980__ $$aARTICLE