000150268 001__ 150268
000150268 005__ 20250201165030.0
000150268 0247_ $$2doi$$a10.1109/LRA.2020.3007440
000150268 0248_ $$2sideral$$a119105
000150268 037__ $$aART-2020-119105
000150268 041__ $$aeng
000150268 100__ $$0(orcid)0000-0003-4638-4655$$aAlonso, I.$$uUniversidad de Zaragoza
000150268 245__ $$a3D-MiniNet: Learning a 2D Representation from Point Clouds for Fast and Efficient 3D LIDAR Semantic Segmentation
000150268 260__ $$c2020
000150268 5060_ $$aAccess copy available to the general public$$fUnrestricted
000150268 5203_ $$aLIDAR semantic segmentation is an essential task that provides 3D semantic information about the environment to robots. Fast and efficient semantic segmentation methods are needed to match the strong computational and temporal restrictions of many real-world robotic applications. This work presents 3D-MiniNet, a novel approach for LIDAR semantic segmentation that combines 3D and 2D learning layers. It first learns a 2D representation from the raw points through a novel projection which extracts local and global information from the 3D data. This representation is fed to an efficient 2D Fully Convolutional Neural Network (FCNN) that produces a 2D semantic segmentation. These 2D semantic labels are re-projected back to the 3D space and enhanced through a post-processing module. The main novelty of our strategy lies in the projection learning module. Our detailed ablation study shows how each component contributes to the final performance of 3D-MiniNet. We validate our approach on well-known public benchmarks (SemanticKITTI and KITTI), where 3D-MiniNet achieves state-of-the-art results while being faster and more parameter-efficient than previous methods.
000150268 536__ $$9info:eu-repo/grantAgreement/ES/DGA-FSE/T45-17R$$9info:eu-repo/grantAgreement/ES/MCIU-AEI/RTC-2017-6421-7$$9info:eu-repo/grantAgreement/ES/MICIU-AEI-FEDER/PGC2018-098817-A-I00$$9info:eu-repo/grantAgreement/ES/MICIU/PID2019-105390RB-I00
000150268 540__ $$9info:eu-repo/semantics/openAccess$$aAll rights reserved$$uhttp://www.europeana.eu/rights/rr-f/
000150268 590__ $$a3.741$$b2020
000150268 591__ $$aROBOTICS$$b9 / 28 = 0.321$$c2020$$dQ2$$eT1
000150268 592__ $$a1.123$$b2020
000150268 593__ $$aArtificial Intelligence$$c2020$$dQ1
000150268 593__ $$aBiomedical Engineering$$c2020$$dQ1
000150268 593__ $$aComputer Science Applications$$c2020$$dQ1
000150268 593__ $$aMechanical Engineering$$c2020$$dQ1
000150268 593__ $$aControl and Optimization$$c2020$$dQ1
000150268 593__ $$aControl and Systems Engineering$$c2020$$dQ1
000150268 593__ $$aHuman-Computer Interaction$$c2020$$dQ1
000150268 593__ $$aComputer Vision and Pattern Recognition$$c2020$$dQ1
000150268 655_4 $$ainfo:eu-repo/semantics/article$$vinfo:eu-repo/semantics/acceptedVersion
000150268 700__ $$0(orcid)0000-0002-6722-5541$$aRiazuelo, L.$$uUniversidad de Zaragoza
000150268 700__ $$0(orcid)0000-0003-1183-349X$$aMontesano, L.$$uUniversidad de Zaragoza
000150268 700__ $$0(orcid)0000-0002-7580-9037$$aMurillo, A.C.$$uUniversidad de Zaragoza
000150268 7102_ $$15007$$2520$$aUniversidad de Zaragoza$$bDpto. Informát.Ingenie.Sistms.$$cÁrea Ingen.Sistemas y Automát.
000150268 7102_ $$15007$$2570$$aUniversidad de Zaragoza$$bDpto. Informát.Ingenie.Sistms.$$cÁrea Lenguajes y Sistemas Inf.
000150268 773__ $$g5, 4 (2020), 5432-5439$$pIEEE Robot. Autom. Lett.$$tIEEE Robotics and Automation Letters$$x2377-3766
000150268 8564_ $$s893403$$uhttps://zaguan.unizar.es/record/150268/files/texto_completo.pdf$$yPostprint
000150268 8564_ $$s3214439$$uhttps://zaguan.unizar.es/record/150268/files/texto_completo.jpg?subformat=icon$$xicon$$yPostprint
000150268 909CO $$ooai:zaguan.unizar.es:150268$$particulos$$pdriver
000150268 951__ $$a2025-02-01-14:36:40
000150268 980__ $$aARTICLE