FreDSNet: Joint Monocular Depth and Semantic Segmentation with Fast Fourier Convolutions from Single Panoramas
Abstract: In this work we present FreDSNet, a deep learning solution that obtains semantic 3D understanding of indoor environments from single panoramas. Omnidirectional images offer task-specific advantages for scene understanding problems because they provide 360-degree contextual information about the entire environment. However, the inherent characteristics of omnidirectional images make accurate object detection and segmentation, as well as good depth estimation, more challenging. To overcome these problems, we exploit convolutions in the frequency domain, obtaining a wider receptive field in each convolutional layer. These convolutions allow us to leverage the full contextual information of omnidirectional images. FreDSNet is the first network that jointly provides monocular depth estimation and semantic segmentation from a single panoramic image by exploiting fast Fourier convolutions. Our experiments show that FreDSNet performs slightly better than the only state-of-the-art method that obtains both semantic segmentation and depth estimation from panoramas. The FreDSNet code is publicly available at https://github.com/Sbrunoberenguel/FreDSNet
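The frequency-domain convolutions described in the abstract can be illustrated with a minimal PyTorch sketch of a fast-Fourier-convolution-style spectral block: a pointwise convolution applied to the Fourier spectrum touches every spatial location at once, which is what yields the image-wide receptive field. This is only an illustration of the general FFC idea under assumed choices (channel counts, normalization, and layer layout are invented here), not the actual FreDSNet architecture; see the GitHub repository above for the real implementation.

import torch
import torch.nn as nn


class SpectralConv(nn.Module):
    """Pointwise convolution in the Fourier domain (FFC-style sketch).

    Every frequency coefficient depends on all pixels, so a 1x1 conv on
    the spectrum has a receptive field covering the whole panorama.
    """

    def __init__(self, channels: int):
        super().__init__()
        # Real and imaginary parts are stacked along the channel axis,
        # so the pointwise conv sees 2 * channels input channels.
        self.conv = nn.Conv2d(2 * channels, 2 * channels, kernel_size=1)
        self.bn = nn.BatchNorm2d(2 * channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Real FFT over the spatial dims: complex tensor (b, c, h, w//2 + 1).
        spec = torch.fft.rfft2(x, norm="ortho")
        spec = torch.cat([spec.real, spec.imag], dim=1)
        spec = self.act(self.bn(self.conv(spec)))
        real, imag = spec.chunk(2, dim=1)
        # Back to the spatial domain at the original resolution.
        return torch.fft.irfft2(torch.complex(real, imag), s=(h, w), norm="ortho")


if __name__ == "__main__":
    # A single equirectangular feature map: batch 1, 64 channels, 256x512.
    feats = torch.randn(1, 64, 256, 512)
    out = SpectralConv(64)(feats)
    print(out.shape)  # torch.Size([1, 64, 256, 512])

In a full network such a spectral branch is typically combined with an ordinary spatial convolution branch, so the layer captures both global context and local detail.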
Language: English
DOI: 10.1109/ICRA48891.2023.10161142
Year: 2023
Published in: IEEE International Conference on Robotics and Automation (2023), 2152-4092
ISSN: 2152-4092

Funding: info:eu-repo/grantAgreement/ES/MICINN-AEI/PID2021-125209OB-I00
Funding: info:eu-repo/grantAgreement/EUR/MICINN/TED2021-129410B-I00
Funding: info:eu-repo/grantAgreement/ES/UZ/JIUZ-2021-TEC-01
Type and form: Conference communication (postprint)
Area (Department): Systems Engineering and Automation Area (Department of Computer Science and Systems Engineering)

Rights: All rights reserved by the journal publisher


Exported from SIDERAL (2024-02-07-14:35:52)


This article appears in the following collections: Articles



Record created 2024-02-07, last modified 2024-02-07

