000088209 001__ 88209
000088209 005__ 20200716101443.0
000088209 0247_ $$2doi$$a10.1109/TVCG.2019.2898757
000088209 0248_ $$2sideral$$a111449
000088209 037__ $$aART-2019-111449
000088209 041__ $$aeng
000088209 100__ $$0(orcid)0000-0002-7796-3177$$aSerrano, A.$$uUniversidad de Zaragoza
000088209 245__ $$aMotion parallax for 360° RGBD video
000088209 260__ $$c2019
000088209 5060_ $$aAccess copy available to the general public$$fUnrestricted
000088209 5203_ $$aWe present a method for adding parallax and real-time playback of 360° videos in Virtual Reality headsets. In current video players, the playback does not respond to translational head movement, which reduces the feeling of immersion, and causes motion sickness for some viewers. Given a 360° video and its corresponding depth (provided by current stereo 360° stitching algorithms), a naive image-based rendering approach would use the depth to generate a 3D mesh around the viewer, then translate it appropriately as the viewer moves their head. However, this approach breaks at depth discontinuities, showing visible distortions, whereas cutting the mesh at such discontinuities leads to ragged silhouettes and holes at disocclusions. We address these issues by improving the given initial depth map to yield cleaner, more natural silhouettes. We rely on a three-layer scene representation, made up of a foreground layer and two static background layers, to handle disocclusions by propagating information from multiple frames for the first background layer, and then inpainting for the second one. Our system works with input from many of today's most popular 360° stereo capture devices (e.g., Yi Halo or GoPro Odyssey), and works well even if the original video does not provide depth information. Our user studies confirm that our method provides a more compelling viewing experience than without parallax, increasing immersion while reducing discomfort and nausea.
000088209 536__ $$9info:eu-repo/grantAgreement/EC/H2020/682080/EU/Intuitive editing of visual appearance from real-world datasets/CHAMELEON$$9This project has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No H2020 682080-CHAMELEON$$9info:eu-repo/grantAgreement/ES/MINECO/TIN2016-78753-P$$9info:eu-repo/grantAgreement/ES/MINECO/TIN2016-79710-P
000088209 540__ $$9info:eu-repo/semantics/openAccess$$aAll rights reserved$$uhttp://www.europeana.eu/rights/rr-f/
000088209 590__ $$a4.558$$b2019
000088209 591__ $$aCOMPUTER SCIENCE, SOFTWARE ENGINEERING$$b10 / 108 = 0.093$$c2019$$dQ1$$eT1
000088209 592__ $$a1.519$$b2019
000088209 593__ $$aComputer Graphics and Computer-Aided Design$$c2019$$dQ1
000088209 593__ $$aSoftware$$c2019$$dQ1
000088209 593__ $$aSignal Processing$$c2019$$dQ1
000088209 593__ $$aComputer Vision and Pattern Recognition$$c2019$$dQ1
000088209 655_4 $$ainfo:eu-repo/semantics/article$$vinfo:eu-repo/semantics/acceptedVersion
000088209 700__ $$aKim, I.
000088209 700__ $$aChen, Z.
000088209 700__ $$aDiVerdi, S.
000088209 700__ $$0(orcid)0000-0002-7503-7022$$aGutierrez, D.$$uUniversidad de Zaragoza
000088209 700__ $$aHertzmann, A.
000088209 700__ $$0(orcid)0000-0003-0060-7278$$aMasia, B.$$uUniversidad de Zaragoza
000088209 7102_ $$15007$$2570$$aUniversidad de Zaragoza$$bDpto. Informát.Ingenie.Sistms.$$cÁrea Lenguajes y Sistemas Inf.
000088209 773__ $$g25, 5 (2019), 1817-1827$$pIEEE trans. vis. comput. graph.$$tIEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS$$x1077-2626
000088209 8564_ $$s1085585$$uhttps://zaguan.unizar.es/record/88209/files/texto_completo.pdf$$yPostprint
000088209 8564_ $$s482284$$uhttps://zaguan.unizar.es/record/88209/files/texto_completo.jpg?subformat=icon$$xicon$$yPostprint
000088209 909CO $$ooai:zaguan.unizar.es:88209$$particulos$$pdriver
000088209 951__ $$a2020-07-16-09:00:36
000088209 980__ $$aARTICLE