<?xml version="1.0" encoding="UTF-8"?>
<collection>
<dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:invenio="http://invenio-software.org/elements/1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd"><dc:identifier>doi:10.1109/MCG.2021.3064688</dc:identifier><dc:language>eng</dc:language><dc:creator>Masia Corcoy, B.</dc:creator><dc:creator>Camon, J.</dc:creator><dc:creator>Gutierrez Pérez, D.</dc:creator><dc:creator>Serrano, Ana</dc:creator><dc:title>Influence of Directional Sound Cues on Users' Exploration across 360° Movie Cuts</dc:title><dc:identifier>ART-2021-126884</dc:identifier><dc:description>Virtual reality (VR) is a powerful medium for 360° storytelling, yet content creators are still in the process of developing cinematographic rules for effectively communicating stories in VR. Traditional cinematography has relied for over a century on well-established techniques for editing, and one of the most recurrent resources for this are cinematic cuts that allow content creators to seamlessly transition between scenes. One fundamental assumption of these techniques is that the content creator can control the camera; however, this assumption breaks in VR: Users are free to explore 360° around them. Recent works have studied the effectiveness of different cuts in 360° content, but the effect of directional sound cues while experiencing these cuts has been less explored. In this work, we provide the first systematic analysis of the influence of directional sound cues in users' behavior across 360° movie cuts, providing insights that can have an impact on deriving conventions for VR storytelling.
© 1981-2012 IEEE.</dc:description><dc:date>2021</dc:date><dc:source>http://zaguan.unizar.es/record/127065</dc:source><dc:doi>10.1109/MCG.2021.3064688</dc:doi><dc:identifier>http://zaguan.unizar.es/record/127065</dc:identifier><dc:identifier>oai:zaguan.unizar.es:127065</dc:identifier><dc:relation>info:eu-repo/grantAgreement/EC/H2020/682080/EU/Intuitive editing of visual appearance from real-world datasets/CHAMELEON</dc:relation><dc:relation>This project has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No H2020 682080-CHAMELEON</dc:relation><dc:relation>info:eu-repo/grantAgreement/EC/H2020/765121/EU/DyViTo: Dynamics in Vision and Touch - the look and feel of stuff/DyViTo</dc:relation><dc:relation>This project has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No H2020 765121-DyViTo</dc:relation><dc:relation>info:eu-repo/grantAgreement/EC/H2020/956585/EU/Predictive Rendering In Manufacture and Engineering/PRIME</dc:relation><dc:relation>This project has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No H2020 956585-PRIME</dc:relation><dc:relation>info:eu-repo/grantAgreement/ES/MINECO/PID2019-105004GB-I00</dc:relation><dc:relation>info:eu-repo/grantAgreement/ES/MINECO/TIN2016-78753-P</dc:relation><dc:identifier.citation>IEEE COMPUTER GRAPHICS AND APPLICATIONS 41, 4 (2021), 64-75</dc:identifier.citation><dc:rights>All rights reserved</dc:rights><dc:rights>http://www.europeana.eu/rights/rr-f/</dc:rights><dc:rights>info:eu-repo/semantics/openAccess</dc:rights></dc:dc>

</collection>