<?xml version="1.0" encoding="UTF-8"?>
<collection>
<dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:invenio="http://invenio-software.org/elements/1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd"><dc:identifier>doi:10.1145/3508361</dc:identifier><dc:language>eng</dc:language><dc:creator>Martin, Daniel</dc:creator><dc:creator>Malpica, Sandra</dc:creator><dc:creator>Gutierrez, Diego</dc:creator><dc:creator>Masia, Belen</dc:creator><dc:creator>Serrano, Ana</dc:creator><dc:title>Multimodality in VR: A survey</dc:title><dc:identifier>ART-2022-129121</dc:identifier><dc:description>Virtual reality (VR) is rapidly growing, with the potential to change the way we create and consume content. In VR, users integrate the multimodal sensory information they receive to create a unified perception of the virtual world. In this survey, we review the body of work addressing multimodality in VR and its role and benefits in user experience, together with different applications that leverage multimodality in many disciplines.
These works encompass several fields of research and demonstrate that multimodality plays a fundamental role in VR: enhancing the experience, improving overall performance, and yielding unprecedented abilities in skill and knowledge transfer.</dc:description><dc:date>2022</dc:date><dc:source>http://zaguan.unizar.es/record/118987</dc:source><dc:doi>10.1145/3508361</dc:doi><dc:identifier>http://zaguan.unizar.es/record/118987</dc:identifier><dc:identifier>oai:zaguan.unizar.es:118987</dc:identifier><dc:relation>info:eu-repo/grantAgreement/EC/H2020/682080/EU/Intuitive editing of visual appearance from real-world datasets/CHAMELEON</dc:relation><dc:relation>This project has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No H2020 682080-CHAMELEON</dc:relation><dc:relation>info:eu-repo/grantAgreement/EC/H2020/765121/EU/DyViTo: Dynamics in Vision and Touch - the look and feel of stuff/DyViTo</dc:relation><dc:relation>This project has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No H2020 765121-DyViTo</dc:relation><dc:relation>info:eu-repo/grantAgreement/EC/H2020/956585/EU/Predictive Rendering In Manufacture and Engineering/PRIME</dc:relation><dc:relation>This project has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No H2020 956585-PRIME</dc:relation><dc:identifier.citation>ACM Computing Surveys 54, 10s (2022), 216 [36 pp.]</dc:identifier.citation><dc:rights>All rights reserved</dc:rights><dc:rights>http://www.europeana.eu/rights/rr-f/</dc:rights><dc:rights>info:eu-repo/semantics/openAccess</dc:rights></dc:dc>

</collection>