000150500 001__ 150500
000150500 005__ 20260112133302.0
000150500 0247_ $$2doi$$a10.1109/TVCG.2024.3456185
000150500 0248_ $$2sideral$$a142628
000150500 037__ $$aART-2024-142628
000150500 041__ $$aeng
000150500 100__ $$aPeng, Xi
000150500 245__ $$aMeasuring and Predicting Multisensory Reaction Latency: A Probabilistic Model for Visual-Auditory Integration
000150500 260__ $$c2024
000150500 5060_ $$aAccess copy available to the general public$$fUnrestricted
000150500 5203_ $$aVirtual/augmented reality (VR/AR) devices offer both immersive imagery and sound. With those wide-field cues, we can simultaneously acquire and process visual and auditory signals to quickly identify objects, make decisions, and take action. While vision often takes precedence in perception, our visual sensitivity degrades in the periphery. In contrast, auditory sensitivity can exhibit an opposite trend due to the elevated interaural time difference. What occurs when these senses are simultaneously integrated, as is common in VR applications such as 360° video watching and immersive gaming? We present a computational and probabilistic model to predict VR users' reaction latency to visual-auditory multisensory targets. To this end, we conducted a psychophysical experiment in VR to measure reaction latency by tracking the onset of eye movements. Experiments with numerical metrics and user studies with naturalistic scenarios showcase the model's accuracy and generalizability. Lastly, we discuss potential applications, such as measuring the sufficiency of target appearance duration in immersive video playback, and suggesting optimal spatial layouts for AR interface design.
000150500 536__ $$9info:eu-repo/grantAgreement/ES/AEI/PID2022-141539NB-I00
000150500 540__ $$9info:eu-repo/semantics/openAccess$$aby$$uhttps://creativecommons.org/licenses/by/4.0/deed.es
000150500 590__ $$a6.5$$b2024
000150500 592__ $$a1.059$$b2024
000150500 591__ $$aCOMPUTER SCIENCE, SOFTWARE ENGINEERING$$b8 / 128 = 0.062$$c2024$$dQ1$$eT1
000150500 593__ $$aComputer Graphics and Computer-Aided Design$$c2024$$dQ1
000150500 593__ $$aSoftware$$c2024$$dQ1
000150500 593__ $$aSignal Processing$$c2024$$dQ1
000150500 593__ $$aComputer Vision and Pattern Recognition$$c2024$$dQ1
000150500 594__ $$a10.2$$b2024
000150500 655_4 $$ainfo:eu-repo/semantics/article$$vinfo:eu-repo/semantics/acceptedVersion
000150500 700__ $$aZhang, Yunxiang
000150500 700__ $$aJiménez-Navarro, Daniel
000150500 700__ $$0(orcid)0000-0002-7796-3177$$aSerrano, Ana$$uUniversidad de Zaragoza
000150500 700__ $$aMyszkowski, Karol
000150500 700__ $$aSun, Qi
000150500 7102_ $$15007$$2570$$aUniversidad de Zaragoza$$bDpto. Informát.Ingenie.Sistms.$$cÁrea Lenguajes y Sistemas Inf.
000150500 773__ $$g30, 11 (2024), 7364-7374$$pIEEE trans. vis. comput. graph.$$tIEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS$$x1077-2626
000150500 8564_ $$s17947286$$uhttps://zaguan.unizar.es/record/150500/files/texto_completo.pdf$$yPostprint
000150500 8564_ $$s3205945$$uhttps://zaguan.unizar.es/record/150500/files/texto_completo.jpg?subformat=icon$$xicon$$yPostprint
000150500 909CO $$ooai:zaguan.unizar.es:150500$$particulos$$pdriver
000150500 951__ $$a2026-01-12-12:58:59
000150500 980__ $$aARTICLE