000145342 001__ 145342
000145342 005__ 20241125101203.0
000145342 0247_ $$2doi$$a10.3390/s23229225
000145342 0248_ $$2sideral$$a140125
000145342 037__ $$aART-2023-140125
000145342 041__ $$aeng
000145342 100__ $$0(orcid)0000-0002-5844-7871$$aZarzà, I. de
000145342 245__ $$aLLM Multimodal Traffic Accident Forecasting
000145342 260__ $$c2023
000145342 5060_ $$aAccess copy available to the general public$$fUnrestricted
000145342 5203_ $$aWith the rise in traffic congestion in urban centers, predicting accidents has become paramount for city planning and public safety. This work comprehensively studies the efficacy of modern deep learning (DL) methods in forecasting traffic accidents and in enhancing Level-4 and Level-5 (L-4 and L-5) driving assistants with actionable visual and language cues. Using a rich dataset detailing accident occurrences, we compare the Transformer model against traditional time series models such as ARIMA and the more recent Prophet model. Through detailed analysis, we also examine feature importance using principal component analysis (PCA) loadings, uncovering the key factors that contribute to accidents. We introduce the idea of real-time interventions with large language models (LLMs) in autonomous driving, using lightweight, compact LLMs such as LLaMA-2 and Zephyr-7B-α. Our exploration extends to multimodality through the Large Language and Vision Assistant (LLaVA), a Visual Language Model (VLM) that bridges visual and linguistic cues, used in conjunction with deep probabilistic reasoning to enhance the real-time responsiveness of autonomous driving systems. We elucidate the advantages of employing large multimodal models together with DL and deep probabilistic programming to improve the performance and usability of time series forecasting and feature importance analysis, particularly in a self-driving scenario. This work paves the way for safer, smarter cities underpinned by data-driven decision making.
000145342 536__ $$9info:eu-repo/grantAgreement/ES/MCIN/AEI/PID2021-122580NB-I00
000145342 540__ $$9info:eu-repo/semantics/openAccess$$aby$$uhttp://creativecommons.org/licenses/by/3.0/es/
000145342 590__ $$a3.4$$b2023
000145342 591__ $$aCHEMISTRY, ANALYTICAL$$b34 / 106 = 0.321$$c2023$$dQ2$$eT1
000145342 591__ $$aINSTRUMENTS & INSTRUMENTATION$$b24 / 76 = 0.316$$c2023$$dQ2$$eT1
000145342 591__ $$aENGINEERING, ELECTRICAL & ELECTRONIC$$b122 / 353 = 0.346$$c2023$$dQ2$$eT2
000145342 592__ $$a0.786$$b2023
000145342 593__ $$aInstrumentation$$c2023$$dQ1
000145342 593__ $$aAnalytical Chemistry$$c2023$$dQ1
000145342 593__ $$aAtomic and Molecular Physics, and Optics$$c2023$$dQ1
000145342 593__ $$aInformation Systems$$c2023$$dQ2
000145342 593__ $$aMedicine (miscellaneous)$$c2023$$dQ2
000145342 593__ $$aBiochemistry$$c2023$$dQ2
000145342 593__ $$aElectrical and Electronic Engineering$$c2023$$dQ2
000145342 594__ $$a7.3$$b2023
000145342 655_4 $$ainfo:eu-repo/semantics/article$$vinfo:eu-repo/semantics/publishedVersion
000145342 700__ $$ade Curtò, J.
000145342 700__ $$aRoig, Gemma
000145342 700__ $$aCalafate, Carlos T.
000145342 773__ $$g23, 22 (2023), 9225 [27 pp.]$$pSensors$$tSensors$$x1424-8220
000145342 8564_ $$s4386951$$uhttps://zaguan.unizar.es/record/145342/files/texto_completo.pdf$$yPublished version
000145342 8564_ $$s2661061$$uhttps://zaguan.unizar.es/record/145342/files/texto_completo.jpg?subformat=icon$$xicon$$yPublished version
000145342 909CO $$ooai:zaguan.unizar.es:145342$$particulos$$pdriver
000145342 951__ $$a2024-11-22-12:12:45
000145342 980__ $$aARTICLE