<?xml version="1.0" encoding="UTF-8"?>
<collection>
<dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:invenio="http://invenio-software.org/elements/1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd"><dc:identifier>doi:10.1109/CVPRW56347.2022.00301</dc:identifier><dc:language>eng</dc:language><dc:creator>Sabater, Alberto</dc:creator><dc:creator>Montesano, Luis</dc:creator><dc:creator>Murillo, Ana C.</dc:creator><dc:title>Event Transformer. A sparse-aware solution for efficient event data processing</dc:title><dc:identifier>ART-2022-130997</dc:identifier><dc:description>Event cameras are sensors of great interest for many applications that run in low-resource and challenging environments. They log sparse illumination changes with high temporal resolution and high dynamic range, while consuming minimal power. However, top-performing methods often ignore specific event-data properties, leading to generic but computationally expensive algorithms, while efforts toward efficient solutions usually do not achieve top accuracy on complex tasks. This work proposes a novel framework, Event Transformer (EvT), that effectively takes advantage of event-data properties to be highly efficient and accurate. We introduce a new patch-based event representation and a compact transformer-like architecture to process it. EvT is evaluated on different event-based benchmarks for action and gesture recognition. Evaluation results show better or comparable accuracy to the state of the art while requiring significantly fewer computational resources, which enables EvT to run with minimal latency on both GPU and CPU.</dc:description><dc:date>2022</dc:date><dc:source>http://zaguan.unizar.es/record/150271</dc:source><dc:doi>10.1109/CVPRW56347.2022.00301</dc:doi><dc:identifier>http://zaguan.unizar.es/record/150271</dc:identifier><dc:identifier>oai:zaguan.unizar.es:150271</dc:identifier><dc:relation>info:eu-repo/grantAgreement/ES/DGA-FSE/T45-17R</dc:relation><dc:relation>info:eu-repo/grantAgreement/ES/MICIU-AEI-FEDER/PGC2018-098817-A-I00</dc:relation><dc:identifier.citation>IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops 2022 (2022), 2676-2685</dc:identifier.citation><dc:rights>All rights reserved</dc:rights><dc:rights>http://www.europeana.eu/rights/rr-f/</dc:rights><dc:rights>info:eu-repo/semantics/openAccess</dc:rights></dc:dc>

</collection>