Abstract: Event cameras record sparse illumination changes with high temporal resolution and high dynamic range. Thanks to their sparse output and low power consumption, they are increasingly used in applications such as AR/VR and autonomous driving. Current top-performing methods often ignore specific event-data properties, leading to generic but computationally expensive algorithms, while event-aware methods do not perform as well. We propose Event Transformer++, which improves on our earlier work EvT with a refined patch-based event representation and a more robust backbone, achieving more accurate results while still exploiting event-data sparsity to increase efficiency. Additionally, we show how our system can work with different data modalities and propose task-specific output heads for event-stream classification (i.e., action recognition) and per-pixel prediction (dense depth estimation). Evaluation results show performance better than the state of the art while requiring minimal computational resources, on both GPU and CPU.
Language: English
DOI: 10.1109/TPAMI.2023.3311336
Year: 2023
Published in: IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 45, 12 (2023), 16013-16020
ISSN: 0162-8828
JCR impact factor: 20.8 (2023)
JCR category: ENGINEERING, ELECTRICAL & ELECTRONIC, rank 3 / 352 = 0.009 (2023) - Q1 - T1
JCR category: COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE, rank 2 / 197 = 0.01 (2023) - Q1 - T1
CITESCORE impact factor: 28.4 - Applied Mathematics (Q1) - Software (Q1) - Computer Vision and Pattern Recognition (Q1) - Artificial Intelligence (Q1) - Computational Theory and Mathematics (Q1)
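To give a concrete sense of the patch-based event representation mentioned in the abstract, below is a minimal, hypothetical sketch, not the authors' implementation. It assumes events arrive as (x, y, t, polarity) tuples, bins a time window into a two-channel (ON/OFF) count frame, splits it into patches, and keeps only the patches that received events, which is one simple way to exploit event-data sparsity; the function name, patch size, and encoding are illustrative assumptions.

```python
# Hypothetical sketch of a patch-based event representation
# (illustrative only; not the method from the paper).
import numpy as np

def events_to_patch_tokens(events, height, width, patch_size=8):
    """Bin events into a 2-channel (ON/OFF) count frame, split it into
    patch_size x patch_size patches, and keep only non-empty patches.

    events: array of shape (N, 4) with columns (x, y, t, polarity),
            polarity in {0, 1}.
    Returns (tokens, positions): tokens of shape (M, 2*patch_size**2)
            and the (row, col) grid coordinates of each active patch.
    """
    frame = np.zeros((2, height, width), dtype=np.float32)
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    p = events[:, 3].astype(int)
    np.add.at(frame, (p, y, x), 1.0)  # accumulate event counts per polarity

    ph, pw = height // patch_size, width // patch_size
    # Rearrange the frame into a (ph, pw) grid of flattened patches.
    patches = (frame[:, :ph * patch_size, :pw * patch_size]
               .reshape(2, ph, patch_size, pw, patch_size)
               .transpose(1, 3, 0, 2, 4)
               .reshape(ph, pw, -1))
    # Sparsity: only patches that actually received events become tokens.
    mask = patches.sum(axis=-1) > 0
    rows, cols = np.nonzero(mask)
    return patches[mask], np.stack([rows, cols], axis=1)

# Usage: 1000 random events on a 128x128 sensor.
rng = np.random.default_rng(0)
ev = np.stack([rng.integers(0, 128, 1000),   # x
               rng.integers(0, 128, 1000),   # y
               rng.uniform(0.0, 1.0, 1000),  # t
               rng.integers(0, 2, 1000)],    # polarity
              axis=1)
tokens, pos = events_to_patch_tokens(ev, 128, 128)
print(tokens.shape, pos.shape)  # (M, 128) tokens for M active patches
```

In a transformer pipeline of this kind, only the M active-patch tokens (plus their positions) would be fed to the backbone, so compute scales with the number of patches containing events rather than with the full sensor resolution.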