000169916 001__ 169916
000169916 005__ 20260316125348.0
000169916 0247_ $$2doi$$a10.1109/TPAMI.2026.3652831
000169916 0248_ $$2sideral$$a148448
000169916 037__ $$aART-2026-148448
000169916 041__ $$aeng
000169916 100__ $$aMur-Labadia, Lorenzo$$uUniversidad de Zaragoza
000169916 245__ $$aIntegrating Affordances and Attention models for Short-Term Object Interaction Anticipation
000169916 260__ $$c2026
000169916 5060_ $$aAccess copy available to the general public$$fUnrestricted
000169916 5203_ $$aShort-Term object-interaction Anticipation (STA) consists of detecting the location of the next-active objects, the noun and verb categories of the interaction, and the time to contact from the observation of egocentric video. This ability is fundamental for wearable assistants to understand a user's goals and provide timely assistance, or to enable human-robot interaction. In this work, we present a method to improve the performance of STA predictions. Our contributions are twofold: 1) We propose STAformer and STAformer++, two novel attention-based architectures integrating frame-guided temporal pooling, dual image-video attention, and multiscale feature fusion to support STA predictions from an image-video input pair; 2) We introduce two novel modules to ground STA predictions in human behavior by modeling affordances. First, we integrate an environment affordance model that acts as a persistent memory of the interactions that can take place in a given physical scene. We explore how to integrate environment affordances both via simple late fusion and with an approach that adaptively learns how to best fuse affordances with end-to-end predictions. Second, we predict interaction hotspots from the observation of hand and object trajectories, increasing confidence in STA predictions localized around the hotspot. Our results show significant improvements in Overall Top-5 mAP, with gains of up to +23% on Ego4D and +31% on a novel set of curated EPIC-Kitchens STA labels. We release the code, annotations, and pre-extracted affordances on Ego4D and EPIC-Kitchens to encourage future research in this area.
000169916 540__ $$9info:eu-repo/semantics/openAccess$$aAll rights reserved$$uhttp://www.europeana.eu/rights/rr-f/
000169916 655_4 $$ainfo:eu-repo/semantics/article$$vinfo:eu-repo/semantics/acceptedVersion
000169916 700__ $$0(orcid)0000-0002-6741-844X$$aMartinez-Cantin, Ruben$$uUniversidad de Zaragoza
000169916 700__ $$0(orcid)0000-0001-5209-2267$$aGuerrero, Jose J.$$uUniversidad de Zaragoza
000169916 700__ $$aFarinella, Giovanni Maria
000169916 700__ $$aFurnari, Antonino
000169916 7102_ $$15007$$2520$$aUniversidad de Zaragoza$$bDpto. Informát.Ingenie.Sistms.$$cÁrea Ingen.Sistemas y Automát.
000169916 773__ $$g(2026), [17 pp.]$$pIEEE trans. pattern anal. mach. intell.$$tIEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE$$x0162-8828
000169916 8564_ $$s2370573$$uhttps://zaguan.unizar.es/record/169916/files/texto_completo.pdf$$yPostprint
000169916 8564_ $$s3279813$$uhttps://zaguan.unizar.es/record/169916/files/texto_completo.jpg?subformat=icon$$xicon$$yPostprint
000169916 909CO $$ooai:zaguan.unizar.es:169916$$particulos$$pdriver
000169916 951__ $$a2026-03-16-08:29:38
000169916 980__ $$aARTICLE