000150337 001__ 150337
000150337 005__ 20250203110925.0
000150337 0247_ $$2doi$$a10.1007/978-3-031-73337-6_10
000150337 0248_ $$2sideral$$a142469
000150337 037__ $$aART-2024-142469
000150337 041__ $$aeng
000150337 100__ $$aMur-Labadia, Lorenzo$$uUniversidad de Zaragoza
000150337 245__ $$aAFF-ttention! Affordances and Attention Models for Short-Term Object Interaction Anticipation
000150337 260__ $$c2024
000150337 5060_ $$aAccess copy available to the general public$$fUnrestricted
000150337 5203_ $$aShort-Term object-interaction Anticipation (STA) consists of detecting the location of the next-active objects, the noun and verb categories of the interaction, and the time to contact from the observation of egocentric video. This ability is fundamental for wearable assistants or human-robot interaction to understand the user’s goals, but there is still room for improvement to perform STA in a precise and reliable way. In this work, we improve the performance of STA predictions with two contributions: 1) We propose STAformer, a novel attention-based architecture integrating frame-guided temporal pooling, dual image-video attention, and multiscale feature fusion to support STA predictions from an image-input video pair; 2) We introduce two novel modules to ground STA predictions on human behavior by modeling affordances. First, we integrate an environment affordance model which acts as a persistent memory of interactions that can take place in a given physical scene. Second, we predict interaction hotspots from the observation of hands and object trajectories, increasing confidence in STA predictions localized around the hotspot. Our results show significant relative Overall Top-5 mAP improvements of up to [...] on Ego4D and [...] on a novel set of curated EPIC-Kitchens STA labels. We will release the code, annotations, and pre-extracted affordances on Ego4D and EPIC-Kitchens to encourage future research in this area.
000150337 536__ $$9info:eu-repo/grantAgreement/ES/MICINN-AEI/PID2021-125209OB-I00$$9info:eu-repo/grantAgreement/EUR/MICINN/TED2021-129410B-I00
000150337 540__ $$9info:eu-repo/semantics/openAccess$$aAll rights reserved$$uhttp://www.europeana.eu/rights/rr-f/
000150337 655_4 $$ainfo:eu-repo/semantics/article$$vinfo:eu-repo/semantics/acceptedVersion
000150337 700__ $$0(orcid)0000-0002-6741-844X$$aMartinez-Cantin, Ruben$$uUniversidad de Zaragoza
000150337 700__ $$0(orcid)0000-0001-5209-2267$$aGuerrero, Jose J.$$uUniversidad de Zaragoza
000150337 700__ $$aFarinella, Giovanni Maria
000150337 700__ $$aFurnari, Antonino
000150337 7102_ $$15007$$2520$$aUniversidad de Zaragoza$$bDpto. Informát.Ingenie.Sistms.$$cÁrea Ingen.Sistemas y Automát.
000150337 773__ $$g2024 (2024), 167-184$$pLect. notes comput. sci.$$tLecture Notes in Computer Science$$x0302-9743
000150337 8564_ $$s9282596$$uhttps://zaguan.unizar.es/record/150337/files/texto_completo.pdf$$yPostprint$$zinfo:eu-repo/date/embargoEnd/2025-10-31
000150337 8564_ $$s1504944$$uhttps://zaguan.unizar.es/record/150337/files/texto_completo.jpg?subformat=icon$$xicon$$yPostprint$$zinfo:eu-repo/date/embargoEnd/2025-10-31
000150337 909CO $$ooai:zaguan.unizar.es:150337$$particulos$$pdriver
000150337 951__ $$a2025-02-03-10:49:53
000150337 980__ $$aARTICLE