000150270 001__ 150270 000150270 005__ 20250201165030.0 000150270 0247_ $$2doi$$a10.1109/LRA.2021.3101822 000150270 0248_ $$2sideral$$a126154 000150270 037__ $$aART-2021-126154 000150270 041__ $$aeng 000150270 100__ $$aSabater, A.$$uUniversidad de Zaragoza 000150270 245__ $$aDomain and View-Point Agnostic Hand Action Recognition 000150270 260__ $$c2021 000150270 5060_ $$aAccess copy available to the general public$$fUnrestricted 000150270 5203_ $$aHand action recognition is a special case of action recognition, with applications in human-robot interaction, virtual reality, and life-logging systems. Building action classifiers able to work across such heterogeneous action domains is very challenging: there are very subtle changes across different actions within a given application, but also large variations across domains (e.g., virtual reality vs. life-logging). This work introduces a novel skeleton-based hand motion representation model that tackles this problem. The framework we propose is agnostic to the application domain and the camera recording view-point. When working on a single domain (intra-domain action classification), our approach performs better than or comparably to current state-of-the-art methods on well-known hand action recognition benchmarks. More importantly, when performing hand action recognition for action domains and camera perspectives that our approach has not been trained on (cross-domain action classification), our proposed framework achieves performance comparable to intra-domain state-of-the-art methods. These experiments show the robustness and generalization capabilities of our framework.
000150270 536__ $$9info:eu-repo/grantAgreement/ES/DGA-FSE/T45-17R$$9info:eu-repo/grantAgreement/ES/MICIU-AEI-FEDER/PGC2018-098817-A-I00 000150270 540__ $$9info:eu-repo/semantics/openAccess$$aAll rights reserved$$uhttp://www.europeana.eu/rights/rr-f/ 000150270 590__ $$a4.321$$b2021 000150270 591__ $$aROBOTICS$$b11 / 30 = 0.367$$c2021$$dQ2$$eT2 000150270 592__ $$a2.206$$b2021 000150270 593__ $$aArtificial Intelligence$$c2021$$dQ1 000150270 593__ $$aBiomedical Engineering$$c2021$$dQ1 000150270 593__ $$aMechanical Engineering$$c2021$$dQ1 000150270 593__ $$aControl and Optimization$$c2021$$dQ1 000150270 593__ $$aControl and Systems Engineering$$c2021$$dQ1 000150270 593__ $$aComputer Vision and Pattern Recognition$$c2021$$dQ1 000150270 594__ $$a8.0$$b2021 000150270 655_4 $$ainfo:eu-repo/semantics/article$$vinfo:eu-repo/semantics/acceptedVersion 000150270 700__ $$0(orcid)0000-0003-4638-4655$$aAlonso, I.$$uUniversidad de Zaragoza 000150270 700__ $$0(orcid)0000-0003-1183-349X$$aMontesano, L.$$uUniversidad de Zaragoza 000150270 700__ $$0(orcid)0000-0002-7580-9037$$aMurillo, A.C.$$uUniversidad de Zaragoza 000150270 7102_ $$15007$$2520$$aUniversidad de Zaragoza$$bDpto. Informát.Ingenie.Sistms.$$cÁrea Ingen.Sistemas y Automát. 000150270 7102_ $$15007$$2570$$aUniversidad de Zaragoza$$bDpto. Informát.Ingenie.Sistms.$$cÁrea Lenguajes y Sistemas Inf. 000150270 773__ $$g6, 4 (2021), 7823-7830$$pIEEE Robot. autom. let.$$tIEEE Robotics and Automation Letters$$x2377-3766 000150270 8564_ $$s1995857$$uhttps://zaguan.unizar.es/record/150270/files/texto_completo.pdf$$yPostprint 000150270 8564_ $$s3119397$$uhttps://zaguan.unizar.es/record/150270/files/texto_completo.jpg?subformat=icon$$xicon$$yPostprint 000150270 909CO $$ooai:zaguan.unizar.es:150270$$particulos$$pdriver 000150270 951__ $$a2025-02-01-14:36:44 000150270 980__ $$aARTICLE