000126824 001__ 126824
000126824 005__ 20240731103344.0
000126824 0247_ $$2doi$$a10.1007/s00779-023-01721-4
000126824 0248_ $$2sideral$$a134276
000126824 037__ $$aART-2023-134276
000126824 041__ $$aeng
000126824 100__ $$aOspitia-Medina, Yesid
000126824 245__ $$aENSA dataset: a dataset of songs by non-superstar artists tested with an emotional analysis based on time-series
000126824 260__ $$c2023
000126824 5060_ $$aAccess copy available to the general public$$fUnrestricted
000126824 5203_ $$aThis paper presents a novel dataset of songs by non-superstar artists in which a set of musical data is collected, identifying for each song its musical structure and the artist's emotional perception through a categorical emotional labeling process. The generation of this preliminary dataset is motivated by the biases that have been detected in the analysis of the datasets most widely used in the field of emotion-based music recommendation. This new dataset contains 234 minutes of audio and 60 complete, labeled songs. In addition, an emotional analysis is carried out based on the representation of dynamic emotional perception through a time-series approach, in which the similarity values generated by the dynamic time warping (DTW) algorithm are analyzed and then used to implement a clustering process with the K-means algorithm. Clustering is also implemented with Uniform Manifold Approximation and Projection (UMAP), a manifold learning and dimension reduction technique, and the HDBSCAN algorithm is applied to determine the optimal number of clusters. The results obtained from the different clustering strategies are compared and, in a preliminary analysis, significant consistency is found between them. Based on these findings and experimental results, a discussion is presented highlighting the importance of working with complete songs, preferably with a well-defined musical structure, and of considering the emotional variation that characterizes a song during the listening experience, in which the intensity of the emotion usually changes between verse, bridge, and chorus.
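000126824 500__ $$aIllustrative note (not from the record): a minimal Python sketch of the kind of clustering pipeline the abstract outlines. The input series, parameter values, and the choice of treating each song's row of DTW distances as its feature vector for K-means are assumptions for illustration, not the paper's stated implementation.

    import numpy as np
    from sklearn.cluster import KMeans
    import umap      # umap-learn
    import hdbscan

    def dtw_distance(a, b):
        """Classic dynamic-programming DTW cost between two 1-D series."""
        n, m = len(a), len(b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = abs(a[i - 1] - b[j - 1])
                cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
        return cost[n, m]

    # Hypothetical input: one emotional-intensity time series per song (60 songs).
    rng = np.random.default_rng(0)
    series = [rng.random(100) for _ in range(60)]

    # Pairwise DTW distance matrix between songs.
    n = len(series)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dist[i, j] = dist[j, i] = dtw_distance(series[i], series[j])

    # Strategy 1: K-means over the DTW distances (each song represented
    # by its vector of distances to all other songs; n_clusters is illustrative).
    kmeans_labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(dist)

    # Strategy 2: UMAP embedding of the precomputed distances, with HDBSCAN
    # determining the number of clusters itself (min_cluster_size is illustrative).
    embedding = umap.UMAP(metric="precomputed", random_state=0).fit_transform(dist)
    hdbscan_labels = hdbscan.HDBSCAN(min_cluster_size=5).fit_predict(embedding)

    print(kmeans_labels)
    print(hdbscan_labels)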
000126824 536__ $$9info:eu-repo/grantAgreement/ES/DGA/T60-23R$$9info:eu-repo/grantAgreement/ES/MCIU-AEI-FEDER/RTI2018-096986-B-C31$$9info:eu-repo/grantAgreement/EUR/MINECO/TED2021-130374B-C22
000126824 540__ $$9info:eu-repo/semantics/openAccess$$aAll rights reserved$$uhttp://www.europeana.eu/rights/rr-f/
000126824 592__ $$a0.654$$b2023
000126824 593__ $$aLibrary and Information Sciences$$c2023$$dQ1
000126824 593__ $$aManagement Science and Operations Research$$c2023$$dQ2
000126824 593__ $$aComputer Science Applications$$c2023$$dQ2
000126824 593__ $$aHardware and Architecture$$c2023$$dQ2
000126824 594__ $$a6.6$$b2023
000126824 655_4 $$ainfo:eu-repo/semantics/article$$vinfo:eu-repo/semantics/publishedVersion
000126824 700__ $$0(orcid)0000-0002-7500-4650$$aBeltrán, José Ramón$$uUniversidad de Zaragoza
000126824 700__ $$0(orcid)0000-0002-9315-6391$$aBaldassarri, Sandra$$uUniversidad de Zaragoza
000126824 7102_ $$15008$$2785$$aUniversidad de Zaragoza$$bDpto. Ingeniería Electrón.Com.$$cÁrea Tecnología Electrónica
000126824 7102_ $$15007$$2570$$aUniversidad de Zaragoza$$bDpto. Informát.Ingenie.Sistms.$$cÁrea Lenguajes y Sistemas Inf.
000126824 773__ $$g27 (2023), 1909-192$$pPersonal and Ubiquitous Computing$$tPersonal and Ubiquitous Computing$$x1617-4909
000126824 8564_ $$s1352676$$uhttps://zaguan.unizar.es/record/126824/files/texto_completo.pdf$$yVersión publicada
000126824 8564_ $$s2344969$$uhttps://zaguan.unizar.es/record/126824/files/texto_completo.jpg?subformat=icon$$xicon$$yVersión publicada
000126824 909CO $$ooai:zaguan.unizar.es:126824$$particulos$$pdriver
000126824 951__ $$a2024-07-31-09:51:22
000126824 980__ $$aARTICLE