000165170 001__ 165170
000165170 005__ 20251219174251.0
000165170 0247_ $$2doi$$a10.1016/j.neucom.2025.132208
000165170 0248_ $$2sideral$$a146719
000165170 037__ $$aART-2026-146719
000165170 041__ $$aeng
000165170 100__ $$0(orcid)0000-0002-0235-2267$$aHernandez-Olivan, Carlos
000165170 245__ $$aSymbolic music structure analysis with graph representations and changepoint detection methods
000165170 260__ $$c2026
000165170 5060_ $$aAccess copy available to the general public$$fUnrestricted
000165170 5203_ $$aMusic Structure Analysis (MSA), particularly symbolic music boundary detection, is crucial for understanding and creating music, yet segmenting music structure at various hierarchical levels remains an open challenge. In this work, we propose three methods for symbolic music boundary detection: Norm, an adapted feature-based approach, and two novel graph-based algorithms, G-PELT and G-Window. Our graph representations offer a powerful way to encode symbolic music, enabling effective structure analysis without explicit feature extraction. We conducted an extensive ablation study using three public datasets: Schubert Winterreise (SWD), Beethoven Piano Sonatas (BPS), and the Essen Folk Dataset, which feature diverse musical forms and instrumentation. This allowed us to compare the methods, optimize their parameters for different music styles, and evaluate performance across low, mid, and high structural levels. Our findings demonstrate that our graph-based approaches are highly effective; for instance, the online and unsupervised G-PELT method achieved an F1-score of 0.5640 with a 1-bar tolerance on the SWD dataset. We further illustrate how algorithm parameters can be adjusted to target specific structural granularities. To promote reproducibility and usability, we have integrated the best-performing methods and their optimal parameters for each structural level into musicaiz, an open-source Python package. We anticipate these methods will benefit various Music Information Retrieval (MIR) tasks, including structure-aware music generation, classification, and key change detection.
000165170 536__ $$9info:eu-repo/grantAgreement/ES/DGA-FEDER/T60-20R-AFFECTIVE LAB$$9info:eu-repo/grantAgreement/ES/MICINN/RTI2018-096986-B-C31
000165170 540__ $$9info:eu-repo/semantics/openAccess$$aby-nc-nd$$uhttps://creativecommons.org/licenses/by-nc-nd/4.0/deed.es
000165170 655_4 $$ainfo:eu-repo/semantics/article$$vinfo:eu-repo/semantics/publishedVersion
000165170 700__ $$0(orcid)0000-0002-7500-4650$$aBeltran, Jose R.$$uUniversidad de Zaragoza
000165170 7102_ $$15008$$2785$$aUniversidad de Zaragoza$$bDpto. Ingeniería Electrón.Com.$$cÁrea Tecnología Electrónica
000165170 773__ $$g666 (2026), 132208 [13 pp.]$$pNeurocomputing$$tNeurocomputing$$x0925-2312
000165170 8564_ $$s5137093$$uhttps://zaguan.unizar.es/record/165170/files/texto_completo.pdf$$yPublished version
000165170 8564_ $$s2525334$$uhttps://zaguan.unizar.es/record/165170/files/texto_completo.jpg?subformat=icon$$xicon$$yPublished version
000165170 909CO $$ooai:zaguan.unizar.es:165170$$particulos$$pdriver
000165170 951__ $$a2025-12-19-14:42:17
000165170 980__ $$aARTICLE