000109401 001__ 109401
000109401 005__ 20230519145454.0
000109401 0247_ $$2doi$$a10.9781/ijimai.2021.10.005
000109401 0248_ $$2sideral$$a125362
000109401 037__ $$aART-2021-125362
000109401 041__ $$aeng
000109401 100__ $$0(orcid)0000-0002-0235-2267$$aHernández Oliván, C.$$uUniversidad de Zaragoza
000109401 245__ $$aMusic boundary detection using convolutional neural networks: a comparative analysis of combined input features
000109401 260__ $$c2021
000109401 5060_ $$aAccess copy available to the general public$$fUnrestricted
000109401 5203_ $$aThe analysis of the structure of musical pieces remains a challenging task for Artificial Intelligence, especially in the field of Deep Learning. It requires prior identification of the structural boundaries of the music pieces, a problem that has recently been addressed with unsupervised methods and with supervised neural networks trained on human annotations. The supervised neural networks used in previous studies are Convolutional Neural Networks (CNN) that take Mel-Scaled Log-magnitude Spectrograms (MLS), Self-Similarity Matrices (SSM) or Self-Similarity Lag Matrices (SSLM) as inputs. In previously published studies, pre-processing is done in different ways using different distance metrics, and different audio features are used for computing the inputs, so a generalised pre-processing method for calculating model inputs is missing. The objective of this work is to establish such a general method by comparing the results obtained with inputs calculated from different pooling strategies, distance metrics and audio features, also taking into account the computing time needed to obtain them. We also establish the most effective combination of inputs to deliver to the CNN, providing the most efficient way to extract the boundaries of the structure of the music pieces. With an adequate combination of input matrices and pooling strategies, we obtain an F1 score of 0.411, which outperforms current work carried out under the same conditions (same publicly available dataset for training and testing).
000109401 540__ $$9info:eu-repo/semantics/openAccess$$aby$$uhttp://creativecommons.org/licenses/by/3.0/es/
000109401 590__ $$a4.936$$b2021
000109401 592__ $$a0.0$$b2021
000109401 594__ $$a0.6$$b2021
000109401 591__ $$aCOMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS$$b36 / 112 = 0.321$$c2021$$dQ2$$eT1
000109401 593__ $$aArtificial Intelligence$$c2021
000109401 591__ $$aCOMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE$$b48 / 146 = 0.329$$c2021$$dQ2$$eT1
000109401 593__ $$aComputer Networks and Communications$$c2021
000109401 593__ $$aStatistics and Probability$$c2021
000109401 593__ $$aSignal Processing$$c2021
000109401 593__ $$aComputer Vision and Pattern Recognition$$c2021
000109401 655_4 $$ainfo:eu-repo/semantics/article$$vinfo:eu-repo/semantics/publishedVersion
000109401 700__ $$0(orcid)0000-0002-7500-4650$$aBeltrán, J.R.$$uUniversidad de Zaragoza
000109401 700__ $$0(orcid)0000-0002-1041-0498$$aDiaz-Guerra, D.$$uUniversidad de Zaragoza
000109401 7102_ $$15008$$2785$$aUniversidad de Zaragoza$$bDpto. Ingeniería Electrón.Com.$$cÁrea Tecnología Electrónica
000109401 773__ $$g7, 2 (2021), 78-88$$pInt. j. interact. multimed. artif. intell.$$tInternational journal of interactive multimedia and artificial intelligence$$x1989-1660
000109401 8564_ $$s1258586$$uhttps://zaguan.unizar.es/record/109401/files/texto_completo.pdf$$yPublished version
000109401 8564_ $$s2939182$$uhttps://zaguan.unizar.es/record/109401/files/texto_completo.jpg?subformat=icon$$xicon$$yPublished version
000109401 909CO $$ooai:zaguan.unizar.es:109401$$particulos$$pdriver
000109401 951__ $$a2023-05-18-14:49:11
000109401 980__ $$aARTICLE