Explainable artificial intelligence toward usable and trustworthy computer-aided diagnosis of multiple sclerosis from Optical Coherence Tomography

Hernandez, Monica (Universidad de Zaragoza) ; Ramon-Julvez, Ubaldo (Universidad de Zaragoza) ; Vilades, Elisa (Universidad de Zaragoza) ; Cordon, Beatriz (Universidad de Zaragoza) ; Mayordomo, Elvira (Universidad de Zaragoza) ; Garcia-Martin, Elena (Universidad de Zaragoza)
Abstract: Background: Several studies indicate that the anterior visual pathway provides information about the dynamics of axonal degeneration in Multiple Sclerosis (MS). Current research in the field focuses on identifying the most discriminative features between patients and controls and on developing machine learning models that yield computer-aided solutions widely usable in clinical practice. However, most studies are conducted with small samples and the models are used as black boxes. Clinicians should not trust machine learning decisions unless they come with comprehensive and easily understandable explanations.
Materials and methods: A total of 216 eyes from 111 healthy controls and 100 eyes from 59 patients with relapsing-remitting MS were enrolled. The feature set was obtained from the thickness of the ganglion cell layer (GCL) and the retinal nerve fiber layer (RNFL). Measurements were acquired with the novel Posterior Pole protocol of the Spectralis Optical Coherence Tomography (OCT) device. We compared two black-box methods (gradient boosting and random forests) with a glass-box method (explainable boosting machine). Explainability was studied using SHAP for the black-box methods and the intrinsic scores of the glass-box method.
Results: The best-performing models were obtained for the GCL. Explainability pointed to the temporal sector of the GCL, which is typically disrupted or thinned in MS, and to the relationship between low thickness values and a high probability of MS, which is consistent with clinical knowledge.
Conclusions: The insights on how to use explainability presented in this work are a first important step toward a trustworthy computer-aided solution for the diagnosis of MS with OCT.
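The following is a minimal sketch (not the authors' code) of the black-box vs. glass-box comparison described in the abstract, assuming Python with scikit-learn, interpret, and shap available; the synthetic features are a hypothetical stand-in for the per-sector GCL/RNFL thickness measurements, and all model settings are illustrative.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from interpret.glassbox import ExplainableBoostingClassifier
    import shap

    # Synthetic stand-in for per-sector GCL/RNFL thickness features
    # (hypothetical data; only the rough sample size of 316 eyes is mirrored).
    X, y = make_classification(n_samples=316, n_features=64, n_informative=10, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

    # Black-box models compared in the study.
    gb = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
    rf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Glass-box model: explainable boosting machine.
    ebm = ExplainableBoostingClassifier(random_state=0).fit(X_train, y_train)

    print("GB accuracy :", gb.score(X_test, y_test))
    print("RF accuracy :", rf.score(X_test, y_test))
    print("EBM accuracy:", ebm.score(X_test, y_test))

    # Post-hoc explanations for a black-box model via SHAP.
    shap_values = shap.TreeExplainer(gb).shap_values(X_test)

    # Intrinsic explanations from the EBM: additive term scores are read off directly.
    global_exp = ebm.explain_global()
    print(global_exp.data()["names"][:5])
    print(global_exp.data()["scores"][:5])

The contrast in the last two steps is the point of the comparison: the tree ensembles need a post-hoc approximation (SHAP) to attribute predictions to features, whereas the explainable boosting machine exposes per-feature contribution scores by construction.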
Language: English
DOI: 10.1371/journal.pone.0289495
Year: 2023
Published in: PLoS ONE 18, 8 (2023), e0289495 [32 pp.]
ISSN: 1932-6203

JCR impact factor: 2.9 (2023)
JCR category: MULTIDISCIPLINARY SCIENCES, rank: 31/134 = 0.231 (2023) - Q1 - T1
CiteScore: 6.2 - Multidisciplinary (Q1)

SCImago (SJR): 0.839 - Multidisciplinary (Q1)

Funding: info:eu-repo/grantAgreement/ES/DGA/T64-20R
Funding: info:eu-repo/grantAgreement/ES/ISCIII/PI17-01726
Funding: info:eu-repo/grantAgreement/ES/ISCIII/PI20-00437
Funding: info:eu-repo/grantAgreement/ES/ISCIII-RICORDS/RD21-0002-0050
Funding: info:eu-repo/grantAgreement/ES/MICINN/PID2019-104358RB-I00
Funding: info:eu-repo/grantAgreement/ES/MICINN/PID2022-138703OB-I00
Type and form: Article (published version)
Area (Department): Área Oftalmología (Dpto. Cirugía)
Area (Department): Área Lenguajes y Sistemas Inf. (Dpto. Informát.Ingenie.Sistms.)

Exported from SIDERAL (2024-07-31-09:54:29)



This article is available in the following collections:
articulos



Record created 2023-09-04, last modified 2024-07-31


Published version: PDF