Abstract: Artificial Intelligence (AI) has become increasingly important in critical domains such as medicine, where accurate and interpretable decision-making is essential. However, many high-performing AI models operate as “black boxes”, limiting transparency and making it difficult for clinicians to understand or verify predictions. To address this challenge, we present an eXplainable Artificial Intelligence (XAI) framework that integrates a fuzzy rule-based classifier with genetic algorithms and 2-tuple linguistic representations. The method incrementally generates general fuzzy rules, introduces fuzzy exception rules to capture atypical cases, and applies rule selection and parameter tuning to enhance both accuracy and interpretability. Experiments on nine medical datasets demonstrate that our approach achieves competitive or superior accuracy compared to state-of-the-art algorithms, while requiring fewer rules. These results show that the method not only improves predictive performance but also provides clear, human-readable explanations for each decision, thereby increasing trust and facilitating its application in medical practice.

Language: English
DOI: 10.1007/s10489-025-07081-1
Year: 2026
Published in: APPLIED INTELLIGENCE 56, 3 (2026), 77 [23 pp.]
ISSN: 0924-669X
Funding: info:eu-repo/grantAgreement/ES/MICIU/PID2022-139297OB-I00
Type and form: Article (Published version)
Area (Department): Área Lenguajes y Sistemas Inf. (Dpto. Informát.Ingenie.Sistms.)
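To make the idea of a fuzzy rule-based classifier concrete, the sketch below shows a generic, minimal version of the technique the abstract names: linguistic terms defined by membership functions, IF-THEN rules with a class consequent, and classification by the rule with the highest firing degree. This is an illustrative toy, not the paper's algorithm; the term names, rules, and the product t-norm / winner-rule choices are assumptions made for the example, and the genetic rule generation, exception rules, and 2-tuple tuning of the actual framework are not reproduced here.

```python
# Minimal sketch of a fuzzy rule-based classifier (illustrative only;
# not the algorithm from the paper). Features are assumed scaled to [0, 1].

def tri(x, a, b, c):
    """Triangular membership function with support (a, c) and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Hypothetical linguistic terms shared by all features.
TERMS = {
    "low":    lambda x: tri(x, -0.5, 0.0, 0.5),
    "medium": lambda x: tri(x,  0.0, 0.5, 1.0),
    "high":   lambda x: tri(x,  0.5, 1.0, 1.5),
}

# Each rule: ({feature_index: term_name}, consequent class, rule weight).
RULES = [
    ({0: "low", 1: "low"}, "healthy", 1.0),
    ({0: "high"},          "disease", 1.0),
]

def firing_degree(antecedent, sample):
    """Matching degree of a sample: product t-norm over antecedent terms."""
    deg = 1.0
    for idx, term in antecedent.items():
        deg *= TERMS[term](sample[idx])
    return deg

def classify(sample):
    """Winner-rule inference: the rule with the highest weighted
    activation determines the predicted class."""
    best = max(RULES, key=lambda r: firing_degree(r[0], sample) * r[2])
    return best[1]
```

Because each prediction traces back to one readable IF-THEN rule (e.g. "IF feature 0 is high THEN disease"), the decision is directly explainable, which is the interpretability property the abstract emphasizes.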