000163824 001__ 163824
000163824 005__ 20251107115328.0
000163824 0247_ $$2doi$$a10.1186/s12909-025-08006-9
000163824 0248_ $$2sideral$$a145982
000163824 037__ $$aART-2025-145982
000163824 041__ $$aeng
000163824 100__ $$aFaferek, Joanna
000163824 245__ $$aApplying ChatGPT to plan and create a realistic collection of virtual patients for clinical reasoning training
000163824 260__ $$c2025
000163824 5060_ $$aAccess copy available to the general public$$fUnrestricted
000163824 5203_ $$aBackground
Virtual patients (VPs) are useful tools for training medical students' clinical reasoning abilities. However, creating high-quality and peer-reviewed VPs is time-consuming and resource-intensive. Therefore, the aim of this study was to investigate whether generative artificial intelligence (AI) could facilitate the planning and creation of a diverse collection of VPs suitable for training medical students in clinical reasoning.

Methods
We used ChatGPT to generate a blueprint for 200 diverse VPs that adequately represent the population in Europe. We selected five VPs from the blueprint to be created by humans and by ChatGPT. We assessed the generated blueprint for representativeness and internal consistency, and we reviewed the VPs in a multi-step, partly blinded process for didactic quality and content accuracy. Finally, we received 44 VP evaluations from medical students.

Results
The generated blueprint did not meet our expectations in terms of quality or representativeness, showing repetitive patterns and an unusually high number of atypical VP outlines.

The ChatGPT- and human-generated VPs were comparable in terms of didactic quality and medical accuracy. Neither contained any medically incorrect information, and reviewers and students could not discern significant differences between them. However, the five human-created VPs demonstrated a greater variety in storytelling, differential diagnosis, and patient-doctor interaction. The ChatGPT-generated VPs also included AI-generated patient images; however, we could not generate realistic clinical images.

Conclusions
While we do not consider ChatGPT in its current version capable of generating a realistic blueprint for a VP collection, we believe that the process of prompting, combined with iterative discussions and refinements after each step, is promising and warrants further exploration. Similarly, although ChatGPT-generated VPs can serve as a good starting point, the variety of VP scenarios in a large collection may be limited without interactions between authors and reviewers to further refine them.
000163824 540__ $$9info:eu-repo/semantics/openAccess$$aby-nc-nd$$uhttps://creativecommons.org/licenses/by-nc-nd/4.0/deed.es
000163824 655_4 $$ainfo:eu-repo/semantics/article$$vinfo:eu-repo/semantics/publishedVersion
000163824 700__ $$aKononowicz, Andrzej A.
000163824 700__ $$aBogutska, Nataliia
000163824 700__ $$aDa Silva Domingues, Vital
000163824 700__ $$aDavydova, Nataliia
000163824 700__ $$aFrankowska, Ada
000163824 700__ $$0(orcid)0000-0003-4242-5464$$aIguacel, Isabel$$uUniversidad de Zaragoza
000163824 700__ $$aMayer, Anja
000163824 700__ $$aMorin, Luc
000163824 700__ $$aPavlyukovich, Nataliia
000163824 700__ $$aPopova, Iryna
000163824 700__ $$aShchudrova, Tetiana
000163824 700__ $$aSudacka, Malgorzata
000163824 700__ $$aSzydlak, Renata
000163824 700__ $$aHege, Inga
000163824 7102_ $$11006$$2255$$aUniversidad de Zaragoza$$bDpto. Fisiatría y Enfermería$$cÁrea Enfermería
000163824 773__ $$g25, 1277 (2025), [13 pp.]$$pBMC MEDICAL EDUCATION$$tBMC MEDICAL EDUCATION$$x1472-6920
000163824 8564_ $$s1736637$$uhttps://zaguan.unizar.es/record/163824/files/texto_completo.pdf$$yPublished version
000163824 8564_ $$s2179104$$uhttps://zaguan.unizar.es/record/163824/files/texto_completo.jpg?subformat=icon$$xicon$$yPublished version
000163824 909CO $$ooai:zaguan.unizar.es:163824$$particulos$$pdriver
000163824 951__ $$a2025-11-07-10:25:26
000163824 980__ $$aARTICLE