The application of problem-based learning (PBL) guided by ChatGPT in clinical education in the Department of Nephrology
BMC Medical Education volume 25, Article number: 1048 (2025)
Abstract
Background
Nephrology, a complex and specialized medical field, has seen significant advancements, yet traditional teaching methods remain outdated and less effective. This study explores the integration of Problem-Based Learning (PBL) guided by ChatGPT in the Department of Nephrology at Guizhou Provincial People’s Hospital to enhance medical education.
Objective
To assess the impact of ChatGPT-guided Problem-Based Learning (PBL) on medical students’ education and satisfaction in the Nephrology Department.
Methods
Fifty-four clinical resident doctors were divided into an experimental group, using ChatGPT-guided PBL, and a control group, using traditional methods. Both groups were assessed through theoretical and clinical practice exams, teaching satisfaction surveys, and self-assessments of teaching effectiveness.
Results
Results indicated that the experimental group achieved significantly higher scores in both theoretical and clinical assessments and reported higher satisfaction and learning effectiveness. Although the experimental group spent more time on pre-class preparation, overall learning time did not increase, as ChatGPT reduced the time needed for post-class review.
Conclusions
This study demonstrates the potential of ChatGPT-integrated PBL to enhance learning outcomes, satisfaction, and efficiency in medical education. Future research should include larger samples and longer follow-up to validate these findings.
Background
Nephrology is a complex and highly specialized discipline [1]. In recent years, significant advancements have been made in this field, including the discovery of new pathogenesis mechanisms, deeper insights into pathological changes, updated diagnostic criteria, and evolving therapeutic strategies [2]. However, compared to the rapid progress in nephrology diagnosis and treatment, the teaching methods in nephrology have lagged behind, presenting new challenges for nephrology educators [3, 4].
Traditional teaching methods are relatively outdated, being teacher-centered and exam-oriented, which often places students in a passive learning role [4]. In contrast, Problem-Based Learning (PBL) is a student-centered, problem-initiated teaching model [5]. PBL encourages students to solve problems through self-directed learning, research, discussion, and collaboration, thereby fostering their independent learning abilities and comprehensive thinking skills. The PBL approach effectively enhances medical students’ clinical skills, willingness to learn independently, and problem-solving abilities [6]. However, studies have found that the PBL model requires substantial medical resources and demands high levels of self-learning ability and self-discipline from students [7].
With the rapid development of information technology, artificial intelligence (AI) and natural language processing (NLP) technologies are permeating various fields [8,9,10]. Chat Generative Pre-trained Transformer (ChatGPT), a chatbot built on a large language model, represents the forefront of OpenAI’s developments [11, 12]. Its impact on clinical medicine and potential role in medical education are garnering increasing attention [13,14,15]. However, previous studies have identified several limitations of ChatGPT, including a lack of real-time clinical knowledge, the potential to generate misleading or incorrect information, and difficulty understanding complex medical contexts without precise prompts. Integrating ChatGPT with a problem-based teaching model combines this powerful language model with traditional medical education, providing students with an interactive, AI-assisted learning experience and thus promoting innovation and enhancement in medical education.
This study aims to apply ChatGPT in medical education, combined with a problem-based teaching model, to achieve personalized teaching content and learning guidance. This approach seeks to stimulate students’ autonomy, initiative, and enthusiasm for learning, thereby promoting deeper learning and critical thinking. This initiative will help cultivate medical professionals with innovative abilities and problem-solving skills.
Research methods
Study participants
We selected 54 clinical resident doctors (27 males and 27 females) undergoing standardized training in the Department of Nephrology at Guizhou Provincial People’s Hospital from December 2023 to May 2024 as study participants. All participants had completed their undergraduate medical education and had received no specialized training in nephrology before this study. Informed consent was obtained from all participants, who were fully informed of the study’s purpose, procedures, potential risks, and benefits, and voluntarily agreed to participate. All procedures involving human participants were conducted in accordance with the Declaration of Helsinki and its later amendments or comparable ethical standards. Simple randomization was adopted: all students were renumbered from 1 to N, with odd numbers assigned to the experimental group and even numbers to the control group. After a baseline theoretical nephrology exam, participants were thus divided into an experimental group (28 participants) and a control group (26 participants). The experimental group adopted a problem-based learning (PBL) model guided by ChatGPT, while the control group followed traditional teaching methods.
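The odd/even allocation scheme described above can be sketched as follows. This is a minimal illustration, not the study’s actual procedure: the step of shuffling participants before renumbering is an assumption, since the text does not specify how the numbers 1 to N were assigned.

```python
import random


def simple_randomize(participants, seed=None):
    """Odd/even simple randomization as described in the text:
    renumber participants 1..N, odd -> experimental, even -> control.
    The preliminary shuffle is an assumption for illustration."""
    rng = random.Random(seed)
    order = list(participants)
    rng.shuffle(order)
    experimental = [p for i, p in enumerate(order, start=1) if i % 2 == 1]
    control = [p for i, p in enumerate(order, start=1) if i % 2 == 0]
    return experimental, control


# Hypothetical participant labels, for illustration only
exp, ctl = simple_randomize([f"resident_{i}" for i in range(1, 55)], seed=42)
```

With an even N this scheme yields an exactly even split; unequal final group sizes can arise when the numbering interacts with other steps of enrollment.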
Study design
Both the control and experimental groups were taught by instructors with 3 to 5 years of clinical teaching experience, who underwent standardized training before the study. Instructors prepared relevant materials (key knowledge lectures and case studies selected from the department, creating courseware that met the syllabus requirements and closely related to the cases) according to the standardized training syllabus for nephrology residents. These materials were provided to the residents in advance for pre-class preparation.
For the control group, residents reviewed the materials provided by the instructor and could independently consult relevant literature and resources. In the experimental group, during the preparation phase, instructors guided residents to use ChatGPT for self-directed learning before each teaching session, to engage in simulated dialogues and discussions, and to follow literature search recommendations from ChatGPT’s responses. Dialogue and discussion topics were formulated by the instructors based on the teaching syllabus and, after instructor validation, were provided to the resident doctors, who could adjust them to the actual clinical situation. Residents compiled the specific problems and difficulties they encountered during their study.
During formal teaching sessions, both groups were presented with class themes and agendas by the instructor, who explained the learning content. The experimental group engaged in discussions under the instructor’s guidance and raised key issues encountered during their study. The instructor then summarized the main points and addressed difficult questions from the discussion.
The total training period for both groups was two months, including eight lecture sessions (once a week) and routine ward training. After completing all knowledge points, both groups completed a survey on their learning experience and provided teaching suggestions. The study design is illustrated in Fig. 1.
Evaluation methods
To evaluate participants’ mastery of course content, assessments were conducted at the end of the 8-week course. The assessments were divided into theoretical knowledge and clinical practice, each with a total score of 100 points. The evaluators for the clinical practice assessment were blinded to the group allocation of the participants to reduce potential scoring bias.
Theoretical knowledge assessment
A standardized nephrology theoretical exam was used, with questions drawn from the nephrology question bank. The exam covered the pathogenesis, clinical manifestations, and treatment principles of the major nephrological diseases in the teaching syllabus. After assembly, the test underwent expert review and pilot testing to ensure its validity and reliability.
Clinical practice assessment
An evaluation team consisting of two attending physicians and one associate chief physician conducted the clinical practice assessment, which included two parts, each worth 50 points for a total of 100: 1) Physical Examination: performed on a standard human model, focusing on examinations related to kidney diseases. 2) Skill Operations: common basic clinical procedures in nephrology, with emphasis on their indications and precautions. The final clinical practice score was the average of the three evaluators’ scores.
Additionally, a questionnaire survey was conducted to evaluate the resident doctors’ teaching satisfaction and self-assessed teaching effectiveness. The survey comprised a Teaching Satisfaction Questionnaire and a Teaching Effectiveness Self-Assessment, both designed by the researchers and completed anonymously by the participants. 1) Teaching Satisfaction Questionnaire: ten items measuring the practicality and comprehensiveness of the course content, the teachers’ ability to answer questions and clarify doubts, the teachers’ understanding of and support for students’ learning needs, the adequacy of learning resources and participation opportunities, and the overall quality and effectiveness of the course. 2) Teaching Effectiveness Self-Assessment: five items measuring communication skills, self-learning ability, and other factors.
Both questionnaires used a standard Likert scale, ranging from 5 (strongly agree) to 1 (strongly disagree).
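Assuming the questionnaire totals are simple sums of item scores — consistent with the reported ranges (ten satisfaction items give totals of 10–50; five self-assessment items give totals of 5–25) — scoring can be sketched as:

```python
def score_questionnaire(responses):
    """Sum Likert item scores (1 = strongly disagree ... 5 = strongly agree).

    Assumed scoring: the ten-item satisfaction questionnaire totals 10-50;
    the five-item effectiveness self-assessment totals 5-25.
    """
    if not all(1 <= r <= 5 for r in responses):
        raise ValueError("Likert responses must be integers from 1 to 5")
    return sum(responses)


# Hypothetical response sets, for illustration only
satisfaction_total = score_questionnaire([5, 4, 5, 5, 4, 5, 4, 5, 5, 4])  # out of 50
self_assessment_total = score_questionnaire([5, 4, 5, 4, 5])              # out of 25
```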
Statistical methods
Statistical analysis was performed using SPSS 26.0. Descriptive statistics for continuous variables are reported as mean ± standard deviation, and group comparisons were conducted using the independent-samples t-test. Categorical data are presented as percentages, and group comparisons were assessed using the chi-square test. P < 0.05 was considered statistically significant.
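The same analyses can be reproduced outside SPSS, for example with SciPy. The score arrays below are hypothetical, for illustration only; the gender counts are taken from the Results (15 males/13 females in the experimental group, 12/14 in the control group):

```python
import numpy as np
from scipy import stats

# Hypothetical exam scores for illustration (not the study's raw data)
exp_scores = np.array([92, 90, 94, 91, 93, 89, 92, 95])
ctl_scores = np.array([88, 86, 90, 87, 89, 85, 88, 91])

# Independent-samples t-test for continuous outcomes (mean ± SD)
t_stat, p_val = stats.ttest_ind(exp_scores, ctl_scores)

# Chi-square test for categorical data, e.g. gender distribution
# rows: groups (experimental, control); columns: male/female counts
table = np.array([[15, 13], [12, 14]])
chi2, p_chi, dof, expected = stats.chi2_contingency(table)

alpha = 0.05  # significance threshold used in the study
```

Note that for a 2 × 2 table, `chi2_contingency` applies Yates’ continuity correction by default; disable it with `correction=False` if an uncorrected chi-square is wanted.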
Results
In this study, all resident doctors completed the training, assessments, and related questionnaire evaluations.
The experimental group comprised 15 males and 13 females, with an average age of 23.16 ± 0.87 years; the control group included 12 males and 14 females, with an average age of 23.08 ± 0.93 years. There were no statistically significant differences (P > 0.05) in general characteristics such as gender, age, or pre-training theoretical exam scores between the two groups, indicating comparability (Table 1).
Before the training, both groups underwent a baseline theoretical exam, showing no significant difference in scores between the two groups (P = 0.241). However, after the training, the experimental group demonstrated a significantly higher score of 91.82 ± 2.25 in the theoretical exam compared to 87.69 ± 3.99 in the control group (P < 0.001). Additionally, the magnitude of score improvement was significantly higher in the experimental group (28.21 ± 4.383 vs 22.69 ± 6.455, P < 0.001) (Fig. 2).
Detailed analysis of the test scores showed that, following the training, the experimental group achieved a clinical practice assessment score of 92.25 ± 2.04, significantly higher than the control group’s 88.19 ± 2.23 (P < 0.001). See Table 2 for details.
Comparing the teaching survey items between the two groups of trainees, the experimental group received significantly higher evaluations in both teaching satisfaction and self-assessment of teaching effectiveness after the training (45.92 ± 2.37 vs 43.77 ± 1.53; 23.18 ± 1.31 vs 21.31 ± 1.54, P < 0.001) (Table 3).
Both the improvement in performance and the satisfaction with the teaching method were thus higher in the experimental group. Comparing the two groups’ study times before and after class, although the experimental group spent more time on pre-class preparation (2.21 ± 0.46 vs 1.44 ± 0.48; P < 0.001), their post-class review time was shorter than the control group’s (1.25 ± 0.35 vs 1.90 ± 0.42; P < 0.001), and there was no significant difference in total learning time between the two groups (3.46 ± 0.54 vs 3.35 ± 0.60; P = 0.76) (Fig. 3).
Discussion
Medical education represents a critical cornerstone in cultivating future healthcare professionals [4]. However, traditional classroom teaching methods have long been criticized for their limited ability to stimulate learner initiative [16]. With evolving educational philosophies in medicine, various innovative teaching methods aimed at enhancing learner autonomy have emerged, among which Problem-Based Learning (PBL) has gained widespread adoption [17, 18]. However, in practical application, some students have provided feedback that PBL demands excessive extracurricular time and lacks sufficient learning resources, thus increasing student burden and affecting teaching effectiveness [19].
Fortunately, with the rapid development of information technology, artificial intelligence and natural language processing have been widely applied in the medical field, presenting new opportunities for clinical teaching [20]. ChatGPT, powered by a large-scale language model, is at the forefront of transforming medical education and introducing new possibilities for clinical teaching [21,22,23]. Integrating ChatGPT with a problem-based learning approach can provide personalized and interactive learning experiences, thereby fostering innovation and development in medical education.
In this study, we selected resident physicians undergoing standardized clinical training at Guizhou Provincial People’s Hospital as our subjects, dividing them into experimental and control groups. Assessment results before and after training showed that the experimental group outperformed the control group in both theoretical exams and clinical skills assessments. This may be attributed to the experimental group’s active engagement in learning, facilitating better comprehension and application of knowledge in clinical practice. Additionally, feedback from the experimental group indicated that the introduction of ChatGPT expanded the breadth of available reference materials, enhancing learning satisfaction and self-assessment. This may be related to factors such as personalized feedback, increased student engagement, and reduced cognitive load during post-class review. As a result, trainee motivation improved significantly, shifting the teachers’ in-class role from sole knowledge source to orchestrator and facilitator of learning.
It is noteworthy that although the experimental group demonstrated significantly higher scores in assessment and teaching satisfaction compared to the control group, the overall learning time did not significantly increase. This may be because the problem-based learning model increased the difficulty and duration of pre-class preparation, while the integration of ChatGPT saved time otherwise spent searching through textbooks and literature [22], thereby reducing the review time needed.
The findings of this study underscore the potential of integrating ChatGPT with a problem-based learning approach in nephrology clinical teaching. This teaching model effectively motivates trainees, emphasizing the importance of pre-class reflection, in-class discussion, and post-class feedback. However, this learning model demands more time and attention from clinical supervisors, potentially diverting their focus from patient care responsibilities. Moreover, the study’s sample size was relatively small and confined to a specific medical field. With the widespread adoption of various chatbot models, studies have already conducted comparative analyses in fields such as urology [24, 25]. Therefore, in future teaching, we plan to introduce multiple chatbot models for group-based instruction to evaluate and differentiate their respective advantages and limitations. Future research should include multicenter, large-sample, and long-term follow-up studies to further validate these findings.
In conclusion, combining ChatGPT with a problem-based learning approach presents new opportunities and challenges for medical education. It not only enhances trainees’ understanding and mastery of clinical knowledge and skills but also increases their interest in clinical learning and satisfaction with teaching. As artificial intelligence continues to advance, we anticipate that this method will play an increasingly significant role in future medical education, contributing significantly to the cultivation of outstanding medical professionals.
Data availability
Data is provided within the manuscript or supplementary information files.
Abbreviations
- PBL: Problem-Based Learning
References
Roberts JK, Sparks MA, Lehrich RW. Medical student attitudes toward kidney physiology and nephrology: a qualitative study. Ren Fail. 2016;38(10):1683–93.
Kidney Disease: Improving Global Outcomes (KDIGO) CKD Work Group. KDIGO 2024 Clinical Practice Guideline for the Evaluation and Management of Chronic Kidney Disease. Kidney Int. 2024;105(4S):S117–S314. https://doi.org/10.1016/j.kint.2023.10.018. PMID: 38490803.
Rubin M, Lecker SH, Ramkumar N, Sozio SM, Hoover RS Jr, Zeidel ML, Ko BS. American Society of Nephrology Kidney Tutored Research and Education for Kidney Scholars (TREKS) Program: A 10-Year Interim Analysis. J Am Soc Nephrol. 2024;35(9):1284–91. https://doi.org/10.1681/ASN.0000000000000384. PMID: 38652562; PMCID: PMC11387023.
Medical Education-Progress of Twenty-Two Years. JAMA. 2022;328(7):683. https://doi.org/10.1001/jama.2021.17095. PMID: 35972494.
Yang F, Lin W, Wang Y. Flipped classroom combined with case-based learning is an effective teaching modality in nephrology clerkship. BMC Med Educ. 2021;21(1):276.
Ren S, Li Y, Pu L, Feng Y. Effects of problem-based learning on delivering medical and nursing education: a systematic review and meta-analysis of randomized controlled trials. Worldviews Evid Based Nurs. 2023;20(5):500–12.
Sivarajah RT, Curci NE, Johnson EM, Lam DL, Lee JT, Richardson ML. A review of innovative teaching methods. Acad Radiol. 2019;26(1):101–13.
Kung TH, Cheatham M, Medenilla A, Sillos C, De Leon L, Elepaño C, Madriaga M, Aggabao R, Diaz-Candido G, Maningo J, Tseng V. Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. PLOS Digit Health. 2023;2(2):e0000198.
Vasey B, Nagendran M, Campbell B, Clifton DA, Collins GS, Denaxas S, Denniston AK, Faes L, Geerts B, Ibrahim M, et al. Reporting guideline for the early-stage clinical evaluation of decision support systems driven by artificial intelligence: DECIDE-AI. Nat Med. 2022;28(5):924–33.
Indran IR, Paranthaman P, Gupta N, Mustafa N. Twelve tips to leverage AI for efficient and effective medical question generation: A guide for educators using Chat GPT. Med Teach. 2024;46(8):1021–6. https://doi.org/10.1080/0142159X.2023.2294703. Epub 2023 Dec 26. PMID: 38146711.
Kim TW. Application of artificial intelligence chatbots, including ChatGPT, in education, scholarly work, programming, and content generation and its prospects: a narrative review. J Educ Eval Health Prof. 2023;20:38.
Lee H. The rise of ChatGPT: exploring its potential in medical education. Anat Sci Educ. 2024;17(5):926–31.
Loftus TJ, Shickel B, Ozrazgat-Baslanti T, Ren Y, Glicksberg BS, Cao J, Singh K, Chan L, Nadkarni GN, Bihorac A. Artificial intelligence-enabled decision support in nephrology. Nat Rev Nephrol. 2022;18(7):452–65.
Wu Y, Zheng Y, Feng B, Yang Y, Kang K, Zhao A. Embracing ChatGPT for medical education: exploring its impact on doctors and medical students. JMIR Med Educ. 2024;10:e52483.
Biswas S. ChatGPT and the future of medical writing. Radiology. 2023;307(2):e223312.
Kaufman DM. Applying educational theory in practice. BMJ. 2003;326(7382):213–6.
Rhodes A, Wilson A, Rozell T. Value of case-based learning within STEM courses: is it the method or is it the student? CBE Life Sci Educ. 2020;19(3):ar44.
Bodagh N, Bloomfield J, Birch P, Ricketts W. Problem-based learning: a review. Br J Hosp Med (Lond). 2017;78(11):C167-c170.
Zheng QM, Li YY, Yin Q, Zhang N, Wang YP, Li GX, Sun ZG. The effectiveness of problem-based learning compared with lecture-based learning in surgical education: a systematic review and meta-analysis. BMC Med Educ. 2023;23(1):546.
Miao J, Thongprayoon C, Garcia Valencia OA, Krisanapan P, Sheikh MS, Davis PW, Mekraksakit P, Suarez MG, Craici IM, Cheungpasitporn W. Performance of ChatGPT on nephrology test questions. Clin J Am Soc Nephrol. 2023;19(1):35–43.
Alqahtani T, Badreldin HA, Alrashed M, Alshaya AI, Alghamdi SS, Bin Saleh K, Alowais SA, Alshaya OA, Rahman I, Al Yami MS, Albekairy AM. The emergent role of artificial intelligence, natural learning processing, and large language models in higher education and research. Res Social Adm Pharm. 2023;19(8):1236–42.
Takagi S, Watari T, Erabi A, Sakaguchi K. Performance of GPT-3.5 and GPT-4 on the Japanese medical licensing examination: comparison study. JMIR Med Educ. 2023;9:e48002.
Kanjee Z, Crowe B, Rodman A. Accuracy of a generative artificial intelligence model in a complex diagnostic challenge. JAMA. 2023;330(1):78–80.
Malak A, Şahin MF. How useful are current chatbots regarding urology patient information? Comparison of the ten most popular chatbots’ responses about female urinary incontinence. J Med Syst. 2024;48(1):102.
Şahin MF, Doğan Ç, Topkaç EC, Şeramet S, Tuncer FB, Yazıcı CM. Which current chatbot is more competent in urological theoretical knowledge? A comparative analysis by the European board of urology in-service assessment. World J Urol. 2025;43(1):116.
Acknowledgements
We thank the Department of Nephrology, Guizhou Provincial People’s Hospital (Guiyang, China) for technical support of this study.
Funding
This research was financially supported by the funds from: The Project of Science and Technology of Guizhou Province [No. Qian Ke He Ji Chu [2024] normal 468]; Science and Technology Foundation of Health and Family Planning Commission of Guizhou Province [gzwkj2023-320]; Key Advantageous Discipline Construction Project of Guizhou Provincial Health Commission in 2025.
Author information
Authors and Affiliations
Contributions
X. and Y. wrote the manuscript. Y.Q. and Y. collected and organized the data. J. revised the manuscript. Y. supervised the project. All authors read and approved the final manuscript.
Corresponding author
Ethics declarations
Ethics approval and consent to participate
This study (including the experimental protocol and its implementation details) was approved by the Ethics Committee of Guizhou Provincial People’s Hospital (approval number: 2024–015). The survey was administered to medical students at Guizhou Provincial People’s Hospital as part of their course. Informed consent was obtained, and participation was entirely voluntary.
Consent for publication
Not applicable.
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Information
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Tong, X., Hu, Y., Long, Y. et al. The application of problem-based learning (PBL) guided by ChatGPT in clinical education in the Department of Nephrology. BMC Med Educ 25, 1048 (2025). https://doi.org/10.1186/s12909-025-07427-w