From Artificial Intelligence to Real-Life Practice: Can ChatGPT be a Guide About Pediatric Dysphagia?

Authors

  • Esra Ülgen Kıratlıoğlu Ankara Bilkent City Hospital, Department of Physical Medicine and Rehabilitation, Ankara, Turkey
  • Emre Adıgüzel Ankara Bilkent City Hospital, Department of Physical Medicine and Rehabilitation, Ankara, Turkey

DOI:

https://doi.org/10.6000/1929-4247.2026.15.01.4

Keywords:

ChatGPT, pediatric dysphagia, reliability, safety, usefulness

Abstract

Introduction: The widespread use of Artificial Intelligence (AI)-based tools has significantly simplified access to medical information. Pediatric dysphagia, or difficulty swallowing in children, is among the commonly queried topics because of its close relationship with feeding safety, nutritional intake, and growth outcomes. This study aims to evaluate the reliability, usefulness, and safety of responses generated by the Chat Generative Pre-trained Transformer (ChatGPT) regarding pediatric dysphagia.

Methods: A set of thirty carefully selected questions covering various aspects of pediatric dysphagia, including general information, risk factors, diagnosis, treatment, and follow-up, was prepared based on clinical data, digital trends, and frequently asked questions from health websites. These questions were submitted to ChatGPT (version 4.0), and the responses were independently evaluated by two experts using a 4-point Likert-type scale (1: lowest, 4: highest) to assess reliability, usefulness, and safety. Additionally, the readability of each response was measured using the Flesch-Kincaid Grade Level test, which estimates the years of education required to comprehend a text. The implications of these readability scores for caregiver comprehension were also considered.
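The Flesch-Kincaid Grade Level used in the methods above is a standard formula: 0.39 × (words/sentences) + 11.8 × (syllables/words) − 15.59. As an illustration only (not the authors' actual tooling), a minimal Python sketch is shown below; the syllable counter is a simple vowel-group heuristic, so its counts are approximate.

```python
import re

def count_syllables(word: str) -> int:
    """Rough syllable count: runs of vowels, with a silent-'e' adjustment."""
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:
        n -= 1  # drop a typical silent final 'e'
    return max(n, 1)

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid Grade Level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)
```

A score of about 13, as reported in the results, means a reader needs roughly 13 years of schooling (first-year university level) to follow the text comfortably.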

Results: ChatGPT’s responses received high scores overall, with average ratings of 3.73 for reliability, 3.87 for safety, and 3.87 for usefulness. The average Flesch-Kincaid Grade Level was 13.03, indicating that a university-level reading ability is required to comprehend the responses. This suggests that while the responses are accurate and informative, their linguistic complexity may limit accessibility for some caregivers.

Conclusion: ChatGPT shows promise as a supportive tool in providing basic information about pediatric dysphagia. However, to ensure accurate and personalized medical evaluation, these AI-generated responses must be verified through professional clinical review. Given that pediatric dysphagia directly affects nutritional intake, growth, and feeding safety, validated AI-based guidance tools could help caregivers recognize feeding problems earlier and seek appropriate medical care promptly.


Published

2026-02-12

How to Cite

Kıratlıoğlu, E. Ü., & Adıgüzel, E. (2026). From Artificial Intelligence to Real-Life Practice: Can ChatGPT be a Guide About Pediatric Dysphagia? International Journal of Child Health and Nutrition, 15(1), 37–43. https://doi.org/10.6000/1929-4247.2026.15.01.4

Section

General Articles