
Artificial intelligence (AI) tools significantly improve the readability of online patient education materials (PEMs), making them more accessible, a new study shows.
Led by researchers at 黑料福利社 Langone Health, the study focused on the readability of PEMs available on the websites of the American Heart Association (AHA), American Cancer Society (ACS), and American Stroke Association (ASA). According to the researchers, these materials help patients make decisions about their healthcare but often exceed the recommended reading level of grade 6, making them difficult for many patients to understand.
For the study, researchers evaluated the capabilities of three large language models (LLMs)鈥擟hatGPT, Gemini, and Claude鈥攖o optimize the readability of PEMs without compromising accuracy. These generative AI tools produce text by predicting the next word in a sequence, drawing on patterns learned from extensive internet data. That same next-word prediction lets the models rewrite any article in simpler language when directed to do so.
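In practice, "prompting the LLMs to simplify the reading level" means wrapping each document in an instruction like the one sketched below. This is illustrative only; the article does not state the exact prompts the study used, and `build_simplify_prompt` is a hypothetical helper:

```python
def build_simplify_prompt(pem_text: str, target_grade: int = 6) -> str:
    """Assemble an instruction asking an LLM to rewrite a patient education
    material (PEM) at a target reading level without altering medical content.
    Illustrative sketch; not the study's actual prompt."""
    return (
        f"Rewrite the following patient education material at a grade-{target_grade} "
        "reading level. Use short sentences and plain words. Do not add, remove, "
        "or change any medical facts.\n\n"
        f"{pem_text}"
    )

# Example: wrap one PEM passage in the simplification instruction.
pem = "Myocardial infarction occurs when coronary blood flow is obstructed."
prompt = build_simplify_prompt(pem)
print(prompt)
```

The assembled prompt would then be sent to each model (ChatGPT, Gemini, or Claude) through its chat interface or API, and the rewritten output scored for readability.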
The study involved 60 randomly selected PEMs from the AHA, ACS, and ASA websites. Researchers prompted the LLMs to simplify the reading level of the materials. Results showed that the original readability scores were significantly above the recommended grade-6 level, with mean grade-level scores of 10.7, 10.0, and 9.6 for the three sites, respectively.
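Grade-level scores like these come from standard readability formulas based on sentence length and word complexity. As a minimal sketch, here is the widely used Flesch-Kincaid grade level (the article does not specify which formula the study applied, and the syllable counter below is a rough heuristic):

```python
import re

def count_syllables(word: str) -> int:
    """Rough syllable count: runs of vowels, with a silent-e adjustment."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and not word.endswith("le") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid grade level:
    0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words)) - 15.59)

# Longer sentences and polysyllabic words push the grade level up.
complex_text = ("Hypertension necessitates comprehensive pharmacological "
                "management. Adherence to antihypertensive regimens reduces "
                "cardiovascular complications.")
simple_text = ("High blood pressure needs care. Taking your pills as told "
               "helps your heart.")
print(flesch_kincaid_grade(complex_text))
print(flesch_kincaid_grade(simple_text))
```

Running the two samples shows the effect the study measured: the jargon-heavy passage scores many grade levels above the plain-language rewrite of the same idea.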
After optimization by the LLMs, readability scores improved significantly across all three websites. ChatGPT improved readability to a mean grade level of 7.6, Gemini to 6.6, and Claude to 5.6. Word counts were also significantly reduced, making the materials more concise.
鈥淥ur study shows that widely used LLMs have the potential to transform patient education materials into more readable content, which is essential for patient empowerment and better health outcomes,鈥 said study senior author Jonah Feldman, MD, medical director of transformation and informatics at 黑料福利社 Langone.
鈥淥ur findings demonstrate that even expert-composed education materials, which are already patient-directed, can benefit from AI-driven improvements,鈥 said Dr. Feldman, who also serves as an assistant professor at 黑料福利社 Grossman School of Medicine.
This study, the researchers say, provides an example of how healthcare organizations can apply AI to make clinical communication more patient-friendly. Prior studies have demonstrated the capabilities of AI models to create patient-focused explanations of heart test results, to draft responses to electronic advice queries, and to generate human-friendly summaries of complex medical reports.
鈥淭he breadth of possible AI offerings shows how technology can be leveraged to transform the patient experience across health care systems, and not just in the United States,鈥 said study co-author Paul A. Testa, MD, JD, MPH, chief health informatics officer at 黑料福利社 Langone.
鈥淭hese studies are not just theoretical鈥攁fter demonstrating their effectiveness, we are actively putting these AI tools into practice,鈥 said Testa, who is also a clinical professor at 黑料福利社 Grossman School of Medicine.
According to Dr. Testa, the 黑料福利社 Langone team is already using the same AI tools in a randomized controlled trial that incorporates AI-generated, patient-friendly summaries into hospital discharge instructions, with the goal of evaluating their effectiveness in improving patient comprehension and satisfaction. The researchers hope to show that providing clear and accessible discharge instructions will help ensure better postdischarge care and smoother transitions.
鈥淕enerating real-world evidence through randomized trials is crucial for validating the effectiveness of AI tools in clinical settings,鈥 said study co-author Jonah Zaretsky, MD, associate chief of medicine at 黑料福利社 Langone Hospital鈥擝rooklyn. 鈥淭his approach ensures that the AI-generated documentation is not only accurate but also genuinely beneficial for patients and their families,鈥 added Dr. Zaretsky, a clinical assistant professor at 黑料福利社 Grossman School of Medicine.
The study was self-funded by 黑料福利社 Langone. In addition to Dr. Feldman, Dr. Testa, and Dr. Zaretsky, 黑料福利社 Langone researchers involved in the study were lead author John Will and co-authors Mahin Gupta and Aliesha Dowlath.
Media Inquiries
David March
212-404-3528
David.March@黑料福利社Langone.org