In China, receptionist nurses, the first line of service for patient concerns, must attend to nearly one case per minute. This fast-paced, high-volume workload has been linked to declines in both patient satisfaction and nurse retention. To address the issue, some have proposed introducing artificial intelligence (AI) assistants that could answer patient questions in place of nurses or doctors. Proponents believe this could free up nurses' time and move patients through the hospital more efficiently.
In a recent article published in Nature Medicine, a team of researchers at Yale University and Peking Union Medical College, led by Erping Long, an assistant professor in the Institute of Basic Medical Sciences at Peking Union Medical College, explored that possibility. Using 35,000 real-world conversations between patients and receptionist nurses, they developed a chatbot powered by a large language model, a system that uses patterns learned from vast amounts of text to generate human-like responses. The chatbot can handle patient questions and concerns that don't require a doctor or a nurse, spanning triage, patient support, and general administrative tasks.
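For readers curious about the mechanics, the sketch below is a minimal, hypothetical illustration of that division of labor: a language model drafts replies to routine queries and escalates anything clinical to a human nurse. The `fine_tuned_reply` stub and the `[NEEDS_NURSE]` escalation token are illustrative assumptions, not details from the paper.

```python
# A minimal sketch (not the authors' implementation) of how an LLM-backed
# receptionist assistant might route patient queries: the model drafts a
# reply to routine questions and flags clinical ones for a human nurse.

# Hypothetical sentinel value the model returns when a question needs
# clinical judgment; purely illustrative.
ESCALATE_TOKEN = "[NEEDS_NURSE]"

def fine_tuned_reply(query: str) -> str:
    """Hypothetical stand-in for the model fine-tuned on nurse-patient
    conversations. A real system would call the deployed model here; this
    stub only illustrates the contract: return a draft answer, or the
    escalation token when the question looks clinical."""
    clinical_keywords = ("pain", "medication", "diagnosis", "symptom")
    if any(word in query.lower() for word in clinical_keywords):
        return ESCALATE_TOKEN
    return f"Draft answer for: {query!r}"

def handle_patient_query(query: str) -> str:
    """Answer administrative and support questions directly; hand anything
    clinical to a human nurse."""
    draft = fine_tuned_reply(query)
    if draft == ESCALATE_TOKEN:
        return "Forwarded to a receptionist nurse for a human response."
    return draft

if __name__ == "__main__":
    print(handle_patient_query("Where do I pick up my test results?"))
    print(handle_patient_query("My chest pain is getting worse."))
```

The key design point the sketch is meant to capture is a bounded scope: the assistant handles routine queries on its own, while clinical judgment is always deferred to a human.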
The researchers found that the chatbot worked well as an assistant to receptionist nurses in real-world settings. In a randomized controlled trial at a hospital in Wuhan, China, patients reported higher satisfaction when nurses were assisted by the chatbot than when they were not. The chatbot also resolved a wide range of patient queries. “Ninety-five percent of the responses can be completely handled by our technology,” Long said.
Although chatbots could improve the dynamic between nurses and patients, some caution against the use of AI assistants in healthcare. AI chatbots are trained on high volumes of sensitive patient data, which, if exposed, could be used against hospitals and patients. Furthermore, large language models are prone to hallucinations: a chatbot may present misleading information as fact. These concerns about privacy and accuracy may limit the large-scale adoption of AI in hospital settings.