Comment

The opportunities and challenges of ChatGPT in the healthcare industry

AI-enabled virtual agents will be used to engage with patients and offer health advice, but personal information must be protected.

GPT-4 has officially been released by OpenAI, the developer of the artificial intelligence (AI) chatbot ChatGPT. This revolutionary technology is arriving faster than most people in the industry currently recognise. GlobalData estimates that the total AI market will be worth $383.3bn by 2030, with a robust 21% compound annual growth rate (CAGR) predicted from 2022 to 2030.

Although artificial general intelligence that can solve problems just like a human is still decades away, ‘good enough’ AIs such as OpenAI’s ChatGPT and GPT-4, which can write original prose and chat with human fluency, are already here. These tools have the power to change healthcare completely. ChatGPT can assist with bureaucratic tasks such as writing patients’ letters, freeing doctors to spend more time on patient interaction. More importantly, chatbots have the potential to increase the effectiveness and accuracy of preventive care delivery, symptom identification and post-recovery care.

Integrating AI into chatbots and virtual assistants allows them to motivate and interact with patients. An AI can review a patient’s symptoms, offer diagnostic advice and recommend options such as a virtual check-in or a face-to-face visit with a healthcare professional, as the sketch below illustrates. This can reduce the workload for hospital staff, increase the efficiency of patient flow and reduce healthcare costs. During the Covid-19 pandemic, chatbots were developed for the contactless screening of Covid-19 symptoms in healthcare institutions and to help answer questions from the public.
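As a concrete illustration, the following is a minimal sketch of such a symptom-triage step built on OpenAI’s chat completions API (the openai Python package, v1+). The model name, prompt wording and the three outcome categories are illustrative assumptions for this example only, not a validated clinical tool.

```python
# Hypothetical triage sketch: asks an LLM to map a patient's symptom
# description to one of three illustrative next steps. Not medical advice.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TRIAGE_PROMPT = (
    "You are a triage assistant. Given a patient's symptom description, "
    "reply with exactly one of: SELF_CARE, VIRTUAL_CHECK_IN, IN_PERSON_VISIT."
)

def triage(symptoms: str) -> str:
    """Return a coarse triage recommendation for a free-text description."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": TRIAGE_PROMPT},
            {"role": "user", "content": symptoms},
        ],
    )
    return response.choices[0].message.content.strip()

print(triage("Mild sore throat and runny nose for two days, no fever."))
```

In practice, a deployed system of this kind would wrap the model’s output in safety checks and escalate any uncertain case to a human clinician rather than acting on the classification alone.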

Chatbots can respond to patient queries about medical products and share brand news with customers. Pharmaceutical and medical device companies can benefit from AI-enabled virtual agents that automate customer service processes and give patients round-the-clock attention. Chatbots can also serve social purposes, increasing patient engagement and offering advice on how to maintain health after treatment, for example by sending automated reminders to take medication and to attend follow-up visits.

However, the use of chatbots in patient care and medical research raises a number of ethical concerns. As increasing amounts of patient data are fed into machine-learning models to improve the accuracy of chatbots, that information becomes vulnerable. Homomorphic encryption could be particularly useful in healthcare: it permits computations to be performed on encrypted data without first decrypting it, so chatbots could still learn from data without accessing patient identity information.
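To show the principle, here is a minimal sketch using the open-source python-paillier library (installed as “phe”). Paillier is only partially homomorphic, supporting addition on ciphertexts, but that is enough to compute an aggregate statistic over patient readings without ever decrypting an individual value; the readings below are hypothetical.

```python
# Additively homomorphic encryption with python-paillier ("pip install phe").
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Encrypt each patient's blood-glucose reading at the point of collection.
readings = [5.4, 6.1, 7.8, 5.9]
encrypted = [public_key.encrypt(r) for r in readings]

# A server holding only the public key can sum the ciphertexts directly;
# it never sees any individual patient's value.
encrypted_total = sum(encrypted[1:], encrypted[0])

# Only the holder of the private key can recover the aggregate statistic.
average = private_key.decrypt(encrypted_total) / len(readings)
print(f"Average reading: {average:.2f}")  # 6.30
```

Fully homomorphic schemes, such as those implemented in Microsoft SEAL, also support multiplication on ciphertexts, enabling richer analytics at a higher computational cost.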

The information provided by chatbots might also be inaccurate or misleading, depending on the sources fed into them, and such inaccurate information could compromise the quality of healthcare. In its current form, ChatGPT was trained on data only up to 2021 and therefore does not reflect the latest references. It also does not cite references alongside its answers, compromising the integrity of an evidence-based approach. With more data and information to draw from, future generations of ChatGPT should possess more accurate analytical and problem-solving powers.

Despite these risks, AI will be used widely in the healthcare industry, and GlobalData expects more regulation to govern the health uses of chatbots.