This is the third in a series of articles exploring how ChatGPT can improve healthcare for physicians and patients.
If you haven’t already, read the first two articles in this series first.
You can read the fourth article afterwards: Will ChatGPT Replace Doctors?
How ChatGPT will affect Healthcare
There is quite a stir in healthcare about the potential of ChatGPT. Many senior leaders in health systems, health payers, health tech and other healthcare companies are asking how ChatGPT will affect their business. There is a sense that ChatGPT will be important, but not a lot of insight into how, or what to do about it.
While the question is well intentioned and important, the conversation often lacks input from people who understand data science, the technology behind ChatGPT and the intricacies of how healthcare works.
Having spent many years generating insights for clinicians and patients using machine learning models and building AI-enabled chatbots, my goal here is to explain ChatGPT in simple terms that even non-technical folks can understand, and to explore the ways the technology behind ChatGPT will change healthcare, so that instead of hyping or fearing ChatGPT we can learn to use this very powerful tool to help address problems in healthcare.
Healthcare Problems that ChatGPT can solve
Healthcare, like any other field or industry, has a number of unsolved problems. The following are some examples of problems that ChatGPT can help solve:
- Clinicians, administrators and health consumers have a hard time expressing their questions using current user interface tools (buttons, drop-downs, text boxes, etc.).
- The answers and results returned are the same regardless of who is using the system and what their knowledge and experience are. So the answer is frequently either too complicated or too simple.
- There is a lot of medical literature and unstructured text in healthcare, and it is not practical for any one person to leverage all the knowledge out there. Today, a small number of researchers try to summarize some of it for specific purposes. This process does not scale and is too expensive to make widespread.
(This is just a short list. In future articles I will share other healthcare problems that ChatGPT can help us solve.)
How ChatGPT can solve these problems
The technologies behind ChatGPT can:
- Replace the current user interface of buttons, drop-downs and text boxes with a natural-language interface for asking questions and getting answers.
- Personalize the results for the user (we can answer the same question in different ways for a primary care doctor, a specialist, a nurse, a teenager, a young mother, etc.).
- Make the existing medical knowledge more accessible and available to a wider audience. ChatGPT can read medical literature (and other unstructured or semi-structured text like clinical notes and images) and answer questions from clinicians, administrators and patients using this knowledge.
1. Replace Current User Interfaces With Natural Language Interfaces
An application is the tool you use to interact with a machine, e.g., a mobile app or a desktop app. A user interface is what the user uses to ask the application questions and to understand the results.
Problems with Current User Interfaces
To start out, let’s look at how a user asks a question today: the user expresses the question through the application’s user interface, the application processes it in its business and data layers, and the results are returned to the user through that same interface. (There are some variations to this model, e.g., in what is done by the business layer vs. the data layer, but we will keep it simple for now.)
There are five main challenges with this model:
- Users have to express their question in a non-natural language consisting of UI elements like buttons, drop-downs, drag-and-drop, etc. If you’ve ever watched a newcomer to computers, you can see this is a strange way to express a question. Worse yet, each application creates its own UI language that users have to learn.
- Applications use the same UI language for every type of user. The result is a UI that is either simple enough for the casual user but not advanced enough for the power user, or vice versa.
- Results are expressed in the non-natural language consisting of UI elements like tables, lists, charts etc. This requires the user to parse this form to actually get the answer. For example, if I am trying to find the health procedure that has the highest total cost for my health system, I have to read through lists and tables ordered by number of times a procedure is used and the cost of that procedure to calculate the total cost.
- Results are expressed in the same UI language for every user. Some people can understand results in list format while others can better understand in graphical format. Some people want to just get the answer they want while others want to understand how the answer was derived.
- This conversation between the user and the application is a “one and done” conversation and not a “back-and-forth” conversation. As humans we rarely provide our questions in sufficiently clear and specific terms. We tend to instead ask a general question, see if the answer is good enough and if not, we provide additional clarity and specificity.
ChatGPT Powered Interfaces
ChatGPT technologies enable a new, more human user interface.
Applications using ChatGPT technologies can let users ask their questions in natural language (English, Spanish, etc.), for example: “Is cancer deadly?”
And the results can be provided to the user in the form of answers in natural language that are personalized to the user. For example, the answer to “Is cancer deadly?” would be very different for an oncologist vs a patient.
Existing applications can add natural-language interfaces by replacing just their user interface layer. Their existing business and data layers can continue to function as they do today.
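To make this concrete, here is a minimal sketch of such a natural-language layer, using the pre-1.0 OpenAI Python library. Everything in it is illustrative: the get_procedure_costs business-layer function is hypothetical, and the model name and prompts are my assumptions, not a prescribed design.

```python
import os
import openai  # pre-1.0 OpenAI Python library

openai.api_key = os.environ["OPENAI_API_KEY"]

def get_procedure_costs():
    """Hypothetical existing business-layer call; in a real application
    this would query the data layer. Returns usage counts and unit costs."""
    return [
        {"procedure": "MRI", "count": 1200, "unit_cost": 850},
        {"procedure": "CT scan", "count": 3100, "unit_cost": 400},
        {"procedure": "X-ray", "count": 9800, "unit_cost": 75},
    ]

def answer_question(question: str) -> str:
    """Natural-language layer: give the model the user's question plus the
    data the existing layers already produce, and let it do the 'parsing'
    the user previously did by reading tables."""
    data = get_procedure_costs()
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        temperature=0,
        messages=[
            {"role": "system",
             "content": "Answer the question using only the data provided."},
            {"role": "user", "content": f"Data: {data}\n\nQuestion: {question}"},
        ],
    )
    return response["choices"][0]["message"]["content"]

print(answer_question("Which procedure has the highest total cost?"))
```

Note that the business and data layers are untouched; the model only translates between the user’s natural language and the data the application already knows how to produce.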
2. Personalize the results for the user
Because ChatGPT can understand and work with language (summarizing, inferring, transforming, expanding, personalizing and understanding/expressing emotion), its responses can be highly tailored to the user.
For example, a doctor may want a much more detailed explanation of a topic than a patient would.
A patient expressing their question in a panicked tone can be directed to the Emergency Room (ER), while a patient expressing the same question in a calm tone can be directed to Urgent Care (UC). Note that there are many factors beyond emotion in determining where a patient should go, and humans express emotions differently in the same situation, so I’m simplifying the problem here.
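As a rough sketch, personalization by role can be as simple as describing the audience in the system prompt. This again uses the pre-1.0 OpenAI Python library; the audience descriptions and model name are illustrative assumptions.

```python
import os
import openai  # pre-1.0 OpenAI Python library

openai.api_key = os.environ["OPENAI_API_KEY"]

# Illustrative audience descriptions; a real system would derive these
# from the user's verified role and profile.
AUDIENCES = {
    "oncologist": "a board-certified oncologist; use precise clinical terminology",
    "patient": "a worried patient with no medical training; use plain, calm language",
}

def personalized_answer(question: str, audience: str) -> str:
    """Ask the same question with a different system prompt per audience."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[
            {"role": "system", "content": f"You are answering {AUDIENCES[audience]}."},
            {"role": "user", "content": question},
        ],
    )
    return response["choices"][0]["message"]["content"]

print(personalized_answer("Is cancer deadly?", "oncologist"))
print(personalized_answer("Is cancer deadly?", "patient"))
```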
3. Make existing medical knowledge more accessible and available to a wider audience
Today a new medical article is published every 26 seconds. No medical professional has the time to keep up with all the medical knowledge being created.
Today our solution is to have a small set of people read the medical knowledge and then create summaries and best practices for other medical professionals. This process is very hard to scale and very expensive, so a lot of medical knowledge remains inaccessible to medical professionals.
Much of this medical knowledge is also written for medical professionals and is very hard for people without medical training, like patients, to understand. Efforts to convert medical knowledge into a patient-friendly form remain limited and expensive.
ChatGPT can make all this medical knowledge accessible to any medical professional when they need it, in a form that lets them consume it quickly.
ChatGPT’s ability to summarize, expand and transform language can also make medical knowledge available to patients. Of course, this carries risk, so the healthcare industry will need to figure out the rules here. The good news is that ChatGPT technologies give us ways to control what kinds of responses are provided based on the knowledge.
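As a sketch of what that control could look like, the prompt below combines two of the language operations mentioned above, summarizing and transforming into plain language, and adds a simple instruction about how to respond when unsure. The prompt wording and model choice are my assumptions, not a vetted clinical design.

```python
import os
import openai  # pre-1.0 OpenAI Python library

openai.api_key = os.environ["OPENAI_API_KEY"]

def summarize_for_patient(article_text: str) -> str:
    """Summarize medical text into plain language a patient can understand."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        temperature=0,
        messages=[
            {"role": "system",
             "content": ("Summarize the medical text in three sentences of "
                         "plain language a patient can understand. If you are "
                         "unsure about anything, say so and advise the reader "
                         "to talk to their doctor.")},
            {"role": "user", "content": article_text},
        ],
    )
    return response["choices"][0]["message"]["content"]
```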
There is also a wealth of knowledge trapped in clinical notes today. These are free-text (or semi-structured) notes that a doctor writes after seeing a patient. Our current attempts to understand these notes using technology have been very limited. As anyone who has read them knows, the “language” used in clinical notes is barely English, so we will need ChatGPT models specifically trained to understand them.
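Even before such specialized models exist, general models can recover some structure from shorthand-heavy notes. Below is a hedged sketch of extracting a clinical note into JSON; the sample note, the field names and the model choice are all made up for illustration, and a real system would validate the model’s output rather than trusting it to always be valid JSON.

```python
import json
import os
import openai  # pre-1.0 OpenAI Python library

openai.api_key = os.environ["OPENAI_API_KEY"]

# Made-up example of the shorthand 'language' found in clinical notes.
NOTE = "57 yo M c/o SOB x3d. Hx HTN, DM2. BP 152/94. Rx lisinopril 10mg qd."

response = openai.ChatCompletion.create(
    model="gpt-4",  # shorthand-heavy notes may need a stronger model
    temperature=0,
    messages=[
        {"role": "system",
         "content": ("Extract from the clinical note a JSON object with the "
                     "keys: age, sex, chief_complaint, history, medications. "
                     "Use null for anything not stated. Return only JSON.")},
        {"role": "user", "content": NOTE},
    ],
)

# json.loads will raise if the model returns anything other than valid JSON;
# production code would catch that and retry or flag for human review.
record = json.loads(response["choices"][0]["message"]["content"])
print(record)
```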
ChatGPT’s GPT-4 release added capabilities to understand images and graphics. Healthcare has a plethora of images, like X-rays, wound care photos and dermatology photos. Just as with clinical notes, ChatGPT has the potential to unlock the knowledge in these photos and other images in the future.
Limitations and Risks of ChatGPT in healthcare
Healthcare is, ultimately, about making decisions that are life-or-death, or that at least have the potential to seriously impact people. As we use ChatGPT in healthcare, we will need to be aware of the following risks and have appropriate mitigations in place to reduce them:
1. Bias
A ChatGPT model is only as good as the knowledge it learned from. All the knowledge we have today contains bias, and ChatGPT models will tend to reproduce the bias that exists in our knowledge stores.
The reality is that humans are really bad at eliminating bias too. So ChatGPT does not create this problem, but our solutions for handling its bias may need to be different.
2. Accuracy
All computer systems (and all humans too) have a problem with accuracy. Just think about all the COVID myths that were prevalent a couple of years ago.
Some of the accuracy problems with ChatGPT (“hallucinations”) are due to the underlying knowledge being unclear, some are due to the questions being unclear or insufficiently specific, and some are simply issues in the technology itself, which is still relatively new.
3. Safety
ChatGPT responses have the potential to be harmful or degrading. For example, users can craft “adversarial prompts” that trick the model into producing harmful responses.
Safety is, of course, a major consideration in healthcare. This is likely the area where a lot of work will need to be done to train ChatGPT models specifically for healthcare.
Luckily, generative AI and instruction-tuned LLMs give us tools that can improve the safety (as well as the bias and accuracy) of ChatGPT in healthcare, but that will be a topic for a future article.
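As a small preview, here is a minimal sketch of one basic guardrail: screening a drafted answer through OpenAI’s Moderation endpoint (again the pre-1.0 Python library) before it reaches the user. The fallback message is illustrative, and a real healthcare deployment would need far more than this single check.

```python
import os
import openai  # pre-1.0 OpenAI Python library

openai.api_key = os.environ["OPENAI_API_KEY"]

def safe_reply(draft_answer: str) -> str:
    """Screen a drafted answer before showing it to the user."""
    result = openai.Moderation.create(input=draft_answer)["results"][0]
    if result["flagged"]:
        # Illustrative fallback; a real system would escalate to a human.
        return "I can't help with that. Please contact your care team."
    return draft_answer
```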
4. Privacy and Compliance
Before using a service like OpenAI’s ChatGPT, the Microsoft Azure OpenAI Service (which runs OpenAI’s models underneath), Google’s Bard or any other option, healthcare organizations should do a privacy and compliance review.
It is not clear whether OpenAI will sign a HIPAA Business Associate Agreement (BAA) for the ChatGPT service. The Microsoft Azure OpenAI Service indicates that it is covered by the Azure BAA, but a healthcare organization should confirm this before sending it any PHI.
Conclusion
With any new technological advance, there will be people claiming that it will solve far more problems than it eventually will, and there will be people who (mistakenly) point to existing issues as the reason the technology will not succeed.
The reality is always somewhere in the middle. The technologies behind ChatGPT have a great future in healthcare and can help us solve some major problems we are struggling with today. However, we will need to spend time tuning ChatGPT for healthcare so we can define the right use cases and reduce the risks around bias, accuracy and safety.
I’ll be writing more articles on how we can do that.
Continue to Will ChatGPT Replace Doctors?