Artificial Intelligence

Generative AI is Making Inroads in Healthcare

BY: SARAH HOFFMAN | JULY 10, 2023
Generative AI is in its infancy, yet patients are using this technology to understand complex medical issues, get second opinions, and receive support and motivation. Medical professionals also stand to benefit by saving time on documentation and emails. Early implementations in healthcare may offer learnings applicable to the financial services industry and beyond.

In April, my mother had an MRI before a critical surgery. In the past, I would have turned to Google or WebMD to look up unfamiliar medical phrases in the results – a process that easily took an hour. Not this time, though. I fed the entire MRI result into ChatGPT and within a matter of seconds, I received an explanation in a language that I (someone with no medical background) could understand. And I could even ask it follow-up questions.
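For readers curious about the mechanics, the workflow above can be sketched in a few lines. This is a minimal illustration, not production guidance: the report text is a made-up placeholder, the model name is an assumption, and the actual API call is left commented out so the prompt-building step stands on its own.

```python
# Sketch: wrapping a medical report in a prompt that asks a chat-style
# LLM for a plain-language explanation. The API call itself is left
# commented out, so the prompt-building step runs with no key or network.

def build_explainer_messages(report_text: str,
                             audience: str = "a reader with no medical background") -> list:
    """Return chat messages asking for a lay explanation of a report."""
    system = (
        f"You explain medical reports in plain language for {audience}. "
        "You are not a doctor; remind the user to confirm details with their physician."
    )
    user = f"Please explain this MRI report in simple terms:\n\n{report_text}"
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

messages = build_explainer_messages("Mild disc bulge at L4-L5 without stenosis.")
print(messages[1]["content"])

# To actually query a model (requires the `openai` package and an API key):
# from openai import OpenAI
# reply = OpenAI().chat.completions.create(model="gpt-4", messages=messages)
# print(reply.choices[0].message.content)
```

The system prompt's hedging ("you are not a doctor") reflects the caution discussed later in this article; follow-up questions are just additional user messages appended to the same list.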

And this was with a generic large language model (LLM), not one trained specifically for medical use. But those are coming too: Google’s Med-PaLM 2 passed the US Medical Licensing Exam with a score of 85%, placing it at an “expert” doctor level.1 (Compare this to ChatGPT, which passed, but just barely, with a score of 60%.2) Despite the nascency of these tools—and well-documented failures3—they’re already finding roles for people with no medical background and medical professionals alike.

Patients and Their Families

The web opened access to health information for billions of people. Even a decade ago, almost three-quarters of Americans were looking up health information online and over a third were self-diagnosing online.4 LLMs have the potential to supercharge this behavior by saving time, providing more understandable results, and even offering support.

Explaining medical terminology. One user of ChatGPT pasted the text of his MRI scan into the tool and found the explanation and treatment plan to be “pretty good” and similar to what his doctor said.5 Another user found the ChatGPT interpretation of his blood test results to be “incredibly helpful” in that it was in a language he could understand.6 Yet another user had the cerebral MRI results of a loved one translated from French to English so he could have a more informed conversation with the doctor.7 These aren’t isolated incidents. In April, researchers comparing postoperative care instructions for eight common pediatric procedures provided by ChatGPT, Google search, and Stanford University found that Stanford’s was the best, yet ChatGPT and Google both scored better than 80% on understandability and actionability of the instructions for patients of different backgrounds and health literacy levels.8 While ChatGPT didn’t outperform either source, the researchers said it could offer value given that ChatGPT can customize responses to different literacy levels.

Offering a second opinion. Recently, GPT-4 was reported to have saved a dog’s life.9 The dog was being treated for a tick-borne disease, but its gums were very pale and bloodwork indicated severe anemia. After a vet was unable to help, the owner fed the chatbot the dog’s symptoms and test results, and GPT-4 suggested that immune-mediated hemolytic anemia was the best fit for the dog’s troubles. Another vet confirmed this diagnosis and began treatment. But just as we’ve all been warned about overreliance on WebMD, LLMs also miss diagnoses and should not be solely relied upon. One emergency room doctor tested ChatGPT on 35 to 40 ER patients and found that it worked well as a diagnostic tool only when it was given perfect information and the patient had a classic presentation of an issue.10

Supporting and motivating. Suicide is the second leading cause of death in the US for those between ages 10 and 35.11 At the same time, the US is struggling with a shortage of mental health providers.12 Some are building AI tools to try to fill the gap. The online emotional support chat service Koko augmented its human volunteers with GPT-3 responses (which humans could edit).13 According to a co-founder, users rated the co-written GPT-3 responses “significantly higher” than those written by humans alone, and response times dropped 50%, to under a minute. Users in a ChatGPT subreddit praised the usefulness of using the chatbot before a therapy session to organize their thoughts, and others mentioned that it was useful even on its own, given the difficulty of finding an affordable therapist.14 Launched in May by Inflection AI, Pi (“personal intelligence”) aims to be “a kind and supportive companion that’s on your side.”15 However, using LLM technology to supplement mental health services should be done cautiously. In March, a Belgian man died by suicide after chatting with the AI chatbot Chai.16

Medical Professionals

Even hospitals are beginning to post job openings for AI prompt engineers: people who design, develop, test, and evaluate AI prompts while refining general-purpose language models for healthcare-specific applications.17 Experimentation abounds, but the most promising uses of this emerging technology for medical professionals already include:

Educating. Just as we’ve seen students flock to ChatGPT to write their term papers, the technology could prove useful to medical students and professionals as they learn new topics. A medical student used ChatGPT to study the muscles of the inner ear: he pasted details about the muscles, and ChatGPT returned the information in a question/answer format useful for cue cards.18 In the book The AI Revolution in Medicine, the authors discuss the potential of GPT-4 for medical research, as it can read highly technical research papers and then engage in “remarkably sophisticated discussions.”19 However, another note of caution: an emergency medicine physician at Brigham and Women’s Hospital in Boston found when testing ChatGPT that it was correct about a diagnosis but cited, as evidence for the cause, a reference paper that didn’t actually exist.20
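The cue-card workflow above is easy to automate around. The sketch below assumes the model has been prompted to answer in a simple "Q:"/"A:" line format (an assumption for illustration, not the student's actual setup) and parses that output into flashcard pairs locally, so no API call is needed; the sample text stands in for a model response.

```python
# Sketch: turning an LLM's question/answer study output into flashcard
# pairs. The "Q:"/"A:" line format is an illustrative assumption about
# how the model was prompted to respond.

def parse_qa_flashcards(llm_output: str) -> list:
    """Parse lines formatted as 'Q: ...' / 'A: ...' into (question, answer) pairs."""
    cards, question = [], None
    for line in llm_output.splitlines():
        line = line.strip()
        if line.startswith("Q:"):
            question = line[2:].strip()
        elif line.startswith("A:") and question is not None:
            cards.append((question, line[2:].strip()))
            question = None  # wait for the next question
    return cards

# Stand-in for a model response about the middle-ear muscles.
sample = """Q: Which muscle dampens loud sounds in the middle ear?
A: The stapedius.
Q: Which nerve innervates the tensor tympani?
A: The mandibular branch of the trigeminal nerve (CN V3)."""

for question, answer in parse_qa_flashcards(sample):
    print(question, "->", answer)
```

Pairs like these can then be imported into any flashcard tool; the point is that the model does the reformatting and the student keeps a local, reviewable artifact.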

Assisting with documentation. That physician’s experience with ChatGPT wasn’t all bad. When he requested a chart for a fictional patient with a cough, he noted that the response was “eerily good.”21 The tool also holds promise for assisting physicians with one of their biggest burdens: note-taking. Over half of healthcare providers report that excessive documentation is a contributing factor to burnout.22 To help, Nuance is piloting a GPT-4-enabled note-taking program that drafts clinical notes for in-person or telehealth visits; physicians must still review and approve the notes after the fact.23 Another company, Glass Health, has developed software based on ChatGPT that uses a virtual medical textbook as its main source of facts. Doctors using the tool enter a one-line patient summary, and Glass AI auto-generates a draft clinical plan.24

Replying to patient messages (with compassion). Doctors are often known for their poor bedside manner. As it turns out, generative AI can be useful here, too. A study from April found that healthcare professionals preferred ChatGPT’s responses to questions posted for medical professionals on Reddit’s AskDocs subreddit 79% of the time, finding them not only higher quality but also more empathic.25 Also in April, UC San Diego Health and University of Wisconsin-Madison’s health system began piloting GPT-4 within Epic, a provider of electronic health records, letting OpenAI’s services read and draft responses to patient messages.26 Since ChatGPT is not constrained by time the way human doctors are, it can often generate better, more empathic answers. This could be helpful for hospitals like UC San Diego, which saw patient messages jump from 50,000 a month before the pandemic to over 80,000 a month after.27

Why It Matters to Financial Services

There are many similarities between the health and financial industries, including high stakes, strict regulations, confusing terminology, and well-documented risks. And just as new asset types, investing models, and fintechs are cropping up in financial services, healthcare is fast-changing, too. To wit, there are currently over 400,000 clinical trials underway worldwide.28 These similarities mean healthcare may offer learnings for applying LLMs in financial services. For example, we’ve seen advantages in more people being able to understand their own complex health situations and even receive advice from LLMs; they will likely seek similar information about their finances. We can also learn from the limitations, such as ChatGPT needing perfect information for diagnoses, inventing evidence, and possibly exerting too much influence on its users. Understanding these tradeoffs can help identify the best use cases for financial services as well as how to apply them within the existing regulatory framework.

And just as the role of the healthcare professional may change, so will the roles of the financial advisor, asset manager, phone rep, and on and on. For physicians, this technology helps them focus on their primary responsibility – patients, not documentation – and therefore they may be quick to embrace it. Perhaps we will see a similar trend for financial services professionals – embracing technology to answer basic customer queries and write explanatory materials so that they can focus on critical aspects of their roles.

The opinions provided are those of the author and not necessarily those of Fidelity Investments or its affiliates. Fidelity and any other third parties mentioned are independent entities and not affiliated. Mentioning them does not suggest a recommendation or endorsement by Fidelity.
1 Our latest health AI research updates. (2023, March 14). Google.
2 Kung, T. H., Cheatham, M., Medenilla, A., Sillos, C., De Leon, L., Elepaño, C., ... & Tseng, V. (2023). Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. PLoS digital health, 2(2), e0000198.
3 https://www.nytimes.com/2023/05/01/business/ai-chatbots-hallucination.html
4 https://www.pewresearch.org/internet/2013/01/15/information-triage/
5 https://twitter.com/corp/status/1644535332820623360
6 https://twitter.com/hemeon/status/1650917569036173312
7 https://twitter.com/yanda/status/1652050041039884288
8 Ayoub NF, Lee Y, Grimm D, Balakrishnan K. Comparison Between ChatGPT and Google Search as Sources of Postoperative Patient Instructions. JAMA Otolaryngology Head Neck Surg. Published online April 27, 2023. doi:10.1001/jamaoto.2023.0704
9 Gibbs, A. (2023, March 28). Man details how GPT-4 AI software helped save dog’s life. Newsweek.
10 https://www.fastcompany.com/90863983/chatgpt-medical-diagnosis-emergency-room
11 https://www.cnn.com/2023/04/13/health/suicide-rates-2021-cdc/index.html
12 Weiner, S. (2022, August 9). A growing psychiatrist shortage and an enormous demand for mental health services. Association of American Medical Colleges.
13 Ingram, D. (2023, January 14). AI Chat used by mental health tech company in experiment on real users. NBC News.
14 Clark, P. A. (2023, March 8). AI bot therapy with ChatGPT, despite cautions, finds some enthusiasts. Axios.
Reddit, JustinCord. (2023, February 15). Experience Using ChatGPT for Therapy.
15 Griffith, E. (2023, May 3). My Weekend With an Emotional Support A.I. Companion. The New York Times.
16 Xiang, C. (2023, March 30). “He Would Still Be Here”: Man Dies by Suicide After Talking with AI Chatbot, Widow Says. Vice.
17 Bruce, G. (2023, April 10). Boston Children’s Hospital hiring AI prompt engineer for ChatGPT. Beckers Hospital Review.
18 Using AI as a Medical Student? ChatGPT Changes Everything. (n.d.). YouTube.
19 Lee, P., Goldberg, C., Kohane, I. (2023). The AI Revolution in Medicine: GPT-4 and Beyond.
20 MD, J. F. (2023, January 11). Fun with OpenAI, medical charting, and diagnostics. (Also: I just got lied to by a bot). Inside Medicine.
21 MD, J. F. (2023, January 11). Fun with OpenAI, medical charting, and diagnostics. (Also: I just got lied to by a bot). Inside Medicine.
22 Diaz, N. (2023, May 3). Can ChatGPT get rid of healthcare’s $1 trillion administrative burden cost? Beckers Hospital Review.
23 Schubert, C. (2023, March 20). Microsoft subsidiary Nuance is using GPT-4 for a new physician notes app. GeekWire.
24 https://www.npr.org/sections/health-shots/2023/04/05/1167993888/chatgpt-medicine-artificial-intelligence-healthcare
25 Ayers JW, Poliak A, Dredze M, et al. Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum. JAMA Intern Med. Published online April 28, 2023. doi:10.1001/jamainternmed.2023.1838
26 https://www.wsj.com/articles/dr-chatgpt-physicians-are-sending-patients-advice-using-ai-945cf60b
27 https://www.wsj.com/articles/dr-chatgpt-physicians-are-sending-patients-advice-using-ai-945cf60b
28 Trends, Charts, and Maps - ClinicalTrials.gov. (2019). Clinicaltrials.gov.

