July/August 2023 • PharmaTimes Magazine • 10-11

// COVER STORY //


Oh my, it’s AI!

How can we create trust in AI across the vast pharmaverse?


The proliferation of AI/ChatGPT resources during the initial months of 2023 has raised several questions across the pharma industry around its use and, indeed, misuse.

As recently as March, commentary on its effectiveness and potential was circulating across the internet. In December 2022, Sam Altman, CEO of OpenAI, was quoted as saying: “ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness.”

We were left with questions. ChatGPT and other AI tools, such as Google Generative AI, have the potential to play a positive role in pharma. Google Generative AI makes it easier for developers to build on its platform and pulls answers from medically approved content.

At the time of writing, there are still limitations around AI's compliance with HIPAA, which creates difficulties for related research. Now, a recent cross-sectional study of 195 randomly drawn patient questions from a social media forum has allowed us to understand the implications of using AI and better understand its future use.

The study asked: ‘Can an artificial intelligence chatbot assistant provide responses to patient questions that are of comparable quality and empathy to those written by physicians?’

With the rise of virtual healthcare has come a rise in patient messages, putting more pressure on already burnt-out healthcare professionals. The idea that tools like ChatGPT and Google’s Bard could improve not only the patient experience but also the working lives of healthcare professionals offers a glimmer of hope.

This article explores the findings of a cross-sectional study and discusses the potential of AI, particularly ChatGPT, in the pharmaceutical industry.

Addressing physician burnout

Physician burnout has become a growing concern in the healthcare industry, with limited time for patient interaction and information overload being key contributors. Post-pandemic studies, including one published in Mayo Clinic Proceedings, found that satisfaction with work-life integration dropped from 46.1% in 2020 to 30.2% in 2021.

Furthermore, depression scores rose from 49.5% in 2020 to 52.5% in 2021. Physicians are burnt out, lack time with patients and spend much of their day on documentation rather than direct care. Healthcare providers (HCPs) are eager to have accurate, patient-friendly information at their disposal to communicate effectively with patients.

ChatGPT has proven to be a valuable tool in addressing this challenge by providing personalised and easily understandable responses. For instance, a physician can quickly retrieve information on reversing type 2 diabetes and relay it to the patient, leading to improved patient satisfaction and more efficient appointments.

Some organisations, however, remain cautious about the accuracy and vetting process of AI-generated messages, preferring medically reviewed sources like the Cleveland Clinic’s website. Striking the right balance between AI assistance and trusted information sources is crucial to building trust and ensuring patient safety while alleviating the work of the HCP.

Better patient care

The study found that ‘investments into AI assistant messaging could affect patient outcomes. If more patients’ questions are answered quickly, with empathy, and to a high standard, it might reduce unnecessary clinical visits, freeing up resources for those who need them.’ This would solve one of the biggest challenges of healthcare customer service.


‘When AI gets a few diagnoses wrong, society will easily lose faith in the system, even though it’s proven that AI is already outperforming doctors’


When people are diagnosed with a condition, they are in a heightened emotional state: overwhelmed and, understandably, in need of attention and care. The system as it stands has failed them consistently. Traditional approaches often fall short, as physicians and administrators lack the time and knowledge to handle frequent patient queries.

Web 1.0 and 2.0 platforms are rife with inaccurate information, leaving patients frustrated and misinformed. Enter AI, which can easily answer the most frequently asked questions, providing accurate responses in a soothing voice or, at the very least, in plain English. Even more importantly, AI answers are free and accessible.

As with TikTok, patients can access answers instantly, without waiting hours or days to speak to a credible source.

AI-assisted messaging for clinicians

The study also aimed to assess whether an AI chatbot could provide responses to patient questions that are comparable in quality and empathy to those written by physicians. The results revealed that while the performance of chatbots in a clinical setting remains uncertain, they could be used to draft messages for clinicians or support staff to edit.

This approach aligns with current message response strategies, where canned responses or drafts by support staff are common. Imagine you have just been diagnosed with triple-negative invasive ductal carcinoma. Sitting in the doctor’s office, you may be too shocked to understand or take in much, if any, information. You go home and jump onto a search engine: ‘What does this mean? How serious is it?’

Faced with answers about treatment options and confusing medical terms, you may become overwhelmed. AI has the power to quickly help patients and their loved ones make sense of a diagnosis and decide what steps to take next.

AI can also quickly interpret confusing conditions and other clinical terminology, so patients can learn from the comfort of their own home. By utilising AI-assisted messaging, clinicians can save time, enhance productivity and focus on more complex tasks, ultimately improving overall communication.

Moreover, consistent responses and personalised AI-written drafts can aid in better patient understanding, leading to more efficient consultations and reduced workload for healthcare professionals.
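The draft-then-edit workflow described above can be sketched in a few lines of Python. This is a minimal illustration, not a real implementation: `generate_draft` is a hypothetical stand-in for whatever vetted language-model service an organisation uses, and the key point is that a human reviewer always approves the text before it reaches the patient.

```python
# Sketch of AI-assisted messaging: the model drafts, the clinician
# reviews and approves before anything is sent to the patient.
# `generate_draft` is a hypothetical placeholder, not a real API.

def generate_draft(question: str) -> str:
    # In production this would call a compliant, medically vetted
    # language-model backend; here it just returns a canned draft.
    return (
        f"Thank you for your question about {question}. "
        "Here is some general information to discuss with your care team."
    )

def respond_to_patient(question: str, clinician_review) -> str:
    draft = generate_draft(question)
    # Human sign-off: the clinician (or support staff) edits or
    # approves the draft; nothing is sent without review.
    return clinician_review(draft)

# Example: the clinician approves the draft unchanged.
reply = respond_to_patient("managing type 2 diabetes", lambda draft: draft)
```

The design choice worth noting is that the AI never messages the patient directly; it only reduces the clinician's drafting burden, mirroring the study's suggestion that chatbots could draft messages for clinicians or support staff to edit.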

Personalised care

Personalising responses based on an established clinician-patient relationship remains an unexplored aspect in academia. Additionally, the study did not evaluate the chatbot’s ability to extract details from electronic health records, which can provide valuable context in healthcare consultations. To overcome these limitations, AI providers such as Google are actively working on improving their systems.

They are developing flexibility and openness, allowing businesses to set parameters for the AI’s advice and continuously adjust materials for accuracy. Moreover, investments in AI platforms that leverage patient data can enable precise recommendations tailored to individual needs, ultimately enhancing patient outcomes through data-driven insights.

For example, we partnered with Google Looker to leverage key data points about many different patients affected by Alzheimer’s, and the platform was able to make treatment recommendations. So, what does this mean? The more data AI platforms have, and the more oversight there is to ensure accuracy, the more confidence we will have in AI’s ability to make recommendations. It’s really a matter of time.

The final analysis

There are still limitations to using AI in patient care. Statistically speaking, AI is outperforming doctors in diagnosing patients accurately. In one peer-reviewed study, the average diagnostic accuracy of doctors was 71.40%, while the counterfactual algorithm scored higher at 77.26%. This placed it in the top 25% of doctors, achieving ‘expert clinical accuracy’.

We aren’t out of the dark yet. When AI inevitably gets a few diagnoses wrong, society will easily lose faith in the system, even though it’s proven that AI is already outperforming doctors. While trust is growing, we will need to identify ways to build and sustain trust in AI. Further, AI hallucinations run the risk of producing rogue answers that are not accurate. This is an unresolved issue that could hinder mass adoption of AI by regulated organisations.

Overall, it’s a hopeful world we live in. What will happen to patient outcomes and physician burnout when AI is used to help bridge the messaging gap? How quickly will we see physicians leveraging these tools? How quickly can we demonstrate improved patient outcomes as a result of faster, more accurate diagnoses?

While the future is bright, the way we utilise these tools is the key to how helpful they become.


Steve Peretz is Group Director, Health Experience & Product Strategy at Appnovation. Go to appnovation.com