// HEALTHCARE //
How AI health diagnostics could save lives – or create new risks
Like many other people, I’m looking forward to the day when I can call a self-driving taxi to drive me home from the pub, without having to listen to the driver moaning about immigrants or droning on about the time he had Bertrand Russell in the back of his cab.
I’m less excited about the road accidents involving self-driving vehicles that, I have no doubt, will become global media events.
We have become used to people being seriously injured or even killed as a result of careless or reckless human behaviour, but we will not countenance the same if the perpetrator is a machine.
For reasons both obvious and more nuanced, we regard automated misadventure as far more catastrophic than anything done by flesh and blood.
The same applies in a medical context, which is bad news when you consider that, in the not-too-distant future, seeking medical attention from a human practitioner will be the exception rather than the norm.
The rise of home diagnostics, automated healthcare and now artificial intelligence is moving us ever closer to a stage where, for most public health delivery services, the initial triaging of non-emergency cases will be done remotely.
The rise of AI-powered health apps that claim to diagnose conditions in real time is transforming how we approach healthcare. From symptom checkers to wearable ECG monitors and AI stethoscope apps, these tools promise early diagnoses and personalised healthcare at our fingertips.
The British NHS’s new ‘Doctor in Your Pocket’ initiative represents a significant leap into AI-driven healthcare, offering patients instant access to diagnostic tools and health advice via smartphones.
The initiative promises to streamline triage, reduce waiting times and give patients real-time insight into their health, from symptom checking to chronic disease management.
But as these technologies become more sophisticated, a critical question emerges: Are they genuinely helpful, or do they introduce new dangers? And what happens when they go wrong?
AI-driven health diagnostics are no longer science fiction. Today, apps can analyse heart rhythms, detect skin cancer from photographs and even predict potential health risks based on lifestyle data.
Wearable devices like smartwatches monitor vital signs continuously, alerting users to irregularities that might indicate serious conditions.
For many people, these tools offer unprecedented access to medical insights, reducing the need for frequent GP visits and enabling earlier interventions.
The potential benefits are significant. AI can process vast amounts of data far more quickly than a human doctor, identifying patterns that might otherwise go unnoticed.
In cardiology, for example, AI-powered imaging can detect subtle abnormalities in heart function, potentially preventing heart attacks before they happen. Similarly, AI algorithms in radiology can flag early signs of cancer in X-rays and MRIs with remarkable accuracy.
For patients in remote or underserved areas, AI diagnostics could be life-changing. A smartphone app that detects atrial fibrillation or diabetic retinopathy could bridge gaps in healthcare access in places where medical professionals are scarce. The convenience is undeniable: why wait for a doctor’s appointment when an AI can provide instant feedback?
Yet, for all their promise, AI health tools come with serious risks. One of the most pressing concerns is misdiagnosis. AI models are only as good as the data they’re trained on, and if that data is flawed or incomplete, the results can be dangerously inaccurate.
A study by Stanford Medicine found that some AI diagnostic tools performed well in controlled lab settings, but faltered in real-world scenarios, where patient diversity and environmental variables introduced unpredictability.
False positives and false negatives are another major issue. An AI app that incorrectly reassures users that their chest pain is harmless could delay critical treatment, while one that falsely flags a benign mole as malignant might trigger unnecessary anxiety and even medical procedures.
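The scale of the false-alarm problem is easy to underestimate, because it depends on how rare a condition is, not just on how accurate a tool claims to be. The short sketch below makes the point with purely illustrative numbers; the sensitivity, specificity and prevalence figures are assumptions chosen for the example, not measurements of any real product.

```python
# Back-of-the-envelope base-rate arithmetic for a hypothetical AI screening tool.
# All three figures below are illustrative assumptions, not real-world data.

sensitivity = 0.95   # share of true cases the tool correctly flags
specificity = 0.95   # share of healthy users it correctly clears
prevalence = 0.01    # share of users who actually have the condition

population = 100_000
cases = population * prevalence            # 1,000 users with the condition
healthy = population - cases               # 99,000 users without it

true_positives = cases * sensitivity               # 950 genuine alerts
false_positives = healthy * (1 - specificity)      # 4,950 false alarms

ppv = true_positives / (true_positives + false_positives)
print(f"Share of alerts that are genuine: {ppv:.0%}")   # roughly 16%
```

On these assumptions, a tool that is ‘95% accurate’ still produces more than five false alarms for every genuine one, simply because most of the people it screens are healthy. That base-rate effect, rather than headline accuracy, is what determines how useful a consumer diagnostic really is.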
Unlike human doctors, AI lacks the ability to contextualise symptoms: it doesn’t know if a patient has a history of health anxiety, or whether their symptoms align with common, non-threatening conditions.
Regulation is another grey area. Should AI diagnostic apps be classified as medical devices, subject to the same rigorous testing as traditional diagnostics? In many jurisdictions, the answer is unclear. The US Food and Drug Administration (FDA) has begun tightening oversight, but gaps remain. Without standardised validation, consumers may unknowingly rely on unproven – and potentially hazardous – tools.
Beyond accuracy, AI health tools raise thorny ethical and legal questions. If an AI app provides faulty advice that leads to harm, who is liable? The developer? The healthcare provider endorsing it? The user who misinterpreted the results? Legal frameworks have yet to catch up with these scenarios, leaving patients and providers in uncertain territory.
Data privacy is another major concern. Many AI health apps collect sensitive personal information – heart rate, sleep patterns, even genetic predispositions. If this data is mishandled or breached, it could be exploited by insurers, employers or malicious actors.
Imagine a scenario where an insurance company adjusts premiums based on AI-predicted health risks, or an employer screens job candidates using wellness data from their wearables. The potential for discrimination is alarming.
Then there’s the psychological impact. The ease of self-diagnosis can fuel ‘cyberchondria’ – a modern form of health anxiety where users obsessively research symptoms, often convincing themselves of worst-case scenarios. Unlike a doctor who can offer reassurance, an AI tool may simply present probabilities, leaving users spiralling into unnecessary fear.
So, where does this leave us? Will AI doctors replace general practitioners, or will they remain assistive tools? The most likely scenario is a hybrid model: AI handling routine diagnostics and data analysis while human doctors focus on complex cases, patient communication and emotional support.
Human oversight remains crucial. AI can identify a potential tumour, but a doctor must interpret that finding in the context of the patient’s overall health. AI can suggest treatment options, but a physician must weigh risks, discuss alternatives and consider the patient’s values and preferences.
The challenge for regulators, developers and healthcare providers is to strike a balance, harnessing AI’s potential while safeguarding against its pitfalls.
Robust validation, transparent algorithms and clear accountability frameworks will be essential. Patients, too, must approach AI diagnostics with caution, using them as supplements – not substitutes – for professional medical advice.
AI health diagnostics are here to stay, and their capabilities will only grow. They hold immense promise for improving healthcare accessibility and efficiency, but they also introduce new risks that cannot be ignored. The key lies in responsible development, rigorous oversight and informed usage.
As we integrate these tools into our lives, we must remember that AI is a powerful assistant – not an infallible authority. The best healthcare will always be a partnership between cutting-edge technology and human expertise.
For now, the ‘doctor in your pocket’ should be treated not as a replacement for real medical care, but as a tool to enhance it, used wisely and with a healthy dose of scepticism.
Ivor Campbell is Chief Executive of Snedden Campbell