
// HOSPITALS //


Code bed

How hospitals and health systems can maintain trust in a world of rapid AI adoption


The rapid adoption of AI tools across the healthcare industry has vast potential to save both providers and payers time and money while improving patient encounters and outcomes.

Yet rapid adoption brings additional duties: namely, to use AI tools responsibly (AI governance) and to communicate to all parties, especially patients, how those tools are being used (AI transparency).

Here are some of the thorniest issues at the forefront of AI governance and transparency discussions among healthcare industry leaders.

Communicating AI policy

Every AI adoption debate begins with the need to balance risk and reward. Low-risk, high-reward technology is usually adopted more quickly by payers and providers. (This is true outside the healthcare industry as well.)

Effectively communicating the (hopefully) low-risk nature of your organisation’s AI-driven practices and procedures to clinicians and patients might not be required by law, but it helps build trust.


‘Implementing patient-facing AI tools demands more trust
from stakeholders, especially those used directly in the chain of clinical decision-making’


In any instance of AI adoption, keeping a ‘human in the loop’ is the industry standard for mitigating risk. It assures stakeholders that you employ a real person who can take ownership of any issues that arise with new technologies.

There are many examples of low-risk AI software that can save payers and providers time and money without directly impacting clinical decision-making or supporting patient treatments. Technologies that hasten or inform claims decisions, billing processes and other administrative functions are far removed from the doctor’s office or operating theatre.

Implementing patient-facing AI tools demands more trust from stakeholders, especially those used directly in the chain of clinical decision-making. It is fair for patients to expect the provider to disclose when their data – a recorded conversation with a doctor, for example – is being processed by an AI tool.

But how much do patients need to know about how each AI tool is being applied to their treatment, as opposed to trusting clinicians and their associated hospital or healthcare system to use AI tools responsibly? How much transparency should patients demand?

To opt in or opt out

Patient trust is integral to the value of a hospital or health system’s brand. Part of any organisation’s risk/reward calculus is whether introducing an AI tool might jeopardise that trust. For a CEO or CMO, communicating how your hospital uses AI has become essential to maintaining or enhancing it.

Consent forms are standard practice in a variety of medical settings. Legislators require patients to provide written consent as a matter of routine prior to everything from dental check-ups to X-rays to major surgeries. Asking patients to consent to a hospital’s use of AI could be as easy as adding a new paragraph to an existing form. Yet the easiest solution might not be the best.

Consent forms generally perform two functions: (i) securing the patient’s informed consent by describing a procedure and its alternatives and then asking the patient’s permission to proceed with treatment; and (ii) outlining how the patient’s protected health information will be handled under the Health Insurance Portability and Accountability Act of 1996.

A consent form is a good way for a provider to disclose with as much transparency as possible (i.e. without revealing trade secrets) how patients’ data might be used by AI tools. Patients are already accustomed to signing consent forms before they enter the clinician’s office, so what is the issue?

There are generally two.

First, this process expects too much of patients. Before implementing an AI tool, the provider should already have vetted it through its AI governance process or a similar IT or vendor-management function.

That means the provider has, at a minimum, gathered technical information about the AI tool, analysed that information to determine that the tool will perform as intended, assessed security and other related risks, identified risk-mitigation strategies and ultimately concluded that the tool is appropriate for use.
Why would we expect an individual patient to be able to perform a similar analysis?

Is it fair to expect patients – most of whom are already dealing with a health concern – to comprehend and make thoughtful decisions about what is in an AI consent form prior to their treatment?

A hospital or health system might more reasonably ask patients to trust that any AI tools touching their care are used responsibly, rather than burden them with determining on their own whether each tool is appropriate.

Second, even if a patient happens to be an AI expert, opting out of the use of AI in their course of treatment may not be practical.

A hospital or health system that deeply integrates AI tools into its processes might have a hard time accommodating patients who do not consent to their use. Opting out of the use of AI is not as simple as turning off a computer.

Opting out is not an unreasonable option for patients who do not grasp the breadth and depth of AI technology. But the patient would need to understand that opting out, to the extent it is even possible, would likely have a negative impact on the patient encounter, the treatment and perhaps the outcome. Would it not be better for the provider to communicate to the patient that AI is incorporated into its processes and designed to improve their treatment, and that every AI tool it deploys has been through a governance review, requires ongoing monitoring and includes ‘human-in-the-loop’ standards?

These issues point to a larger question facing the industry amid rapid AI adoption: is there a better medium for transparency than a consent form? If so, is there a better (or additional) medium for establishing trust?

Public/private collaborations

How should an organisation communicate to patients that it has done the work necessary to determine ‘we use AI responsibly’? Should legislators be involved in that determination? If so, is it a state or federal concern?

To this point, states have taken the lead on AI-related legislation. Where state laws diverge, complying with multiple sets of regulations is an obstacle for organisations that do business in multiple states. Does that foster more responsible use of AI, distract from it or, even worse, create a false sense of responsible use among the various stakeholders?

Legislators – not just hospitals and health systems – are wrestling with how much patients should be asked to consent to on their own. But the question of whether individual businesses, state legislatures or the federal government should be the final authority on the limits of AI consent forms is a false choice.

CHAI, the Coalition for Health AI, is an example of a private entity composed of industry collaborators for the purpose of establishing shared standards for responsible AI use.

Its Responsible AI Guide serves as a playbook for the development and deployment of AI in healthcare, providing actionable guidance on ethics and quality assurance. Its template for an Applied Model Card is an easy-to-understand guide for specific AI use cases, adaptable for a variety of purposes.

Even in collaboration with regulators, however, a body such as CHAI might need to expand its work to align with international guidelines for healthcare organisations. A business that operates in multiple countries might need more input than US-based business leaders and legislators can provide.

The 2024 EU Artificial Intelligence Act establishes an AI Office to oversee implementation across member states, but non-member nations such as the UK, Norway and Switzerland can impose their own rules and regulations.

Establishing trusted third-party public or private collaborations would have at least one additional benefit: relieving healthcare providers of much of the cost and burden of reviewing, governing and monitoring AI tools on their own.

Final analysis

Establishing what it means for an AI policy to be ‘transparent’ in the healthcare industry is step one in earning trust from stakeholders, especially patients.

Step two is communicating that policy to patients, clinicians and other stakeholders – a challenging task as rules, regulations and best practices evolve almost as rapidly as AI technology itself.

Bringing all the relevant parties to the table, some of whom might have wildly divergent ideas and concerns, and gaining consensus on standard industry practices for the responsible use of AI is not easy. However, it is preferable to a patchwork of laws and regulations that are incompatible across jurisdictions.

Putting patients first without placing additional burdens on them requires time and money from an industry strapped for both. It is, however, a necessary investment, possibly offset through public or private collaborations, in order to maintain trust in the long term.


 Gregg Killoren is General Counsel at Xsolis