How can AI win the trust of doctors and nurses?

In August 2023, the American Medical Association surveyed over 1,000 physicians about their sentiments toward AI. The results, published at the end of that year, painted a nuanced picture: an undeniable undercurrent of enthusiasm, tempered by a level of trepidation that could not be ignored.

While 65% recognized the potential benefits of AI, nearly 70% expressed some level of concern. In many respects, these results didn’t surprise me. Healthcare is a highly regulated field, and for good reason. Every new drug or medical device must go through testing and regulatory approval. Clinicians need to be qualified and licensed. These requirements exist solely to ensure patient safety.

But one thing that did surprise me was that an overwhelming majority of physicians said they would like some degree of input—if not responsibility—in how their practices adopt and use artificial intelligence. Just over a third—36%—said they would “like to be responsible,” whereas 50% said they would “like to be consulted.” A further 5% said they would like to be informed.

Here’s the thing: AI is already used in healthcare, from systems that transcribe and digitize notes to translation tools. Over the next decade, we will see AI play an even bigger role, at first streamlining administrative tasks, and eventually helping diagnose and treat patients.

And I’m not convinced that the biggest barrier to adoption will be purely regulatory. Sure, it can take years for the FDA—or its foreign equivalents—to approve a new treatment or medical technology. Vendors need to prove that their AI systems comply with existing legislation, particularly when it comes to things like patient privacy, and that their technology is safe and effective.

The biggest barriers to adoption, I believe, will come from clinicians themselves who, out of an abundance of caution, will resist AI technologies until they are convinced of their safety and their usefulness. Moreover, I don’t believe that healthcare administrators or technology companies can achieve this through a top-down approach.

Eventually, those pushing AI in healthcare will find themselves at a crossroads. They can disregard those concerns, dismissing them as a kind of neo-Luddism or as simple self-interest, and seek to win through attrition and perseverance. Alternatively, vendors and administrators can try to foster a productive relationship with clinicians, giving doctors and nurses real influence, a platform to voice their concerns, and a say in the direction of AI healthcare technology.

It’s the latter path that will prove most successful, but how can it work in practice?

Healthcare AI’s Ground Zero

California is home to America’s—if not the world’s—most vibrant and important technology ecosystem. So it shouldn’t come as a surprise that many healthcare AI technologies are first deployed in the Golden State before eventually spreading across the country, and then the globe.

But that enthusiasm is not universally shared. In April of this year, National Nurses United—America’s largest nursing union—picketed outside the Kaiser Permanente Integrated Care Experience conference, held in San Francisco. Holding signs that read “patients are not algorithms” and “trust nurses not AI,” they sought to push back on the growing adoption of AI systems that, they felt, were “unproven” and compromised patient safety.

In one survey published by the union in May, 69% of nurses said that algorithmically generated assessments of patient acuity (a category that includes AI) did not match the judgment of nursing staff, and failed to consider elements like “the educational, psycho-social, or emotional needs of a patient or their families.”

The survey also noted that these systems were seldom designed with the complicated and often hectic realities of hospitals in mind, relying on “a nurse’s ability to chart in real time, which is rarely possible when hospitals are understaffed and health care workers are overloaded with patient assignments.”

The respondents were similarly critical of AI-generated handoff notes, which are essential to ensure continuity of care when a new shift starts work. A shocking 48% said that these notes regularly failed to match their own clinical assessments and missed critical details.

Earlier this year, National Nurses United published the Nurses and Patients’ Bill of Rights—a two-page document that, in simple terms, outlined seven guiding principles for how AI should be used in healthcare. A core theme among the demands was the right of healthcare workers to exercise their professional judgment without fear of being overridden by artificial intelligence, and the right of healthcare professionals to decide which AI technologies are used, and how.

As the founder of an AI health tech company, I wholeheartedly agree with their points. On a moral, practical, and—frankly—self-interested level, it makes sense to bring healthcare practitioners into the conversation, and to give them a level of influence and power.

It’s moral, because it’s the right thing to do. If you care about patient safety and patient outcomes, doctors and nurses need to be part of the conversation. It’s practical, because at the end of the day, clinicians will be either your toughest opponents or your fiercest advocates.

And, from the perspective of one’s own self-interest, it’s hard to build a good healthcare product if you don’t understand the realities of working in a hospital or clinic. It would be akin to asking someone who has never driven a car to design the next Tesla or BMW. The best and most successful technology companies solve problems. You can’t do that if you don’t actually understand the problem.

Building Trust Through Collaboration

In January of this year, nurses clinched the top spot in the Gallup Most Honest and Ethical Professions Poll for the 22nd consecutive year. Nurses beat veterinarians, engineers, dentists, and—of course—lawyers and politicians. Broadly speaking, healthcare professionals were seen as the most trustworthy members of society.

Compare that to how people perceive artificial intelligence. According to the 2023 Bentley-Gallup Business in Society Report, 79% of Americans don’t trust companies to use AI responsibly. The 2023 MITRE-Harris AI Trust Poll showed that only 48% of Americans believe AI is safe and secure.

I mention this because it illustrates the challenges that lie before AI vendors and healthcare administrators. They are taking the most trusted profession—one whose members routinely make life-or-death decisions—and asking it to put its faith in a sector of the technology industry that most people have deep concerns about.

And I would further argue that the challenge becomes even more complicated when you consider that the stakes are higher for clinicians. If a nurse or a doctor screws up, they could lose their license, be sued, see their liability insurance premiums skyrocket, or—in some cases—even go to prison.

Tech companies—and their founders—only experience that level of risk when there’s demonstrable fraud, as with Theranos founder Elizabeth Holmes. The Silicon Valley understanding of failure—where it’s something to be celebrated as a painful, though necessary step toward success—doesn’t apply in healthcare, nor should it.

The only solution—really, the only way to build that trust—is to put nurses and other clinicians at the heart of any decision about how AI is used in healthcare, and how these AI tools are built.

Vendors need to demonstrate, to an unassailable level of confidence, that their products work and are safe. If they approach this process with hubris, or see their problem as one of persuasion, they will fail. Smooth words, PR spin, and rhetoric are not enough. Nurses and doctors will—and should—ask difficult questions, and the only correct answers are those that address the heart of the matter.

Most painful of all, vendors need the courage to recognize when a clinician makes a valid point and be willing to return to the drawing board.

This collaborative process should begin early, and preferably while the product is being designed and developed. At my company, we made a conscious decision to hire doctors, nurses, and other allied medical professionals—both practicing and former—from the very beginning, and their input has proven to be invaluable.

We were also wary of creating our own filter bubble by relying too heavily on our own staff, so we routinely seek feedback from clinicians outside the company.

This is a model that has worked for us, and it is one that, I believe, should be replicated in other AI health tech companies.

I am an optimist, and I do believe that the pervasive skepticism toward AI in healthcare will eventually ebb, particularly as AI proves itself in “low-hanging fruit” tasks like notetaking and administration. I think trust will come naturally, just as it did for other technologies in healthcare that were once resisted.

I also recognize the grand aspirations of AI health tech vendors, and their desire to see AI play a pivotal role in monitoring, diagnosing, and treating patients. And for us to cross that Rubicon, we need to actively engage with healthcare professionals.

Nurses and doctors aren’t our adversaries; they are, potentially, our most useful collaborators.
