Would you trust AI with your health?


Questions like “Would you trust a robot to drive you around?”, “…to make investment decisions for you?” or “…to recommend medications for you?” will only become more common as AI becomes more “thoughtful” and learns to make more relevant, personalized decisions.

The questions about AI in health care feel like some of the most consequential and personal, which is why now is a great time for technology companies and the general public to speak in plain English about the encouraging possibilities — and the potential downsides. Let’s look at the inroads AI is making in the health care industry and why we might need to temper our excitement with at least a modest dose of skepticism.

How much are we spending on AI applications for health care?

Lots of companies are betting serious money on artificial intelligence as the future of health care, and the projected spending, according to Accenture, could be staggering. Their analysis indicates a likely global total of $150 billion by 2026, with the largest application categories including:

  • Robotic surgery — $40 billion
  • Virtual nursing assistants — $20 billion
  • Administrative assistance and automation — $18 billion
  • Automated fraud detection — $17 billion
  • Dosage error reduction — $16 billion
  • Smart identification for clinical trial participants — $13 billion
  • Preliminary diagnosis — $5 billion
  • Image-based diagnosis — $3 billion

Other estimates point in the same direction, indicating a compound annual growth rate of 40 percent or higher between 2017 and 2027 for most of these applications.
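
For readers who want a feel for what a 40 percent compound annual growth rate means in practice, the arithmetic is simple: multiply a starting value by (1 + rate) once for each year. The short Python sketch below is purely illustrative; the $2 billion starting figure is a made-up placeholder, not a number from the Accenture analysis or any other report.

```python
# Illustrative only: project a market size forward under a constant CAGR.
# The starting value and rate below are assumptions, not published figures.

def project_cagr(start_value: float, rate: float, years: int) -> float:
    """Return start_value grown at a constant compound annual rate for `years` years."""
    return start_value * (1 + rate) ** years

start_2017 = 2.0   # hypothetical $2B market segment in 2017 (billions of dollars)
rate = 0.40        # a 40 percent compound annual growth rate

for year in range(2017, 2028):
    print(f"{year}: ${project_cagr(start_2017, rate, year - 2017):.1f}B")
```

At that pace a market roughly doubles about every two years, which is why even small starting segments can reach multibillion-dollar size within a decade.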

What are the advantages of AI in the health care industry?

Health care is especially susceptible to tradeoffs between affordability and effectiveness. And yet AI stands a good chance of making these services more accessible, more affordable, and capable of delivering better, more timely, and more personalized results. For health care providers, the immediate goal is to cut costs, but using AI to do so could also make them better stewards of human health and wellness.

Let’s look at some of the areas seeing the heaviest investment to get a sense of what AI brings to the table. Health care is easily one of the most important, controversial and expensive human enterprises — but AI could usher in more inclusive and effective services in a variety of ways:

  • Robotic surgery: Surgical tools enhanced with AI will have the ability to intelligently incorporate pre-op patient records to help plan and carry out minimally invasive and highly effective surgeries. Instrument-wielding robotic arms powered by AI can guide surgeons’ hands during a procedure and help bring down recovery times.
  • Virtual nursing assistants: Whether patients receive care at home or in assisted living centers, seniors and others who struggle with daily care, taking medications on time, or assessing changes in their condition need responsive and personalized care. For rural patients, timely intervention is even more difficult — and nursing staff is at a premium almost everywhere. Virtual assistants can help avoid unnecessary caretaker visits, ask personalized questions to gauge recovery or improvements over time, and alert hospital staff if the patient’s condition takes a turn.
  • Administrative automation: A considerable amount of the time and expense in health care involves back-office tasks like writing and transcribing charts, notes, and patient health records; ordering tests; and avoiding (and checking for) errors while organizing files for hundreds or thousands of patients. AI-powered assistants could greatly reduce the administrative burden so health care workers can focus on rendering more, and better, care.
  • Preliminary and image-based diagnoses: The humble smartphone is already proving what’s possible when AI gets involved in the diagnosis process. Some application developers claim to identify respiratory conditions from the sound of a patient’s cough. Others use photographs of a patient’s skin to identify skin conditions and cancers, and still others use MRIs, X-rays and even children’s health records to diagnose genetic conditions that might otherwise have flown under the radar (a simplified sketch of how such an image classifier is built follows this list).
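
To make the image-based example a bit more concrete, here is a deliberately simplified sketch of how a skin-image classifier might be prototyped by fine-tuning a pretrained network. Everything specific in it is an assumption for illustration: the data/skin_lesions folder layout, the class labels, and the ResNet-18 backbone are hypothetical choices, and nothing this simple would be deployed without rigorous clinical validation.

```python
# A minimal transfer-learning sketch using PyTorch/torchvision (assumed installed).
# The folder layout data/skin_lesions/<class_name>/*.jpg is hypothetical.
import torch
from torch import nn
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

dataset = datasets.ImageFolder("data/skin_lesions", transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=16, shuffle=True)

# Start from an ImageNet-pretrained backbone and retrain only the final layer.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # a single pass, for illustration only
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```

The heavy lifting here is done by the pretrained network; the smartphone-style products described above differ mainly in the scale and quality of the labeled medical images they train on.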

This barely scratches the surface of the AI applications under active development. For example, at a time when drug companies are under closer scrutiny than ever for price-gouging and complicity in the opioid epidemic, AI could act as a check against unnecessary prescriptions, help identify and test new pharmaceuticals more quickly, and cast a wider net for qualified candidates in drug trials. This could also be a way to combat prescription drug abuse and encourage people to enter a drug rehab program.

Machine learning is the process that “trains” an artificial intelligence system and makes it “smarter” over time: the more representative patient records it learns from, the better it generally becomes at drawing meaningful and accurate conclusions, and each well-labeled record can sharpen the next diagnosis.
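
As a concrete, if deliberately toy, illustration of that training-and-evaluation loop, the sketch below fits a simple classifier on synthetic “patient record” features and measures its accuracy on held-out cases. The features and outcomes are randomly generated stand-ins, not real clinical data.

```python
# A toy supervised-learning loop on tabular "patient records" (scikit-learn assumed).
# All data here is synthetic; no real clinical features or outcomes are used.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_patients, n_features = 1000, 12  # features standing in for age, labs, vitals, etc.
X = rng.normal(size=(n_patients, n_features))
# A noisy function of a few features stands in for the diagnosis label.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n_patients) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

The caveat hiding in “more records, better predictions” is that the new records have to be representative and correctly labeled; feeding a model more of the wrong data makes it confidently wrong rather than smarter.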

The effect of all of this, ultimately, is more personalized care for each patient. So what’s the downside?

Are there ethical or practical considerations to worry about?

Data security is one of the biggest practical considerations here — and one of the largest sources of worry for health care systems.

With only 22% of industry professionals saying their health care organization follows cybersecurity best practices, it’s easy to see that health care cybersecurity is far from perfect. Security researchers report that medical records often fetch more on the dark web than raw credit card numbers, because data thieves can use stolen patient data to forge prescriptions, place orders and file fraudulent insurance claims. And because data is the lifeblood of every AI system described above, those systems make particularly tempting targets for cybercriminals. Turning over significant portions of our health care apparatus to data-driven intelligence significantly raises the bar for keeping patient records safe from prying eyes.

Trusting health care companies to keep patient data safe is one thing. What about trusting the machines themselves?

In one survey, one-quarter of respondents said they would not trust artificial intelligence with their well-being, citing worries over how well an algorithm could actually “get to know them,” compared with a human physician.

Physicians and other health care decision-makers have worries of their own. Tech companies large and small, all over the world, believe they’ve found their own AI silver bullets for tackling diseases or shortcomings in our current approach to health care. But as with any venture of this magnitude, the science must come first — and that requires thorough clinical and peer-reviewed validation of any claims made about the effectiveness of an AI application.

In another survey, nearly one-third of physician respondents said they worried about the potential for “medical errors” in health care systems powered by AI decision-making. Even so, half of the respondents in that same study expressed confidence that AI would achieve real staying power in the health care industry.

The AI ‘Black Box’ and health care

Even with all of this potential for AI in medicine, more than one authority on the subject has expressed dismay that the inner workings of these intelligent algorithms closely resemble a “black box”: they often demonstrate uncanny accuracy and draw actionable conclusions, but the logic underpinning those decisions is frequently poorly understood, even by their designers. This cuts to the heart of our ongoing trust issues with AI. Moreover, a dismaying number of case studies reveal that AI tends to reproduce, rather than ameliorate, some of the more common human biases.
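
One common, though partial, response to the black-box problem is to probe a trained model from the outside, for example by measuring how much its held-out accuracy drops when each input feature is scrambled. The sketch below uses scikit-learn’s permutation_importance on synthetic data with made-up feature names; it illustrates one inspection technique, not a cure for opacity or bias.

```python
# Probing an opaque model from the outside with permutation importance.
# Synthetic data again; the feature names are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 4))
y = (X[:, 0] - X[:, 2] + rng.normal(scale=0.5, size=1000) > 0).astype(int)
feature_names = ["age", "blood_pressure", "cholesterol", "bmi"]  # placeholders

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
model = RandomForestClassifier(random_state=1).fit(X_train, y_train)

# How much does held-out accuracy drop when each feature is shuffled?
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=1)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```

Techniques like this can flag which inputs a model leans on, including possible proxies for the very biases described above, but they describe the model’s behavior rather than truly explain its reasoning.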

Even with these caveats noted, a measured, deliberate, and unimpeachably scientific method for developing and deploying AI in health care could be just what the system needs to deliver more care, to more people, more affordably than ever.