Artificial Intelligence: A Clinician’s Friend or Foe?

I am privileged to be a regular contributor to a local executive education course in healthcare innovation. I give these talks four or five times a year, and it has been fascinating over the last 18 months to watch how the questions from audience members have changed. The individuals who participate in this course are all accomplished and seasoned, and the course's move to a virtual format a year ago has allowed for a significant increase in international attendees. It feels like a snapshot of how the world is thinking about healthcare innovation on any given day.


[Photo: Joseph C. Kvedar, MD, giving a talk]

My most recent talk was on August 6. I always spend a few minutes on artificial intelligence (AI), as it represents such a vital part of the future of healthcare delivery. My traditional approach is to note how impressive it is that today's computers can sort through enormous data sets, cull out patterns, and use those patterns to predict the future, a task at which we human beings are particularly weak. I also mention that computers have shown no ability to achieve human traits such as judgment, caring, and emotional intelligence. I gave a TEDx talk on this topic a couple of years ago.


The TEDx talk was motivated by the hue and cry of many healthcare providers at the time, who made the case that AI was a threat to our livelihood. There was a notion afoot that if AI systems could make diagnoses, we physicians would be out of a job. A seminal paper in dermatology (my field) was published early in 2017, showing that an AI algorithm could diagnose melanoma as well as or better than a dermatologist. That shook up the field of dermatology a bit.


So, I suppose I should not have been surprised when one of the attendees at my August talk asked whether doctors should be afraid for their jobs because AI is coming. I answered politely (echoing my thoughts above) that computers can't do judgment, caring, and the like. He persisted, describing a visit to the headquarters of a large consumer electronics company based in Asia, where he witnessed a system that could diagnose nearsightedness and prescribe glasses without any human intervention.


That got me thinking more deeply about the issue. Many systems already exist that enable radiologists to triage images before a human reviews them. In addition, the product IDx-DR enables a non-ophthalmologist to identify patients with suspected diabetic retinopathy and refer them to an ophthalmologist. It is both FDA-approved and covered by a CPT reimbursement code.


I suppose that some clinicians (I say clinicians because the nearsightedness story would be the province of an optometrist) have been able to eke out a living simply by mechanically making diagnoses and prescribing the related therapies. That reduces healthcare to a sort of if-then logic. For some healthcare problems and some clinicians, I suppose this is the reality, and those individuals might be at risk of having some of their revenue replaced by AI systems. Without sounding too judgmental, I would point out that clinicians train for years to develop higher cortical skills that enable us to recognize nuance and apply judgment based on experience. If you have put that aside and chosen to think of your job as a series of primitive if-then statements, well…


Another controversy in the world of AI is explainability. The logic goes that more people would be comfortable with AI-generated guidance if only they could understand how these systems make their decisions. That one makes me chuckle. When I was an intern (in 1983) working in the cardiac care unit, one of the revered cardiologists used to predict the fate of patients in the unit by examining their earlobes. Scarily, he was right most of the time. We didn't demand explainability of him; instead, we respected what we understood to be experience and judgment.


It is just hard for us humans to accept that machines might do some tasks better than we can. But they can, especially when it comes to recognizing patterns in large data sets and using them to make predictions.


Reposted with permission from Piction Health, where I serve as an Advisor.