Growth of the Evidence Base for Digital Health Is Crucial for Adoption

“If current evidence is only followed by clinicians 60% of the time, why is it so important that we have good evidence for our interventions?” 

This was a question posed to me by my friend Bleddyn Rees as we prepared for a talk I was recently invited to give at the Digital Health Society Summit (where Bleddyn was the chair).  The initial idea was to focus on my role as Editor-in-Chief of npj Digital Medicine (a Nature Research journal) and explain why non-scientists should care about the evidence showing that various digital health interventions are effective in improving patients’ lives (or not).

I am often reminded of a quote attributed to David Sackett, often referred to as the “father of evidence-based medicine,” who said that when you graduate from medical school, half of what you know is wrong, but you don’t know which half.  I have lived through that in my career and can cite many examples of things we learned that have since been debunked in one way or another.

Thus, an ongoing examination of the logical underpinnings of clinical practice is healthy for the discipline of medical care and leads to better care decisions, improved lifespan, and better quality of life for our citizens. 

There continues to be significant skepticism regarding the effectiveness of digital health interventions, especially among clinicians.  When the American Medical Association surveyed its membership in 2016 regarding the slow adoption of digital health, four challenges came to the fore:  evidence, reimbursement, fear of liability, and workflow integration.

So, if you are building digital health products, you will likely need to convince physicians who care about evidence to use them. 

The challenge is that innovators/founders and managers of early-stage companies want to move as fast as possible to capture market share and build value.  Some of this is a simple entrepreneurial mindset, and some is pressure from early-stage investors who have logical reasons to want to see timely returns on their investments.

Evidence gathering is both time-consuming and relatively expensive.  Both facts discourage founders who want to move fast and use cash sparingly.

It is tempting to cut corners in evidence gathering. Many founders come from the tech world, where market research (focus groups, user experience surveys) and tools such as A/B testing can be done quickly and efficiently.  Digital health is different for the reasons noted above.

Whenever I have this discussion with non-scientists, two counterarguments usually come up:  1) common sense should be able to guide us, and 2) observing our interventions in the real world should be enough to generate confidence in their efficacy.  Let me address both of these briefly here.

For entrepreneurs to suggest that common sense can be our guide for adoption may be akin to letting the fox guard the henhouse.  We are all biased in many ways that can cloud our judgment, and founders, managers, and investors carry significant conflicts of interest that further undermine objectivity.

Observational studies are great for generating hypotheses to study later.  They cannot be considered definitive, though, for many reasons.  One is regression to the mean: when patients are selected because a measurement is extreme, repeat measurements tend to drift back toward the average, independent of any intervention.  Let’s say we have an intervention for type 2 diabetes and we follow HbA1c to measure its effect.  If we simply follow a group of patients with type 2 diabetes over time, the sickest will, on average, get somewhat better even with no intervention applied.  We could easily conclude that our intervention was effective when the change was simply regression to the mean.
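To make this concrete, here is a minimal simulation in Python.  Every number in it (the cohort average, the variability, the enrollment cutoff) is invented purely for illustration, but it shows how enrolling patients because of a high baseline HbA1c produces an apparent improvement with no treatment at all:

```python
# Toy simulation of regression to the mean.
# All numbers (cohort mean, variability, enrollment cutoff) are invented
# for illustration; they are not clinical data.
import random

random.seed(42)

N = 10_000
COHORT_MEAN = 7.5    # assumed long-term average HbA1c (%) across the cohort
BETWEEN_SD = 1.0     # assumed person-to-person variation
WITHIN_SD = 0.6      # assumed visit-to-visit biological/measurement noise

# Each patient's stable "true" HbA1c, plus two noisy visits with NO intervention.
true_a1c = [random.gauss(COHORT_MEAN, BETWEEN_SD) for _ in range(N)]
baseline = [t + random.gauss(0, WITHIN_SD) for t in true_a1c]
followup = [t + random.gauss(0, WITHIN_SD) for t in true_a1c]

# Enroll only the "sickest" patients: baseline HbA1c of 9.0% or higher.
enrolled = [i for i in range(N) if baseline[i] >= 9.0]
enrolled_base = [baseline[i] for i in enrolled]
enrolled_follow = [followup[i] for i in enrolled]

print(f"Enrolled patients: {len(enrolled)}")
print(f"Mean baseline HbA1c:  {sum(enrolled_base) / len(enrolled_base):.2f}%")
print(f"Mean follow-up HbA1c: {sum(enrolled_follow) / len(enrolled_follow):.2f}%  (no treatment given)")
```

With these made-up numbers, the enrolled group’s average HbA1c falls by roughly half a point between visits even though nothing was done, an improvement an unwary observer could credit to whatever intervention happened to be in place.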

Another issue prevalent in digital health studies is bias from the lack of blinding.  In drug trials, some patients receive a placebo and some the true intervention, and neither patients nor investigators know who got which.  Digital health interventions are far more multifaceted and complex, so it is impossible for patients not to know what they received.  This awareness (the Hawthorne effect) can skew the results of observational studies so that the intervention appears much more effective than it really is.
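As a rough sketch of that bias, the Python model below uses invented effect sizes to show how an awareness effect inflates the apparent benefit in an unblinded, single-arm observational study, while a blinded comparison against a sham arm cancels it out:

```python
# Toy model of bias from lack of blinding. TRUE_EFFECT and AWARENESS_EFFECT
# are invented numbers for illustration, not estimates from any study.
import random

random.seed(7)

N = 10_000
TRUE_EFFECT = -0.3       # assumed real change in HbA1c (%) caused by the intervention
AWARENESS_EFFECT = -0.4  # assumed extra change from knowing you are being treated/observed
NOISE_SD = 0.6

def mean(xs):
    return sum(xs) / len(xs)

def change_in_a1c(treated: bool, aware: bool) -> float:
    delta = random.gauss(0, NOISE_SD)
    if treated:
        delta += TRUE_EFFECT
    if aware:
        delta += AWARENESS_EFFECT
    return delta

# Unblinded, single-arm observational study: everyone is treated and knows it.
observational = [change_in_a1c(treated=True, aware=True) for _ in range(N)]

# Blinded comparison: both arms know they are in a study, so the awareness
# effect hits both equally and cancels in the between-arm difference.
active = [change_in_a1c(treated=True, aware=True) for _ in range(N // 2)]
sham = [change_in_a1c(treated=False, aware=True) for _ in range(N // 2)]

print(f"Observational estimate of effect: {mean(observational):+.2f}% HbA1c")
print(f"Blinded estimate of effect:       {mean(active) - mean(sham):+.2f}% HbA1c")
```

With these invented numbers, the single-arm study attributes the full -0.7% change to the product, while the blinded contrast recovers only the -0.3% the intervention was assumed to actually cause.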

Finally, there is the trap of mistaking correlation for causation.  The history of science offers many fun examples.  One of my favorites: we call the weeks between early July and mid-August the “Dog Days” because the Romans observed that the dog star, Sirius, rises alongside the sun during those weeks.  They concluded that the star supplied the extra heat that made those weeks hotter than others.  Somehow it did not occur to them that the sun alone could be providing the radiant energy behind hot summer days.

By now, I hope I’ve convinced you that proper clinical research is essential in digital health and that common sense and observations are not good guides for documenting the truth.

In my role as Immediate Past Chair and Senior Advisor at the American Telemedicine Association (ATA), I often work to address myths and falsehoods about digital health and its effectiveness. For this reason, the ATA is reestablishing the Center for Applied Research in Telehealth (CART), and I’m pleased to be heading up this effort. With initial funding from the David M.C. Ju Foundation, we will soon launch our first publication outlining best practices, current laws, regulations and policies, and infrastructures to optimize care delivery via telehealth. In my next post, I’ll dig deeper into the pros and cons of different types of clinical research and share some examples from the recent literature that can make a big difference in telehealth adoption.