Headlines, Heuristics and Subtlety in Interpreting Connected Health Studies

We live in a headline/hyperlinked world.  A couple of years back, I learned through happenstance that my most popular blog posts all had catchy titles.  I’m pretty confident that people who read this blog do more than scan the titles, but there is so much information coming at us these days that it’s often difficult to get much beyond the headline.  Another phenomenon of information overload is that we naturally apply heuristics, or shortcuts in our thinking, to avoid dealing with a high degree of complexity.  Let’s face it: it’s work to think!
In this context, I thought it would be worth talking about two recent headlines that seem to be setbacks for the inexorable forward march of connected health.  These come in the form of peer-reviewed studies, so our instinct is to pay close attention.
In fact, one comes from an undisputed leader in the field, Dr. Eric Topol.  His group recently published a paper in which they examined the utility of a series of medical/health tracking devices as tools for health improvement in a cohort of folks with chronic illness.  In our parlance, they put a feedback loop into these patients’ lives.  It’s hard to say for sure from the study description, but it sounds like the intervention was mostly about giving patients insights from their own data.  I don’t see much in the paper about coaching, motivation, etc.
If it is true that the interactivity/coaching/motivation component was light, that may explain the lackluster results.  We find that feedback loops alone are relatively weak motivators.  It is also possible that, because the sample included a mix of chronic illnesses, a positive effect would be harder to detect.  One principle of clinical trial design is to minimize all variables between the comparison groups except the intervention.  A group with varying diseases makes it harder to say for sure that any effects (or lack thereof) were due to the intervention itself.
Dr. Topol is an experienced researcher and academician.  When he and his team designed the study, I am confident they had the right intentions in mind.  My guess is they felt they were studying the effect of mobile health and wearable technology on health (more on that at the end of the post).  But you can see that, in retrospect, the likelihood of teasing out a positive effect was relatively low.
The other paper, from JAMA Internal Medicine, reported on a high-profile trial for congestive heart failure, which involved telemonitoring plus a nurse call center intervention after discharge.  This trial included a large sample size and was published in a well-respected and well-read journal.  On initial reading, it was less clear to me why they did not see an effect.  I had to read thoroughly, way beyond the headline, to get an idea.  The authors, in the discussion section, provide several thoughtful possibilities.
One that jumps out at me is that the intervention was not integrated into the physician practices caring for the patients.  In our experience with CHF telemonitoring, it is crucial that the telemonitoring nurses have both access to the physician practices and the trust of the patients’ MDs.  Sometimes a simple, timely medication change can prevent a readmission.  This requires speedy communication between the telemonitoring nurse and the prescribing physician.  If that connection can’t be made, the patient may wind up in the emergency room and the telemonitoring is for naught.
It is also fascinating that the authors point out that adherence to the intervention was only about 60%.  This reminds me of another high-profile paper from 2010 that came to the conclusion that telemonitoring for CHF ‘doesn’t work.’  I blogged on that at the time, pointing out that their adherence rate was 50%.  In both cases, with such low adherence, it is not surprising that no effect was noted.
In our heart failure program, adherence is close to 100%.  As a result, our readmission rate (both all-cause and CHF-related) is consistently about 50% lower, and we showed that our intervention is correlated with a 40% reduction in mortality over six months.  The telemonitoring nurses from Partners HealthCare at Home cajole the patients in the most caring way, and patients are therefore quite good at sending in their daily vitals.  If they don’t, the nurses call to find out why.  Our program is also tightly aligned with the patients’ referring practitioners.  I suspect these two features are important in explaining our outcomes.
A prime example of how these study headlines can derail the advancement of connected health was captured in an email I received the other day from my good friend Chris Wasden.  Referring to the JAMA Internal Medicine study, he said, “Our docs are using this research to indicate they should not waste their time on digital health.”
Perhaps a spirited discussion over some of these nuances may change some minds.
And that leads me again to the concept of headlines and heuristics.  How could ‘telemonitoring’ in CHF lead to such disparate results?  Is our work wrong? Spurious?  I don’t believe so.  Rather, I think we’ve collectively fallen into a trap of treating ‘mobile health’ and ‘telemonitoring’ as monolithic things when, as you can see, these interventions are designed quite differently.
I believe we are susceptible to this sort of confusion because we are applying a heuristic.  We are used to reading about clinical trials for new therapeutics or devices.  A chemical is a chemical and a device is a device.  In a pure setting, when applied to a uniform population of individuals, a chemical either has an effect or it doesn’t.  Connected health interventions, by contrast, are multifaceted and complex.  Hence the apparent contradiction that telemonitoring works in our hands but not in the recent JAMA Internal Medicine paper.
My conclusion is that the next phase of research in this area should move away from testing technologies.  Instead, we should focus on teasing out the design aspects that predict an intervention’s success.  Now I think that’s a good headline!
I’ll start out by offering two hypotheses:

  1. mHealth interventions that are separate and distinct from the patient’s ongoing care process are less likely to be successful than those that are integrated.
  2. If adherence to a program is low, it will not be successful. Early-phase, pre-clinical-trial testing of interventions should include work to fine-tune design features that promote adherence. Chapter 8 of The Internet of Healthy Things offers some ideas on this.

As I said five years ago, I’m not sure intention-to-treat analysis is the right way to evaluate connected health interventions. If patients are non-adherent to the intervention, is it any surprise that they don’t respond? I’m having trouble wrapping my head around that one.
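To make the adherence point concrete, here is a back-of-the-envelope sketch in Python.  The numbers are entirely hypothetical (a 30% readmission rate with usual care, 20% among patients who actually use telemonitoring), chosen only to illustrate the arithmetic, and it makes the simplifying assumption that non-adherent patients experience the usual-care event rate.

```python
# Back-of-the-envelope: how low adherence dilutes an intention-to-treat result.
# All rates below are hypothetical, chosen only to illustrate the arithmetic.

def itt_event_rate(control_rate, adherent_rate, adherence):
    """Observed event rate in the treatment arm under intention-to-treat,
    assuming non-adherent patients experience the control event rate."""
    return adherence * adherent_rate + (1 - adherence) * control_rate

control_rate = 0.30   # hypothetical readmission rate with usual care
adherent_rate = 0.20  # hypothetical rate among patients who actually use telemonitoring

for adherence in (1.0, 0.6, 0.5):
    observed = itt_event_rate(control_rate, adherent_rate, adherence)
    relative_reduction = (control_rate - observed) / control_rate
    print(f"adherence {adherence:.0%}: observed rate {observed:.2f}, "
          f"relative reduction {relative_reduction:.0%}")

# adherence 100%: observed rate 0.20, relative reduction 33%
# adherence 60%:  observed rate 0.24, relative reduction 20%
# adherence 50%:  observed rate 0.25, relative reduction 17%
```

Under those assumptions, a true 33% relative reduction shrinks to about 20% at 60% adherence and 17% at 50% adherence.  A diluted effect of that size can easily fall below a trial’s power to detect, which is exactly the pattern in the two ‘negative’ telemonitoring studies above.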