The Tone of Your Voice Holds the Secret to Moving Connected Health to the Next Level

Lately, I’ve been thinking quite a bit about what I consider to be an urgent priority — moving from the antiquated, one-to-one model to efficient, time- and place-independent care delivery. I’d like to opine about the topic here, but first need to present three bits of context to set up this post.

Think about your interaction with a doctor. The process is as old as Galen and Hippocrates. You tell the doctor what is bothering you. She asks a series of questions. She gathers information from the physical exam. Not only the obvious things, like heart and lung sounds, but how you look (comfortable or in distress), your mood, the strength of your voice when you talk, the sound of your cough (if you have one) and your mental state. Armed with this data, she draws a diagnostic conclusion and (presuming no further testing is needed) recommends a therapy and offers a prognosis. For the better part of the last quarter century, I’ve been exploring how best to carry out this process with participants separated in space and sometimes in time. The main reason for this is noted in the next two bits of context, below.
There are two big problems with healthcare delivery as it works today. The first is that, in the US at least, we spend too much money on care. The details here are familiar to most…20% of GDP…bankrupting the nation, etc. The second is that we stubbornly insist that the only way to deliver care is the one-to-one model laid out above. The fact is, we’re running out of young people to provide care for our older citizens. This is compounded by the fact that, as we age, we need more care. By 2050, 16% of the world’s population will be over 65, double the number of children under five. More detail is laid out in my latest book, The New Mobile Age: How Technology Will Extend the Healthspan and Optimize the Lifespan. We need to move to one-to-many models of care delivery.

Efficiency is a must, and one-to-one care is very inefficient. Essentially, every other service you consume — from banking, shopping and booking a vacation to hailing a taxi — is now provided in an online or mobile format.  It’s not just easier for you to consume it that way, but it’s more efficient for both you and the service provider.
If you can accept my premise that we need to move to efficient, time- and place-independent care delivery, the next logical step is to ask: how are we doing in this quest so far?
We’ve employed three strategies and, other than niche applications, they are all inadequate to get the full job done. The one most loved by today’s clinicians is the video visit. With the exception of mental health and neurological applications, video visits have a very limited repertoire. We stumble over basic symptoms like sore throat and earache because a video interaction still lacks the critical examination findings that conversation alone can’t supply. The second strategy is to take the interaction into an asynchronous environment, analogous to email. This is time- and place-independent, so it has the potential to be efficient, but it carries even less nuance than a video conversation. This modality is also limited in scope to a narrow set of follow-up visits. In some cases, patients can upload additional data such as blood sugar readings, weight or blood pressure, and that increases the utility somewhat.

The third modality is remote monitoring, where patients capture vital signs and sometimes answer questions about how they feel. The data is automatically uploaded to give a provider a snapshot of that person’s health. This approach has shown early success with chronic conditions like congestive heart failure and hypertension. It is efficient and, if the system is set up properly, it allows for one-to-many care delivery.
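
To make the one-to-many idea concrete, here is a minimal sketch of the kind of triage pass a monitoring program might run over a day’s uploads. Everything in it (the thresholds, the data shape, the field names) is invented for illustration; it is not any vendor’s actual logic.

```python
# Illustrative sketch only: a one-to-many triage pass over uploaded vital signs.
# Thresholds, field names and data shapes are hypothetical, not any vendor's rules.
from dataclasses import dataclass

@dataclass
class VitalsReading:
    patient_id: str
    systolic_bp: int    # mmHg
    heart_rate: int     # beats per minute
    weight_kg: float

def flag_for_review(readings, prior_weights):
    """Return only the patients whose uploads cross simple (made-up) thresholds,
    so one clinician reviews a short exception list instead of every reading."""
    flagged = []
    for r in readings:
        weight_gain = r.weight_kg - prior_weights.get(r.patient_id, r.weight_kg)
        if r.systolic_bp >= 160 or r.heart_rate >= 110 or weight_gain >= 2.0:
            flagged.append(r)
    return flagged

if __name__ == "__main__":
    yesterday = {"pt-001": 81.0, "pt-002": 70.5}
    today = [
        VitalsReading("pt-001", 148, 88, 83.4),  # ~2.4 kg overnight gain: flagged
        VitalsReading("pt-002", 122, 72, 70.3),  # unremarkable: not flagged
    ]
    for r in flag_for_review(today, yesterday):
        print(f"Review {r.patient_id}: BP {r.systolic_bp}, weight {r.weight_kg} kg")
```

The point is the shape of the workflow: one clinician, many patients, attention directed only where the data says it is needed.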
As a telehealth advocate, I am compelled to remind you that each of these approaches has shown success and gained a small following. We celebrate our successes. But overall, the fraction of care delivered virtually is still vanishingly small, and each of these methods has more exceptions than rules.
Remote monitoring is the right start to achieving the vision noted above. It is efficient and allows for one-to-many care delivery. But currently, all we can collect is vital signs, which represent a really small fraction of the information a doctor collects about you during an office visit. So while we can collect a pretty good medical history asynchronously (we now have software that uses branching logic, so it can be very precise) and we can collect vital signs, for years I’ve been on the lookout for technologies that can fill in some of the other gaps in the data collected during the physical exam. To that end, I want to highlight three companies whose products are giving us a first glimpse of what that future might look like. Two of them (Sonde Health and Beyond Verbal) are mentioned in The New Mobile Age, and the third, ResApp, is one I just became familiar with. It is exciting to see this new category developing, but because they are all early stage, we need to apply a good bit of enthusiasm and vision to imagine how they’ll fit in.
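
As an aside, the branching-logic history-taking I mentioned is easier to picture with a small example. Here is a minimal sketch of the idea; the questions, branches and answers are all invented, and real intake software is far richer than this.

```python
# Illustrative sketch only: a branching-logic symptom questionnaire modeled as a
# tiny decision tree. Questions and branches are invented for illustration.
QUESTIONS = {
    "start":    {"text": "Do you have a cough?", "yes": "duration", "no": "done"},
    "duration": {"text": "Has it lasted more than three weeks?", "yes": "weight", "no": "fever"},
    "weight":   {"text": "Have you lost weight without trying?", "yes": "done", "no": "done"},
    "fever":    {"text": "Have you had a fever over 38 C?", "yes": "done", "no": "done"},
}

def take_history(answers):
    """Walk the tree using pre-supplied yes/no answers (as an asynchronous intake
    form would), recording only the questions that were actually asked."""
    node, transcript = "start", []
    while node != "done":
        question = QUESTIONS[node]
        reply = answers.get(node, "no")
        transcript.append((question["text"], reply))
        node = question[reply]
    return transcript

# A patient with a short-lived cough never sees the weight-loss question.
for text, reply in take_history({"start": "yes", "duration": "no", "fever": "yes"}):
    print(f"{text} -> {reply}")
```

The precision comes from the branching itself: each answer prunes away whole lines of questioning, which is exactly what a good clinician does at the bedside.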
ResApp has a mobile phone app that analyzes the sound of your cough to predict what respiratory illness you have. This is not as far-fetched as it sounds. I remember salty, seasoned clinicians who could do this when I was a medical student. They’d listen to a patient with a cough and predict accurately whether they had pneumonia, asthma, heart failure, etc. ResApp, Australian by birth, says it can do this and has completed a good bit of clinical research on the product in its home country. The company is now conducting its US-based trials. Stay tuned. If it works as advertised, we can increase the value of that video visit (or the asynchronous data exchange) by a whole step function. Imagine the doctor chatting with you on video and a reading pops up on her screen that says, ‘According to the analysis of this cough, the patient has a 90% chance of having community-acquired pneumonia and it is likely to be sensitive to a course of X antibiotic.’ Throw in drone delivery of medication on top of e-prescribing and we really could provide quality care to this individual in the confines of their home, avoiding the high-cost parts of the healthcare system.
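
To picture how a score like that might reach the clinician’s screen, here is a toy sketch. The acoustic features, model weights and alert threshold are all made up; this is not ResApp’s algorithm, just the shape of the idea.

```python
# Illustrative sketch only: turning a (hypothetical) cough-analysis score into the
# kind of on-screen message described above. Features, weights and the threshold
# are invented; this is not ResApp's model.
import math

def pneumonia_probability(cough_features):
    """Toy logistic model over made-up acoustic features of a recorded cough."""
    weights = {"crackle_index": 2.1, "cough_wetness": 1.4, "duration_sec": 0.3}
    bias = -3.0
    score = bias + sum(weights[k] * cough_features.get(k, 0.0) for k in weights)
    return 1.0 / (1.0 + math.exp(-score))

def screen_message(cough_features, alert_threshold=0.8):
    p = pneumonia_probability(cough_features)
    if p >= alert_threshold:
        return (f"According to the analysis of this cough, the patient has a "
                f"{p:.0%} chance of having community-acquired pneumonia.")
    return f"Cough analysis: pneumonia probability {p:.0%}; no alert raised."

print(screen_message({"crackle_index": 1.8, "cough_wetness": 1.2, "duration_sec": 2.5}))
```

However the real model works, the clinical workflow is the same: the analysis runs quietly alongside the visit and surfaces only when it has something useful to say.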
Similarly, Israel-based Beyond Verbal has shown — in collaboration with investigators at the Mayo Clinic, no less — that the tone of your voice changes in a predictable way when you have coronary heart disease. Same scenario as above, but substitute heart disease for pneumonia. And then there is Sonde, whose algorithms are at work detecting mental illness, once again from the tone of recorded voice. As William Gibson said, “The future is here. It is just not evenly distributed.”
We are a ways away from realizing this vision. All of these companies (and others) are still working out kinks, dealing with ambient noise and other challenges. But the fact that they have all shown these interesting findings is pretty exciting. It seems predictable that companies like this will eventually undergo some consolidation and that, with one smartphone app, we’ll be able to collect all kinds of powerful data. Combine that with the ability to compile a patient’s branched-logic history and the vital signs we routinely collect, and we can start to envision a world where we really can deliver most of our medical care in a time- and place-independent, efficient manner.
Of course, it will never be 100% and it shouldn’t be.  But if we get to a point where we reserve visits to the doctor’s office for really complex stuff, we will certainly be headed in the right direction.