Emotional Automation Revisited

Last week we all watched in awe as the IBM computer, Watson, trounced two of Jeopardy’s finest. This event has been much heralded, but it is worth pausing for a minute to reflect on the experience of watching Jeopardy those three nights. I had no trouble rooting for Watson, feeling disappointed or embarrassed when he missed a question and chuckling when he displayed any behavior that seemed the least bit human. I knew the whole time, on one level, that Watson is a computer. On another level, though, I bonded with him and felt a good deal of emotion about his success.

MIT Prof. Sherry Turkle recently released a book entitled Alone Together, and she was also interviewed recently on TechCrunch. Turkle puts forth the view that technology is a poor substitute for interaction with a human being. She notes, however, that when technologies (robots, relational agents and the like) respond to us, they push “Darwinian buttons,” prompting us to create a mental construct that we are interacting with a sentient being. This brings a host of emotions to the communication, including affection. Turkle argues that, in the realm of human relationships, this phenomenon is unhealthy for our species.

I’d like to bring in principles from behavioral psychologist Robert Cialdini, who has authored several books on the psychology of persuasion. Cialdini offers simple tools that can be used in everyday life to persuade others to adopt one’s point of view, and he lays out solid experimental evidence that these tools are effective, in most cases without the recipient being aware. The six principles are:

  • Reciprocation (we feel obligated to return favors performed for us)
  • Authority (we look to experts to show us the way)
  • Commitment/consistency (we want to act consistently with our commitments and values)
  • Scarcity (the less available the resource, the more we want it)
  • Liking (the more we like people, the more we want to say yes to them)
  • Social proof (we look to what others do to guide our behavior)

Before I combine the thinking of these two individuals, remember: there is a supply and demand problem in healthcare delivery. In the U.S., we have 24 million people with diabetes, and their ranks grow at 8% per year. One in three people over the age of 20 has high blood pressure, and one in 10 over 65 has congestive heart failure. As it stands in 2011, we already have shortages of both physicians and nurses. We must look for ways to deliver care that do not involve an intense one-on-one relationship with a healthcare provider, at least not for every healthcare decision that is made.

So here is my idea: can we take advantage of some of those “Darwinian buttons” that deceive us into believing we’re interacting with a person rather than a technology, and combine them with some of Cialdini’s persuasive techniques, in the hope of delivering compelling, motivational health-related messaging to individuals?

We have evidence that this approach can be effective. In a study conducted at the Center for Connected Health, where we measured adherence to activity goals, individuals who had three-times-a-week meetings with a computerized relational agent (courtesy of Tim Bickmore, Northeastern University) had almost three times the adherence to their step-count goals as those in a control group.

My question is this: how unhealthy is it for us to leverage the power of technologies such as robots to ‘push Darwinian buttons’ in the context of healthcare, where we already don’t have enough providers to go around and the evidence is strong that the situation will get much worse? Can we spread our limited provider resources over more patient demand by using these technologies? Will we feel cared for and have better outcomes?

I believe that with properly architected systems, these goals can be met.