Cognitive Technology for Goal-Driven Healthy Habits: An Intelligent Systems Approach

I am truly delighted to do my research in preparation for today’s #HITsm tweetchat with . I have such fond memories of her marathon Blabs (3, 4, 5 hours?). The topic of today’s chat came up frequently: How Technology Helps and Hurts Healthy Behavior Change.

I usually introduce myself as an industrial engineer who went to medical school, hence my interest in healthcare workflow and workflow technology. I don’t often mention that after medical school I studied Cognitive Science, which was one half of my MS in Intelligent Systems (the other half being Artificial Intelligence). The CogSci portion included psychology, linguistics, philosophy, and neuroscience. As a graduate student, paying my way, I worked on computer models of aphasia, dementia, and depression. I even spent a week in Philadelphia studying with the founder of cognitive therapy, Aaron Beck.

I took a look at using tech to change patient behavior from the point of view of a cognitive scientist. In doing so I hit on this paper (full text freely available!), Habits, Action Sequences, and Reinforcement Learning. It summarizes and synthesizes a number of topics I studied decades ago, topics I feel are relevant to using tech to move human behavior away from the unconstructive and toward the constructive.

Believe it or not (and I suspect you’ll believe it, given my workflow-centric reputation), there is a workflow angle. A workflow is a sequence of actions, consuming resources, achieving goals. Humans evolved from more basic animals. These animals exhibit what may be thought of as instinctive workflows. A fixed action pattern (FAP) is a species-specific, characteristic sequence of behaviors (actions), which, once triggered, runs to completion. For example, if an egg is displaced from a nest, certain geese will roll the egg back into the nest, even if the egg somehow magically disappears: they continue to maneuver the imaginary egg back into the nest. The animal kingdom is rife with FAPs. We even know a lot about the neural networks that generate FAP behavior.

What do FAPs have to do with human behavior? Well, FAPs are a lot like habits: a sequence of behaviors, automatically executed, in the presence of some “releaser”. They happen automatically, seemingly without purposeful or mindful control. Of course, unlike FAPs, human habits are not instinctive. Through a variety of techniques, we can break old habits and create new ones. However, doing so is difficult! This is where technology comes in.

However, before we get to how technology might be useful in this respect, it’s useful to have a model of what is going on inside our heads. The degree program I mentioned, Intelligent Systems, viewed robots, software artificial intelligences, humans, and even some animals as “intelligent systems” that, to varying degrees, shared certain properties and characteristics, including perception, memory, action, reasoning, and learning. Further, intelligent systems research combined techniques from cognitive science (psychology, linguistics, neuroscience, philosophy) with artificial intelligence and machine learning to actually create computer simulations of these intelligent agents, to better understand them. We’d create software simulating them, and then we’d conduct experiments, comparing their behavior in response to manipulated environmental stimuli to that of intelligent agents in the real world. Sometimes we’d even “break” the intelligent agents, to try to simulate mental and neurological disease. As I mentioned previously, I worked on a variety of such projects, from aphasia (language difficulties) and dementia (memory, reasoning, personality) to depression (where I actually published a number of papers!).

All of that, and it is a lot of personal history, is backdrop for what I will do next, which is describe the human mind as if it were a computer-simulatable intelligent system, with an eye toward thinking about changing bad habits into good habits.

The Habits, Action Sequences, and Reinforcement Learning paper describes an intelligent system in which there are two complementary but also competing information processing modules. One module is “closed-loop,” meaning it has a model of the world, and within that model it behaves (acts on its world) to move the world toward a preferred goal state. The perceiving-reasoning-acting loop is closed in the sense that the difference between the current world state and the preferred goal state is continually fed back to the intelligent system, so it can continually choose actions that will eventually achieve its goals.
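The closed loop idea is simple enough to sketch in code. Here is a minimal toy model of mine (the numeric “world,” the action set, and all names are illustrative assumptions, not anything from the paper): perceive the current state, compare it to the goal, and pick whichever action most reduces the difference.

```python
# Toy closed loop module: the gap between current state and goal state is
# fed back on every step, and the next action is chosen to shrink it.

def closed_loop(world, goal, actions, distance, max_steps=100):
    """Repeatedly pick the action that moves the world closest to the goal."""
    for _ in range(max_steps):
        if world == goal:
            break  # goal state achieved; stop acting
        # Feedback: evaluate every candidate action against the goal state.
        world = min((act(world) for act in actions),
                    key=lambda w: distance(w, goal))
    return world

# Toy world: a single number we want to drive to 10.
actions = [lambda w: w + 1, lambda w: w - 1, lambda w: w]
result = closed_loop(0, 10, actions, distance=lambda w, g: abs(w - g))
```

The point of the sketch is the feedback: the `distance` to the goal is recomputed after every action, which is exactly what the open loop module, below, does not do.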

Contrast the above with the second behavior module. This module is similar to a fixed action pattern. It has a set of “hardcoded” workflows, sequences of behavior, which, once triggered, execute from beginning to end, without reference to whether they move the world from a bad (less preferred) state to a good (more preferred) state. The great thing about these automated personal workflows is that they are fast, consistent, and require no thought. The bad thing about these automated personal workflows is that they are fast, consistent, and require no thought. If you change the environment, “good” habits can become “bad” habits.
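The open loop module can be sketched the same way. Here a habit is just a stored action sequence keyed by its releaser; once triggered it runs to completion with no feedback from the world (the habit and trigger are hypothetical examples of my own):

```python
# Toy open loop module: a "hardcoded" sequence that, once its releaser
# fires, executes from beginning to end regardless of the world state.

HABITS = {
    "smell of coffee": ["walk to kitchen", "pour coffee", "add sugar"],
}

def open_loop(stimulus, performed):
    """If the stimulus matches a releaser, run the whole sequence blindly."""
    sequence = HABITS.get(stimulus, [])
    for action in sequence:  # no per-step check that the action still helps
        performed.append(action)

performed = []
open_loop("smell of coffee", performed)
```

Like the goose rolling the imaginary egg, the sequence runs to its end even if the world changed halfway through.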

The two systems can profitably work together. When one’s environment changes, fall back on the closed loop, thoughtful, goal-oriented behavior. Over time, find new personal workflows that work, then turn them into open loop, fast, consistent, “thoughtless” workflows. This frees up the closed loop goal-oriented system to focus on other, higher level, more strategic issues. Also, you can think of an intelligent agent as having different bundles of related workflows for different environments. As it moves through these environments, different clumps of workflow potential become active. Let’s suppose an intelligent agent has about a dozen different environments it frequently or occasionally needs to navigate. Eight or nine may be stable, and the current open loop personal workflows perfectly appropriate. However, several environments may be problematic, so our closed loop problem solving systems focus there. Over time, as our different occasionally frequented environments change, each is dealt with in turn, converted from open loop to closed loop and back to open loop personal workflows. But imagine if all your environments change at once! That is indeed stressful, and even your wonderful dual system, the open and closed loop partnership, can be overwhelmed!

On a moment-by-moment basis, current thinking is that these two, open loop and closed loop, modules compete with each other. Consider the following quote:

“some have suggested that these processes may compete for access to the motor system…. in which the goal-directed and the habitual systems work in parallel at the same level, and utilize a third mechanism, called an arbitration mechanism, to decide whether the next action will be controlled by the goal-directed or the habit process”
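That arbitration idea can be sketched as a third function sitting between the two systems. The confidence-based rule below is purely my own illustrative assumption, not a claim about the actual neural mechanism the paper discusses:

```python
# Both systems propose a next action in parallel; an arbitration
# mechanism decides which one gets access to the motor system.

def arbitrate(habit_proposal, goal_proposal):
    """Each proposal is an (action, confidence) pair; higher confidence wins."""
    winner = max(habit_proposal, goal_proposal, key=lambda p: p[1])
    return winner[0]

# In a familiar, stable environment the fast habit system typically
# outbids the slower goal-directed system.
chosen = arbitrate(("reach for chips", 0.9), ("make a salad", 0.4))
```

The interesting design question, for brains and for behavior-change tech alike, is what shifts those confidences so the goal-directed proposal starts winning.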

So, now let’s think about how technology might be used to help these two, open loop and closed loop, systems work together.

Let’s consider the open loop personal workflow system. How might we extinguish its highly automated responses, in preparation for instituting new, healthier responses?

  1. Prevent the workflows from being triggered in the first place.
  2. Detect when the workflows are executing and disrupt them.
  3. Emphasize the negative consequences of these workflows running to completion.

This last device is interesting because it is essentially attempting to convert open loop behavior into closed loop behavior.

I can imagine technology being used in all three ways.

  1. Don’t go there! (You know what always happens if you do…)
  2. Look! Squirrel!
  3. Ouch. Be honest with yourself. That hurt! (But also be constructive: give yourself a brief scold, and lay plans to avoid triggering similar future behaviors, or at least figure out how to stop one if it gets started.)

At the same time we are trying to hobble destructive open-loop personal “workflows,” we need to enable constructive closed-loop personal workflows.

  1. Make the future preferred world goal state particularly vivid.
  2. Figuring out how to solve new problems, or old problems in new ways, is hard. Provide help.
  3. Once you find a tentative solution, capture it! Institutionalize it in some way, to make it more likely to execute as open loop behavior than the old destructive open loop behavior.

Regarding the arbitration mechanism: both the open loop and closed loop personal workflow systems spring into operation, race along in parallel, and then demand to be given control. In this last regard, a basic insight is this. One way to become more “meta-cognitive” is to have some sort of model of yourself. This model can be used to explain and understand, and to guide what to do. I think this model of yourself as an intelligent system is eminently teachable and learnable. In fact, cognitive therapy works a bit like this. One of its goals is to get you to think like a “personal scientist”. Scientific thinking involves weighing evidence and conducting experiments. Simply viewing yourself as a “scientist” is itself esteem elevating. I think something similar might be true of viewing yourself as an intelligent system.

Anyway, back to what technologies could be useful.

The stimuli that trigger personal workflows are often spatially and temporally circumscribed and specific. Here wearables and the Internet of Things can be the eyes and ears of a system to detect that you may be heading into a bad-workflow, stimulus-rich environment. If a bad workflow can’t be avoided, and starts to execute, workflow execution itself can be detected. (This is currently an active area of artificial intelligence and machine learning research: recognizing which goals, plans, and workflows of an intelligent system are currently active.) Once the bad workflow is detected, mid-execution, send notifications, call someone to call you, ring the fire alarm, whatever it takes (no, don’t ring the fire alarm unless there is a fire, but you know what I mean!)
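A toy version of that recognition step: match the stream of sensed actions against a known bad sequence, and alert once enough of its prefix has been observed. The workflow and threshold here are hypothetical, and real plan recognition research is far more sophisticated than prefix matching:

```python
# Detect a "bad" workflow mid-execution by matching observed actions
# against the known habit sequence, step by step.

BAD_WORKFLOW = ["open pantry", "grab chips", "sit on couch"]

def bad_workflow_active(observed, threshold=2):
    """Return True once observed actions match a long-enough prefix."""
    matched = 0
    for seen, expected in zip(observed, BAD_WORKFLOW):
        if seen != expected:
            break
        matched += 1
    return matched >= threshold

# Two matching steps in: time to send that notification.
alert = bad_workflow_active(["open pantry", "grab chips"])
```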

And if, heaven forbid, that bad-bad-bad personal workflow can’t be prevented… document it. And do so in such a way that the next time it can be held up and waved in front of the intelligent agent… NO, YOU REALLY DON’T WANT *THAT* TO HAPPEN AGAIN, DO YOU?

Relative to closed-loop problem solving and workflow creation, preferred workflow goal states might be vividly represented using virtual or augmented reality. (THIS is what you’ll look like in that bathing suit; this is what it will feel like when you walk across that graduation stage!)

Relative to helping to find new workflows that work, that’s what many workflow and task management systems do. They help manage potentially useful tasks, string them into candidate workflows, and then, when the workflows are executed, keep track of state (success, in progress, timed out, failed, escalated, etc.).
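That state tracking can be sketched as a small state machine. The states come from the list above; the allowed transitions are my own illustrative guesses, not any particular product’s rules:

```python
from enum import Enum

class TaskState(Enum):
    PENDING = "pending"
    IN_PROGRESS = "in progress"
    SUCCESS = "success"
    TIMED_OUT = "timed out"
    FAILED = "failed"
    ESCALATED = "escalated"

# Which states a task may legally move to next.
ALLOWED = {
    TaskState.PENDING: {TaskState.IN_PROGRESS},
    TaskState.IN_PROGRESS: {TaskState.SUCCESS, TaskState.TIMED_OUT,
                            TaskState.FAILED},
    TaskState.TIMED_OUT: {TaskState.IN_PROGRESS, TaskState.ESCALATED},
    TaskState.FAILED: {TaskState.ESCALATED},
}

def advance(current, nxt):
    """Move a task to a new state, rejecting illegal transitions."""
    if nxt not in ALLOWED.get(current, set()):
        raise ValueError(f"cannot go from {current.value} to {nxt.value}")
    return nxt

state = advance(TaskState.PENDING, TaskState.IN_PROGRESS)
state = advance(state, TaskState.SUCCESS)
```

Making the states explicit is what lets such a system notice a stalled or failed workflow and escalate, rather than letting it silently fizzle.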

Finally, once you find new workflows that work, you need to move that insight and actionability down into the system that senses whether you are in danger of executing one of the bad-bad-bad workflows and offers a different, more constructive workflow instead. Increasingly, the digital devices we interact with are aware of each other and work together. They will talk to your fridge and your minibar. They will, if necessary, act on your behalf, perhaps even stepping in to literally prevent you from doing what you are about to do.

Yeah, scary. But also, possibly, fascinating, in a positive and constructive sense.

A lot of the technologies I just listed already exist in bits and pieces. Some are already being woven together, to act in a purposeful and useful manner, at our behest, to help break and make personal workflow habits. In a sense, there will be (at least) two intelligent systems: you and the system you create around you.

