I was actually present, in a down-the-hall, occasional-beer-together sort of way, at the baby steps of deep learning, a branch of machine learning, itself a branch of artificial intelligence (one of my graduate degrees). I was taking a graduate course in connectionist models and neural networks at CMU. Some students worked on neural networks with hidden layers. Several worked on recurrent neural networks, networks whose outputs feed back into their inputs. And some of this work laid the foundations for today’s deep learning success.
Hidden layers are necessary to learn certain abstract concepts (such as Exclusive-OR). Recurrence is necessary to learn certain patterns of behavior over time (such as sentence structure or, I speculate, workflows!). I even built neural network models of mental illness for one of my research projects (Computer Modeling of Adaptive Depression and Asymmetric Hemispheric Processing, 1996). Some aspects of neural network optimization (that’s what learning is, in this model) resemble techniques I learned about in my industrial engineering courses.
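Here’s a minimal NumPy sketch (my own illustration, not from that course) of why the hidden layer matters: a single-layer perceptron cannot represent XOR, but one hidden layer trained by gradient descent (the optimization I mean above) learns it easily.

```python
# Minimal sketch: one hidden layer lets a network learn XOR.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR truth table

W1 = rng.normal(0, 1, (2, 4))  # input -> hidden (4 hidden units)
b1 = np.zeros((1, 4))
W2 = rng.normal(0, 1, (4, 1))  # hidden -> output
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    h = sigmoid(X @ W1 + b1)        # hidden-layer activations
    out = sigmoid(h @ W2 + b2)      # network output
    # Backpropagate squared error; plain gradient descent.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

# Should approach [0, 1, 1, 0] -- the XOR pattern.
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
```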
Ultimately I moved off in other academic directions, ending up with an MS in Intelligent Systems. But I’ve kept my eye on neural network research, especially as it emerged into commercial use. More recently I’ve seen it reconverge with my interest in healthcare workflow and workflow technologies. In particular, machine learning methods are increasingly used to recognize context and predict what users need or wish to do next. Think Google Now on Android. Machine learning systems, like other data-pipeline-oriented systems, resemble workflow management system architectures in some respects.
With that history in mind, I recently downloaded the free and open source TensorFlow (Flow!) deep learning software from Google. I’m going through the tutorials and seeing a lot of familiar ideas, except they are no longer half-baked, like they were back when I was a graduate student. I have not decided what data sets make sense for me to mess with (got one?), so I’ve been poking at the 2016 #HIT100 nomination data that was generously made available (thank you very much). What would be interesting to recognize or predict? Who will be nominated? Whether they will make it into the top 100? Their likely eventual rank? What would be the inputs? Who nominated whom? Characteristics of their followers and/or followees? The actual contents of the tweets? Love to hear some suggestions!
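For fun, here’s one hypothetical way to frame the top-100 question in TensorFlow’s Keras API. The features (follower counts, nominator counts, and so on) and the synthetic stand-in data are my assumptions, not the real nomination data:

```python
# Hypothetical framing: predict whether a nominee cracks the top 100
# from simple Twitter-style features. Synthetic data for illustration.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(1)
# Imagined features per nominee: log followers, log followees,
# tweet volume, number of distinct nominators.
X = rng.normal(size=(500, 4)).astype("float32")
y = (X @ np.array([0.8, 0.1, 0.3, 1.2])
     + rng.normal(0, 0.5, 500) > 0).astype("float32")

# Logistic regression as a one-layer Keras model.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=20, batch_size=32, verbose=0)

# Predicted probability of making the top 100 for five nominees.
print(model.predict(X[:5]))
```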
On to the #HITsm tweetchat questions!
Topic 1: What jobs will be affected most by AI and Deep Learning in Healthcare? #HITsm
Immediately? Visual pattern and object recognition: dermatology and radiology.
Topic 2: What types of initial diagnoses would be helpful to be made by AI algorithms? #HITsm
Just as certain kinds of ECG diagnosis are practically baked into heart monitors, I think you’ll see cheap apps and hardware for skin and certain imaging tasks (ultrasound comes to mind, with less risk of radiation).
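For flavor, here’s roughly the shape of model behind such an app: a tiny convolutional image classifier, sketched in Keras on stand-in images. The 64×64 input size and the two-class benign/suspicious framing are my assumptions for illustration, not a real dermatology model:

```python
# Sketch of a small convolutional classifier for skin images.
# Stand-in random data; a real model trains on large labeled image sets.
import numpy as np
import tensorflow as tf

images = np.random.rand(32, 64, 64, 3).astype("float32")  # fake photos
labels = np.random.randint(0, 2, 32).astype("float32")    # 0=benign, 1=suspicious

model = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),  # learn local features
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),    # benign vs. suspicious
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(images, labels, epochs=2, verbose=0)
```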
Topic 3: What healthcare decisions would you be comfortable being made by AI, if a human reviewed life-or-death abnormalities? #HITsm
You’ve heard of shared decision making? Between patient and clinician? Well, consider shared decision making among patient, clinician, and intelligent systems. The classic model of shared decision making (from back before it was even called that) was the Vulcan mind-meld between clinician knowledge and experience, on one hand, and patient values and goals, on the other (Healthcare Trade-offs, Shared Decision Making, Vulcan Mind-melds, and a Marriage Metaphor). Well, if clinical knowledge can be machine-learned from experience, we are now looking at a mind-meld among patient, clinician, and machine.
Figuring out and allocating competencies and responsibilities? That involves my favorite topic, workflow, as I’ve written about here.
Topic 4: If an AI could be trained to recognize body movements and facial cues, would you be willing to utilize automated visits with telemedicine followup? #HITsm
Patient monitoring is the largest wearable/IoT growth area, and in principle it includes gesture and facial expression recognition. Recognizing categories of human behavior can rely on more than just visual input; it can draw on other sensor data as well. Increasingly, these wearable/IoT/machine learning systems will learn to recognize more and more abstract human states, such as emotions (affective computing) and goal, plan, and task/activity status: workflow! (Though some might say “life-flow.”)
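Sketching the idea: a recurrent network (the same recurrence from those CMU days) reads windows of wearable accelerometer readings and classifies the activity. The four activity classes and the 100-timestep window are illustrative assumptions:

```python
# Sketch: sequence-based activity recognition from wearable sensor data.
import numpy as np
import tensorflow as tf

windows = np.random.rand(64, 100, 3).astype("float32")  # 100 steps of x/y/z
activity = np.random.randint(0, 4, 64)                  # e.g. rest/walk/run/fall

model = tf.keras.Sequential([
    tf.keras.Input(shape=(100, 3)),
    tf.keras.layers.LSTM(32),                        # recurrence over time
    tf.keras.layers.Dense(4, activation="softmax"),  # activity classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(windows, activity, epochs=2, verbose=0)
```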
Topic 5: What type of pattern recognition could be used to enhance interoperability in healthcare? #HITsm
I frequently promote what I call pragmatic, or workflow, interoperability. Syntactic and semantic interoperability are about moving data and shared meaning. Pragmatic interoperability is about how actionable data is used. An important aspect of pragmatic workflow interoperability is goal and plan recognition. If I observe you doing something, and I recognize what you are trying to do, and I can help, that is a form of interpersonal interoperability. Something similar exists between any two coordinating entities, be they people, robots, or organizations. Deep learning can play an important role in recognizing goals and plans and then triggering helpful actions and workflows.
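To make goal and plan recognition concrete, here’s a toy Python sketch: observe a colleague’s actions, recognize the plan they imply, and trigger a helpful next workflow step. The plan library and action names are entirely hypothetical; in practice a deep sequence model could replace the hand-written matcher:

```python
# Toy sketch of pragmatic interoperability: recognize a plan from observed
# actions, then trigger a helpful workflow step. All names are hypothetical.
PLAN_LIBRARY = {
    ("open_chart", "order_labs"): "diagnostic_workup",
    ("open_chart", "start_note", "search_meds"): "medication_reconciliation",
}

HELPFUL_NEXT_STEP = {
    "diagnostic_workup": "prefetch prior lab results",
    "medication_reconciliation": "pull current med list from the HIE",
}

def recognize_plan(observed_actions):
    """Return the first library plan whose prefix matches the observations."""
    for prefix, goal in PLAN_LIBRARY.items():
        if tuple(observed_actions[:len(prefix)]) == prefix:
            return goal
    return None

goal = recognize_plan(["open_chart", "order_labs", "review_results"])
if goal:
    print(f"Recognized goal '{goal}' -> {HELPFUL_NEXT_STEP[goal]}")
```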
I’ll see you at the #HITsm tweetchat! [#HITsm chat 7.15.16: Deep Learning in Healthcare]