Last updated: July 28th, 2020
When “one giant leap” echoed around the world in the summer of 1969, Neil Armstrong’s words marked not only a profound achievement for mankind but also heralded the first digital age1. Now 50 years on from that triumph of shared vision, leadership, and technology, what can the moon landing teach us about the future of healthcare innovation?
If artificial intelligence (AI) is the next technology frontier, then healthcare is surely one of its many orbits. Like the moonshot, the stakes are high. AI promises to transform healthcare by producing better clinical outcomes at lower cost. Accenture, for instance, predicts that AI applications will save over $150 billion by 2026, and AI investment across the public and private sectors is expected to reach $6.6 billion by 2021.
Yet when it comes to AI’s big launch, the healthcare sector’s countdown seems to be holding at T-minus seconds and counting. For every successful use of AI in healthcare, there are setbacks that keep AI applications out of mainstream use. Apollo’s successful lunar landing was built on knowledge and experience gained from failures during the Mercury and Gemini missions. One such lesson in healthcare AI is that you cannot expect clear and reliable findings from “bad” data. Data is “bad”, or of poor quality, when it is contaminated, inaccurate, duplicated, out of date, mismatched, missing values, biased, or untrusted. Poor data quality costs organizations millions per month. According to Brian Dooley of TDWI Research, the machine learning algorithms applied in many healthcare AI use cases tend to operate as a “black box”, within which biases contained in the data might never come to light.
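A first line of defense against “bad” data is a simple screening pass before any model sees it. The sketch below is plain Python with entirely made-up patient records and field names, and it flags three of the problems listed above: duplicates, missing values, and out-of-date rows.

```python
from datetime import date

# Hypothetical patient records; the schema and values are invented for illustration.
records = [
    {"id": 1, "age": 67, "diagnosis": "pneumonia", "updated": date(2020, 6, 1)},
    {"id": 2, "age": None, "diagnosis": "pneumonia", "updated": date(2020, 5, 12)},  # missing value
    {"id": 1, "age": 67, "diagnosis": "pneumonia", "updated": date(2020, 6, 1)},     # exact duplicate
    {"id": 3, "age": 54, "diagnosis": "pneumonia", "updated": date(2015, 1, 3)},     # out of date
]

def screen(records, stale_before):
    """Flag duplicate, incomplete, and stale rows before they reach a model."""
    seen, issues = set(), []
    for r in records:
        key = tuple(sorted(r.items()))
        if key in seen:
            issues.append((r["id"], "duplicate"))
        seen.add(key)
        if any(v is None for v in r.values()):
            issues.append((r["id"], "missing value"))
        if r["updated"] < stale_before:
            issues.append((r["id"], "out of date"))
    return issues

issues = screen(records, stale_before=date(2019, 1, 1))
```

Checks like these will not catch bias or mismatched semantics, but they do surface the mechanical defects cheaply, before they silently skew a model.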
AI healthcare hype is well documented, and we are still wading through it. Dr. Eric Topol’s book Deep Medicine reveals that AI is more about improving the patient-doctor relationship, by liberating the physician from the tasks that interfere with human connection, than it is about the technology. In this world, your smartphone app won’t replace your doctor, but an app may be prescribed to perform clinical observations for your physician to review. AI projects that adjust their trajectories to Dr. Topol’s criteria as their north star will have the best chance of implementation.
AI use cases include robot-assisted surgery, clinical trial participant identification, clinical variation management, and automated image diagnosis, to name a few. Machine learning (ML) is a component of AI that uses techniques and analytical algorithms to extract features from structured data. Data sources used by ML include EMRs, imaging, and other clinical observation systems. ML techniques come in three categories: supervised, unsupervised, and semi-supervised.
Unsupervised learning is well known for feature extraction. Clustering and principal component analysis (PCA) are two major unsupervised learning methods. Clustering groups subjects with similar traits together, without using the outcome information (for example, understanding the clinical variation of a data set of 1,500 patients treated for pneumonia).
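As a concrete illustration, the clustering step can be sketched in a few lines. This toy one-dimensional k-means groups hypothetical pneumonia patients by length of stay; the data and the choice of two clusters are invented for the example.

```python
# Hypothetical lengths of stay (days) for pneumonia patients.
lengths_of_stay = [2, 3, 3, 4, 9, 10, 11, 12]

def kmeans_1d(xs, centers, iters=20):
    """Minimal 1-D k-means: alternate assignment and mean-update steps."""
    for _ in range(iters):
        # Assign each point to its nearest center.
        groups = {c: [] for c in centers}
        for x in xs:
            nearest = min(centers, key=lambda c: abs(x - c))
            groups[nearest].append(x)
        # Move each center to the mean of its assigned points.
        centers = [sum(g) / len(g) if g else c for c, g in groups.items()]
    return sorted(centers)

centers = kmeans_1d(lengths_of_stay, centers=[0.0, 20.0])
# Two cohorts emerge: short stays (around 3 days) and long stays (around 10.5 days).
```

No outcome labels are used anywhere: the algorithm discovers the short-stay and long-stay cohorts purely from the trait values, which is exactly the clinical-variation pattern described above.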
Supervised learning is suited to predictive modelling: it builds relationships between patient traits (as input) and an outcome of interest (as output). For instance, a care pathway can be created using the results of the unsupervised learning from the pneumonia clinical variation use case as input. Supervised learning is the form most often deployed as an AI application, because it attaches clinical context to the data sets preprocessed by unsupervised learning. For example, a clinical team reviews the patient clusters identified in the unsupervised step to determine which procedures were key to achieving better outcomes.
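A minimal sketch of the supervised step, assuming invented patient traits (age, comorbidity count) and outcome labels: a nearest-centroid classifier learns the trait-to-outcome relationship described above and predicts outcomes for new patients.

```python
# Hypothetical labeled training data: (age, comorbidity count) -> outcome.
training = [
    ((65, 3), "readmitted"),
    ((70, 4), "readmitted"),
    ((40, 0), "recovered"),
    ((35, 1), "recovered"),
]

def centroid(points):
    """Mean position of a group of trait vectors."""
    return tuple(sum(vals) / len(vals) for vals in zip(*points))

def fit(rows):
    """Compute one centroid per outcome label."""
    by_label = {}
    for traits, label in rows:
        by_label.setdefault(label, []).append(traits)
    return {label: centroid(pts) for label, pts in by_label.items()}

def predict(model, traits):
    """Assign the label whose centroid is closest (squared distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist(model[label], traits))

model = fit(training)
```

Calling `predict(model, (68, 3))` returns `"readmitted"` on this toy data; a real deployment would use richer traits, far more patients, and a properly validated model, but the input-to-output shape is the same.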
Semi-supervised learning, a hybrid of supervised and unsupervised learning, is suitable for scenarios where the outcome is missing for some subjects.
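The semi-supervised case can be illustrated with a single self-training pass: subjects whose outcome is missing borrow the label of their nearest labeled neighbour. Patients, traits, and labels below are all hypothetical.

```python
# Hypothetical cohort: (age, comorbidity count) with the outcome missing for some subjects.
patients = [
    ((65, 3), "readmitted"),
    ((40, 1), "recovered"),
    ((68, 4), None),   # outcome not recorded
    ((38, 0), None),   # outcome not recorded
]

def dist(a, b):
    """Squared distance between two trait vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def self_label(rows):
    """One self-training pass: fill missing labels from the nearest labeled subject."""
    labeled = [(t, y) for t, y in rows if y is not None]
    out = []
    for traits, label in rows:
        if label is None:
            _, label = min(labeled, key=lambda ty: dist(ty[0], traits))
        out.append((traits, label))
    return out

completed = self_label(patients)
```

After the pass, every subject carries a label and the completed data set can feed the supervised step; production systems typically iterate this idea with confidence thresholds rather than labeling everything in one sweep.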
All three of these approaches require high quality data describing the patient traits, observations, procedures, and other factors, including social determinants of health.
The computer technology developed for the Mercury, Gemini, and Apollo missions in the 1960s was primitive, but it proved to be enough. The Apollo Guidance Computer (AGC) that steered the intrepid crew to their lunar destination was no more powerful than a pocket calculator, and today’s smartphones perform tasks millions of times faster than Apollo-era computers.
How did we achieve something as complex as the moonshot without today’s computing power? NASA’s teams of engineers designed the AGC to perform a limited set of specific functions for guidance and control of the command and lunar modules. Building a solution that is fit for purpose contributed to Apollo’s success. Herein lies the lesson.
The most successful AI/ML projects have:
- Leadership with a clear mission and well-defined objectives for the application
- High quality data that meets use case cohort requirements
- An AI/ML methodology matched to the use case (supervised, unsupervised, or semi-supervised)
- Clinical team involvement and commitment
AI in healthcare is riding a steep curve of progress. We are still on the launchpad, working towards our moonshot. Like the new space missions being planned for a Mars landing, healthcare will set new goals in a continual growth and development cycle.