Balancing Innovation with Safety: The AI Revolution in Healthcare

Contributors: Gil Kaminski, WG'16, and Erez Kaminski

Fifteen years ago, I was part of a team that created MorpheusOx, an FDA-cleared, at-home sleep lab utilizing a machine learning (ML) algorithm to diagnose sleep apnea from the PPG signal of a pulse oximeter. Using ML and extensive data sets, we trained MorpheusOx to derive a patient’s respiratory signal and detect sleep apnea and cardiac arrhythmias. MorpheusOx’s reliability matched that of a sleep lab technician.
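MorpheusOx's actual algorithm is proprietary, but the core idea of deriving a respiratory signal from PPG can be sketched: breathing modulates the PPG waveform at roughly 0.1-0.5 Hz (about 6-30 breaths per minute), so a band-pass filter can recover it. Here is a minimal illustration in Python; the filter order, band edges, and sampling rate are assumptions for illustration, not the device's implementation:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def derive_respiratory_signal(ppg: np.ndarray, fs: float) -> np.ndarray:
    """Band-pass a PPG trace to its respiratory band (~0.1-0.5 Hz)."""
    b, a = butter(2, [0.1, 0.5], btype="bandpass", fs=fs)
    return filtfilt(b, a, ppg)

# Synthetic example: baseline wander and amplitude modulation at the
# breathing rate (~0.25 Hz) riding on a ~72 bpm (1.2 Hz) cardiac pulse.
fs = 100.0                       # assumed sampling rate, Hz
t = np.arange(0, 300, 1 / fs)    # five minutes of signal
ppg = (
    0.3 * np.sin(2 * np.pi * 0.25 * t)            # respiratory baseline wander
    + (1 + 0.2 * np.sin(2 * np.pi * 0.25 * t))    # respiratory amplitude modulation
    * np.sin(2 * np.pi * 1.2 * t)                 # cardiac pulse
)
resp = derive_respiratory_signal(ppg, fs)          # recovers the ~0.25 Hz component
```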

For me, the experience of developing MorpheusOx illustrated the revolutionary potential of AI/ML, and it also raised a question: as we push the boundaries of AI in healthcare, how do we continue to ensure safety?

Recently, the FDA released a 43-page draft guidance on the future of AI/ML in medical devices. The FDA is considering granting developers permission to modify ML models after a device's initial approval, provided there is an established plan for such alterations. How would that actually work, and how will device developers make sure their products remain safe while changing rapidly?

To delve deeper into these questions, I interviewed an expert in developing regulated medical software — my brother, Erez Kaminski, CEO of Ketryx, which landed $14M in Series A funding in December 2023.

Erez, tell me what Ketryx does and how you got here. 

Ketryx builds tools that enable software teams to develop FDA-regulated software faster while staying safe and compliant. We focus on helping development teams understand how their software changes and on ensuring those changes are made in a controlled manner.

I started my career in software development and transitioned to healthcare AI/ML. When I was the head of AI for Amgen’s medical device group, I realized how complex it is to build a medical device, especially one that is connected and has ML components. It can seem like an impossible task to regulate something like that. I built Ketryx to democratize that knowledge. We help accelerate the pace of regulated software development, reduce development costs, and, at the same time, ensure the software is developed under the necessary regulations.

Do you see a difference between current medical devices and future ones, given the evolving role of software?

Absolutely. Medical software has traditionally lagged 10-15 years, or more, behind other fields, but advancements, even in high-risk devices, are being made. The integration of software and ML is evident, indicating a shift towards automation in various medical tasks. The future promises a significant role for software in diagnostics, treatments, and immediate response, even before intervention by a medical professional.

How do you foresee the impact on the broader medical landscape?

The aging population is already leading to an uptick in home medical device usage, and the global shortage of clinicians will accelerate the need for technology to close the labor gap. It's hard to imagine medicine in 20 years without a significant amount of automation and remote monitoring, diagnosis, and treatment.

What are the key points healthcare executives should be aware of?

  1. It’s unavoidable. Software scales, and it's going to be very effective at augmenting the limited number of medical professionals. The shift toward software automation and AI in healthcare is happening fast and at a large scale. Executives should be proactive and ready for the future; otherwise, competitors will take the lead.
  2. Building trustworthy, reliable, medical software is complex and requires immense effort. This should not be underestimated.
  3. FDA software regulation is changing significantly, and it’s important to stay informed. In the last 24 months, there has been more software guidance published by the FDA than in the last 20 years (see Figure 2).

Figure 2: FDA software guidance publications over time

What type of products are coming out in the short term?

Any home-use medical device that does not have an app today is likely to get one. In later products, those apps will be extended with AI algorithms.

I’m surprised by your answer. When I think about AI in healthcare, apps are not the first thing that comes to mind.

Building safe medical software, including apps, is complex and costly. The industry is still figuring out how to build apps while controlling costs. AI is significantly more complex and requires frequent updating. As we add AI to products, we need to reduce the development and maintenance cost of the apps the AI will be housed in.

What are the challenges companies face when trying to develop healthcare AI/ML?

  • Talent is scarce. There are few ML experts and even fewer healthcare ML experts, and the medical industry is not yet addressing that. In contrast, the tech industry is fighting over ML talent and paying a lot of money for it. Healthcare companies should think in five-year plans: how will they develop, hire, or acquire subject-matter expertise in AI/ML?
  • It's complicated to deploy at scale. Serving many patients while ensuring high reliability, in a setting where errors carry enormous risk, creates significant complexity. Medical device software complexity has been going up about 30% annually since 2006, while engineering productivity has gone up about 2% a year; the short calculation after this list shows how wide that gap compounds.
  • The appropriate tools are often missing. It’s common to see tools that were created for hardware development being used to develop regulated healthcare software. This approach is suboptimal for various reasons. Consider, for instance, the differing release cycles: hardware undergoes extended cycles, while software demands regular and incremental releases.
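To make the second point concrete, compounding the two stated growth rates shows how quickly complexity outruns productivity; this quick sketch assumes a 2006-2024 span for illustration:

```python
# Compound the two stated growth rates over an assumed 2006-2024 span.
years = 2024 - 2006                     # 18 years
complexity = 1.30 ** years              # ~30% more complexity per year
productivity = 1.02 ** years            # ~2% more productivity per year
print(f"Complexity grew   ~{complexity:.0f}x")    # ~112x
print(f"Productivity grew ~{productivity:.1f}x")  # ~1.4x
```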

Can we make healthcare AI/ML safe?

Yes. For example, surgical robots use deterministic models to enhance safety by restricting certain movements. While there are concerns about the reliability and safety of AI/ML systems, we've historically regulated complex products, like biopharmaceuticals, with precision and safety. Similar regulatory methods are expected for AI/ML. The main challenge is in determining specific measures of success and error for distinct models. The key will be a thorough understanding of each system's objective and expert-driven safety testing.
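As one illustration of that deterministic pattern (not any particular robot's implementation), a software guard can vet every commanded move against a fixed workspace and a per-step distance limit before it reaches the actuator. A minimal sketch, with all bounds chosen arbitrarily:

```python
import math
from dataclasses import dataclass

@dataclass
class SafetyEnvelope:
    """Deterministic guard that vets each commanded move before execution."""
    x_range: tuple
    y_range: tuple
    z_range: tuple
    max_step_mm: float

    def is_allowed(self, current, target) -> bool:
        # Reject any target outside the permitted workspace.
        for value, (lo, hi) in zip(target, (self.x_range, self.y_range, self.z_range)):
            if not lo <= value <= hi:
                return False
        # Reject any move longer than the per-command step limit.
        return math.dist(current, target) <= self.max_step_mm

# Arbitrary example bounds (millimeters); a real device would derive these
# from anatomy, calibration, and a documented risk analysis.
envelope = SafetyEnvelope((0, 50), (0, 50), (0, 20), max_step_mm=2.0)
print(envelope.is_allowed(current=(10, 11, 5), target=(10, 12, 5)))  # True
print(envelope.is_allowed(current=(10, 11, 5), target=(10, 40, 5)))  # False: step too large
```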

How do you envision generative AI testing and implementation in healthcare?

Three steps: understand how we want to use it, explore the limits of these models, and complete and publish research that shows we can monitor them. A device can be designed to restrict many system behaviors and features, ensuring it performs the specific, intended task.

Are we currently ready for a large language model to control a system in a closed-loop fashion that can seriously injure or kill a person? No, we're very far from that. But there are lower-risk applications we can start to design and understand how to monitor.
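One way to restrict system behaviors, as described above, is to never let free-form model output reach the patient-facing system: a deterministic wrapper passes through only outputs drawn from a pre-approved menu. A minimal sketch, assuming a hypothetical triage use case:

```python
# Pre-approved, tested output menu for a hypothetical triage feature.
ALLOWED_OUTPUTS = {"no_action", "schedule_follow_up", "flag_for_clinician"}
SAFE_DEFAULT = "flag_for_clinician"

def guarded_output(model_output: str) -> str:
    """Deterministic wrapper around a generative model's raw output.

    Only responses from the approved menu pass through; anything else
    fails safe and is escalated for human review.
    """
    candidate = model_output.strip().lower()
    if candidate in ALLOWED_OUTPUTS:
        return candidate
    # Out-of-policy output: do not act on it; fail safe instead.
    return SAFE_DEFAULT

print(guarded_output("Schedule_Follow_Up "))  # -> schedule_follow_up
print(guarded_output("administer 50mg now"))  # -> flag_for_clinician
```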

How do we prevent HIPAA-type leaks from generative AI?

That’s a challenge. Medical records have to be used in order to train large language models, so we need to better understand how to de-identify medical records so they can be properly used for training. There is work under way by the U.S. government, including the FDA and NIST (National Institute of Standards and Technology), to address that. Companies also need to do more work on data scrubbing: identifying personal data and removing it in a highly controlled and regimented manner.
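At its simplest, data scrubbing means detecting direct identifiers and replacing them with typed placeholders. A toy sketch of the idea follows; real de-identification must cover all 18 HIPAA Safe Harbor identifier categories and be validated on real data, which simple patterns like these cannot do alone:

```python
import re

# Illustrative patterns only; most identifier categories need far more
# than regular expressions (names, dates, locations, record numbers...).
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(record: str) -> str:
    """Replace obvious direct identifiers with typed placeholders."""
    for label, pattern in PATTERNS.items():
        record = pattern.sub(f"[{label}]", record)
    return record

print(scrub("Patient reachable at 617-555-0101 or jane.doe@example.org."))
# -> "Patient reachable at [PHONE] or [EMAIL]."
```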

Contact Gil at: [email protected] | LinkedIn
Contact Erez at: LinkedIn