Contributor: Vidya Murthy, WEMBA’42
Unfortunately, blame is pervasive in healthcare and cybersecurity. It should therefore be unsurprising that a common retort in healthcare cybersecurity is “people are the weakest link.” It’s true in one sense: healthcare leads other industries, with 31% of cybersecurity-related breaches attributed to human error. But the question must be asked: if we blame patients for not adhering to treatment plans, and we blame people for cybersecurity problems, perhaps we’ve built systems that don’t work properly.
Now, we are all human, and we all make mistakes. But given how often people fall victim while they’re in the healthcare system, it seems a bit of self-reflection would be beneficial.
Why is Healthcare a Target?
Clinical care is the top priority for everyone who works in healthcare. This means, as much as possible, we don’t want to introduce barriers to the delivery of care. How does that impact cybersecurity? If a healthcare delivery organization (HDO) wants to keep things as they are, that may mean a reluctance to update software. Delayed or skipped updates eventually leave a device’s software outdated and potentially vulnerable to cyberattack.
But why target a medical device? The idea of a blood pressure or ECG reading doesn’t exactly bring dollar signs to mind.
A hacker can potentially exploit a device’s vulnerability as an entry point into an HDO network and then deploy a ransomware campaign. This compromises the HDO’s network, inhibiting its ability to update electronic health records and to use devices that rely on connectivity for calculations (such as devices used in radiation oncology and sophisticated surgical robots).
While this may seem like a mere delay in elective procedures, it can also force the re-routing of patients with emergent needs. Research shows a 13.3% higher mortality rate for patients experiencing an acute myocardial infarction or cardiac arrest whose care was delayed by only four minutes, a delay attributed to a marathon taking place that day. Applying this finding to a delay in care caused by a network takeover by hackers, one can imagine an increase in mortality rates far greater than 13.3%.
Furthermore, HDOs regularly collect patient social security numbers (SSNs) or insurance numbers, which are relevant for billing or for sharing data between HDO systems. A malicious actor can use this same data to fraudulently request loans, prescriptions, or insurance claims; open bank accounts; perform online transactions; and even take out a mortgage, file tax returns, or claim rebates. Imagine the SSNs from a pediatrician’s office being stolen and sold, with the resulting fraudulent activity going undetected for a prolonged period (likely until the minor reaches adulthood), or the SSN of a deceased person being used with zero risk of active monitoring by the individual.
Why Doesn’t the Current Strategy Work?
First and foremost, I want to make it clear that I believe user training has a place and purpose. We cannot let our people proceed in a connected world without guidance and support. However, if I can’t train an algorithm to identify a potentially malicious email, is it really fair for me to expect an employee to be able to detect that malicious email?
Let’s take a look at an industry often perceived as having great cybersecurity practices: financial services. Financial services companies, quite intuitively, have a lot to lose when a hacker succeeds. As seen in the table below, financial services firms face nearly double the number of incidents that healthcare does, yet also spend a larger training budget per employee. And the average credit card user has received a call before a fraudulent transaction was processed, because something looked suspect and deviated from their buying pattern. Is this how financial services keeps the average cost of a breach lower than healthcare’s? I hypothesize that this ability to proactively detect potential issues is a direct result of the industry’s monetary commitment to cybersecurity.
So why didn’t this same pattern emerge in healthcare? Perhaps it’s because our systems were not initially designed to be connected. Devices started out as analog; then, as software ‘became a thing,’ the potential for improved clinical experiences emerged. Suddenly a modicum of data standardization meant patient health information could be more easily shared across the value chain. From USB, to the internet, to Bluetooth, and now to mobile/app-based care, the adoption of connectivity has been rapid. The focus at every step, and justifiably so, was on enhancing the patient care experience. But with each new point of connectivity, who in the value chain took on the burden of the potential cybersecurity vulnerabilities it introduced?
Medical device vendors used to deliver a device, ensure clinical operation, and consider the contract fulfilled. The point of sale was the focus, and hospitals carried the residual cybersecurity burden until the device finally reached the end of its life (often well beyond the manufacturer’s recommended lifespan).
As connectivity has become the de facto standard, this transfer of cybersecurity ownership to HDOs is no longer sustainable. A single HDO must manage tens of thousands of devices, often with limited technical ability to modify them for fear of jeopardizing regulatory approval and manufacturer warranties.
What Should We Do Going Forward?
With the new administration and its commitment to prioritizing cybersecurity, the security of critical infrastructure is expected to get a major overhaul. This includes the FDA prioritizing finalization of its premarket cybersecurity guidance in 2021.
This further corroborates that healthcare cannot remain reactive in dealing with cybersecurity threats. Instead, we need to design our new systems with the intent of proactively protecting our users from those threats. Our systems must evolve to reduce the extent to which they rely on users to guard against unknown threats. Note the nuance: I’m not saying the user doesn’t know how to use the device. I’m saying that with technology there will always be unknowns and there will always be weaknesses. The best systems are those that do not rely on the user as the detection mechanism or, more importantly in patient care, as the guarantor of a device’s efficacy. We must be intentional and prioritize designing security into devices if we are ever to change the landscape of cyberthreats in healthcare.
Our reliance on technology will never go away. It has improved diagnostic capabilities, given us new treatment options, and reduced time, effort, and risk for patients. Therefore, we must make the security component of this process a positive experience for the user and/or patient, as that can mean the difference between a cybercriminal’s success and failure.
Contact Vidya at: [email protected]