Affidavit: Healthcare and the Law - Old Law, New Tricks: Achieving Compliance when Patient Privacy Laws Meet Artificial Intelligence

Contributor: Ryan Wesley Brown

 

Artificial intelligence (“AI”) is a buzzworthy topic in healthcare, and commentators have suggested it will profoundly alter the delivery of healthcare.1  However, the law moves slowly and tends to be locked in a perpetual game of catch-up with technology. As lawyers, we are therefore often asked to help our clients mitigate the legal risk that can arise when antiquated laws are applied to emerging technologies.  One significant area of friction is the application of existing privacy laws, such as the Health Insurance Portability and Accountability Act (“HIPAA”), to AI technologies.2  Until the law is changed, stakeholders will need to play by the current rules, and that can require creative and critical thinking.

In this article, we explore three legal concepts requiring extra thought when applied to AI. For simplicity’s sake, we use AI as a blanket term for all forms of computer-simulated intelligence relying on machine learning, including deep learning, computer vision, natural language processing (NLP), and related technologies.

I. De-Identification/Re-Identification

The central motivating tenet of HIPAA is the privacy of patients’ Protected Health Information (“PHI”).3  One way to protect patient privacy under HIPAA is to “de-identify” data prior to disclosure such that it is no longer considered PHI.4  De-identified data is health information which “does not identify an individual and with respect to which there is no reasonable basis to believe that the information can be used to identify an individual. . .”5  Stated simply, de-identified data is data that cannot be used alone or in combination with any other information to identify the subject.6  This generally goes far beyond simply redacting names and dates.
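To make the point concrete, consider the simplified, hypothetical sketch below. The record fields and the redaction logic are invented for illustration; they are not drawn from HIPAA’s Safe Harbor or Expert Determination methods and are not a compliance tool. The sketch shows how merely redacting names and exact dates still leaves quasi-identifiers, such as ZIP code and year of birth, in the data.

    # Illustrative only; field names and logic are hypothetical assumptions,
    # not a HIPAA Safe Harbor or Expert Determination implementation.
    from datetime import date

    record = {
        "name": "Jane Doe",
        "birth_date": date(1958, 3, 14),
        "zip_code": "60637",
        "admission_date": date(2021, 2, 2),
        "diagnosis_code": "E11.9",
    }

    def naive_redaction(rec):
        """Drop obvious direct identifiers but keep quasi-identifiers."""
        return {
            "birth_year": rec["birth_date"].year,     # year of birth retained
            "zip_code": rec["zip_code"],              # geographic quasi-identifier retained
            "diagnosis_code": rec["diagnosis_code"],  # clinical detail retained
        }

    print(naive_redaction(record))
    # {'birth_year': 1958, 'zip_code': '60637', 'diagnosis_code': 'E11.9'}

Because the retained quasi-identifiers can often be combined with outside information, this kind of simple redaction generally falls short of the “no reasonable basis” standard quoted above.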

However, AI upends the way we think about de-identification. In 2018, a study showed it is possible to use AI to re-identify patient information that had been de-identified.7  This ability raises the question: If the data could be re-identified, was it ever truly de-identified in the first place? The ability to re-identify data poses a risk of litigation. A recent lawsuit against the University of Chicago, UChicago Medicine, and Google alleged the health system did not properly de-identify PHI that it shared with Google because Google was capable of re-identifying it.8  It is not necessary to show the data has actually been re-identified to prove a HIPAA violation occurred, only that there is a “reasonable basis to believe that the information can be used to identify an individual.”9  Although the trial court dismissed the Google/Chicago case in late 2020, an appeal is currently pending in the Seventh Circuit Court of Appeals, and this issue remains unresolved. In light of this uncertainty, extra scrutiny should apply to any arrangement relying on de-identified data.
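The mechanics of re-identification need not be sophisticated to be worrisome. The study cited above applied machine learning to physical-activity data, but even a simple linkage of quasi-identifiers against a named outside dataset can re-attach an identity to a “de-identified” record, as the hypothetical sketch below illustrates (all data and field names are invented for illustration).

    # Hypothetical linkage sketch; the datasets and fields are invented for illustration.
    deidentified = [
        {"birth_year": 1958, "zip_code": "60637", "diagnosis_code": "E11.9"},
    ]

    # A named auxiliary dataset (e.g., a public roster) sharing the same quasi-identifiers.
    auxiliary = [
        {"name": "Jane Doe", "birth_year": 1958, "zip_code": "60637"},
        {"name": "John Roe", "birth_year": 1971, "zip_code": "60610"},
    ]

    def link(deid_rows, aux_rows):
        """Match 'de-identified' rows to named rows on shared quasi-identifiers."""
        return [
            (a["name"], d["diagnosis_code"])
            for d in deid_rows
            for a in aux_rows
            if (d["birth_year"], d["zip_code"]) == (a["birth_year"], a["zip_code"])
        ]

    print(link(deidentified, auxiliary))
    # [('Jane Doe', 'E11.9')]

The more data points an AI system can bring to bear on such a match, the weaker the assumption that removing direct identifiers alone protects the patient.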

II. AI Access to Data

As a gross simplification, developers build AI models by feeding them data and allowing them to learn from that data. It follows that a change in that data will alter the AI model. For many AI models, therefore, long-term (or perpetual) access to the underlying dataset is critical to the AI product (and its market value).
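The dependence of a model on its data can be seen even in a toy example. In the hypothetical sketch below (the data and the one-parameter “model” are invented for illustration), revising a single data point changes the model that is learned.

    # Hypothetical sketch: a one-parameter model fit to two versions of a dataset.
    def fit_slope(xs, ys):
        """Least-squares slope through the origin; here the entire 'model' is one number."""
        return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

    original_data = ([1.0, 2.0, 3.0], [2.1, 3.9, 6.2])
    revised_data = ([1.0, 2.0, 3.0], [2.1, 3.9, 5.0])  # one record revised

    print(round(fit_slope(*original_data), 3))  # 2.036
    print(round(fit_slope(*revised_data), 3))   # 1.779 -- the model shifts with its data

If access to the underlying dataset ends, or the data must be returned or destroyed, the model may need to be retrained or may lose value, which is why the data terms discussed below deserve careful attention.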

In a healthcare setting, the underlying data for an AI model is almost certainly going to include PHI. For example, an AI tool might require patient photos in order to train the model. The most readily available sources of this data are hospitals, facilities, and medical practices. Therefore, when parties come together to negotiate a data-sharing arrangement for an AI application, it is essential to consider the disposition of the data fed to the AI. If the arrangement will require the exchange of PHI, HIPAA requires the parties to enter into a Business Associate Agreement (“BAA”). A BAA should always include the conditions under which the parties may terminate the relationship as well as the Business Associate’s responsibilities with respect to destruction or return of PHI upon termination. If the model can rely on de-identified data (such that it is no longer PHI under HIPAA), a BAA may not be required, although, as discussed above, de-identification carries its own risks. Even for data not protected by HIPAA, the parties should still take a proactive approach and explicitly spell out their expectations with respect to the retention, return, or destruction of data in the event the parties terminate their agreement.

III. Privacy Policies and Consents

The Federal Trade Commission (“FTC”) scrutinizes companies’ privacy practices for “unfair or deceptive acts or practices” under the FTC Act.10  On April 19, 2021, the FTC published an article describing its enforcement priorities in the AI space.11  The FTC admonishes industry players to, among other things, “[w]atch out for discriminatory outcomes,” “[e]mbrace transparency and independence,” “[d]on’t exaggerate what your algorithm can do or whether it can deliver fair or unbiased results,” and “[t]ell the truth about how you use data.”12

Transparency is the best way to ensure a high level of ethics and avoid unwanted scrutiny from the FTC or other investigatory agencies. Any actor developing, implementing, or utilizing AI should strive to ensure people understand how and when their data is used. This should include developing and adhering to robust privacy policies that dovetail with similarly robust informed consent processes. While it is self-evident that informed consent requires the patient actually be informed, it can be challenging to explain AI concepts in layperson’s terms. Nevertheless, this is a critical step. Putting in the work to ensure patients understand what they are consenting to on the front end is the most ethical approach, avoids many later headaches, and potentially provides a backstop against liability in the future.

IV. Conclusion

AI creates exciting new opportunities to improve health outcomes and patient experience. Nevertheless, until the law catches up, trusted legal counsel can advise you on the best steps to take to avoid harming patient privacy, minimize legal exposure, and protect your reputation and the reputation of your entity or institution.


Contact Ryan at:
[email protected]

Disclaimer: This article has been prepared and published for informational purposes only and is not offered, nor should it be construed, as legal advice.

References

  1. B. Meskó & M. Görög, A Short Guide for Medical Professionals in the Era of Artificial Intelligence, 3 npj Digit. Med. 126, Sept. 24, 2020.

  2. 42 U.S.C. § 1320d et seq.; see also 45 CFR Parts 160, 162, and 164.

  3. 45 CFR § 164.502.

  4. Id. § 164.514.

  5. Id. (emphasis added).

  6. Id.

  7. L. Na et al., Feasibility of Reidentifying Individuals in Large National Physical Activity Data Sets From Which Protected Health Information Has Been Removed With Use of Machine Learning, 1 JAMA Network Open e186040 (2018), doi:10.1001/jamanetworkopen.2018.6040.

  8. Dinerstein v. Google, LLC, 484 F. Supp. 3d 561 (N.D. Ill. 2020), appeal docketed, No. 20-3134 (7th Cir. Nov. 2, 2020).

  9. 45 CFR § 164.514.

  10. 15 U.S.C. § 45(a).

  11. Fed. Trade Comm’n, Aiming for Truth, Fairness and Equity in Your Company’s Use of AI (April 19, 2021), https://www.ftc.gov/news-events/blogs/business-blog/2021/04/aiming-truth-fairness-equity-your-companys-use-ai.

  12. Id.