Affidavit: Healthcare and the Law - Generative AI in Healthcare: Promise and Pitfalls

Contributor: Matthew C. Mousley

 

Generative artificial intelligence (AI) remains a hot topic in legal and healthcare circles, but the conversation has shifted from the initial wonder of “What can it do?” to the present caution of “What should it not do?” One reason for this shift came in March 2023, when Google revealed the newest version of its Med-PaLM model, a large language model (LLM) that passed the United States Medical Licensing Examination with an 86.5% accuracy rate.1 A 2022 version had achieved a 67.2% accuracy rate, also a passing score.2

With the advent of generative AI models like Med-PaLM and ChatGPT, providers can now type complex medical questions into a chat box and receive sophisticated (and hopefully accurate) answers. This ability surpasses that of previous AI applications in its potential to serve patients, but also in its potential to run afoul of laws like corporate practice of medicine (CPOM) rules, the False Claims Act (FCA), and FDA regulations. These concerns — on top of the risk of a generative AI model fabricating answers, known as “hallucinations” — mean that providers should proceed with extreme caution before incorporating generative AI tools into their practices.

We have covered the importance of privacy and informed consent in previous column articles, but the use of AI raises unique issues regarding the potential to violate privacy rules. As covered before, even the best efforts at data deidentification can be thwarted, and specific to AI tools, a person who knows the right questions to ask may be able to solicit output or deduce information that should remain hidden. Further, most generative AI models learn from each interaction, so any question or other input that itself contains confidential or personal information raises potential privacy issues of its own.

The use of generative AI tools also raises numerous legal questions with respect to CPOM rules. CPOM statutes typically prohibit any person without a medical license from practicing medicine.3 Because only natural persons can obtain a medical license, this means that corporations and other entities cannot practice medicine.4 Neither can an AI model obtain a medical license. So, when a user asks an AI model for medical advice and receives a sophisticated response that could plausibly have come from a licensed provider, CPOM rules raise the question: where is the line between providing information and practicing medicine? And, as with all such AI applications, if the use of an AI model does cross that line, who may have exposure for a CPOM violation?

This potential liability question exists at various levels with respect to the use of AI applications, but it becomes a major concern when providers can ask complex questions in written English the same way they would ask them of a qualified human. When a provider relies on an AI tool’s output, a patient suffers harm, and the patient sues the provider, who may bear liability for that harm? And more importantly, since AI models are known to make errors, how can providers use AI tools in a way that avoids harmful answers?

These questions are both legal and ethical, prompting ethicists to propose frameworks and call for regulation to address these problems.5 Until such regulations arrive, navigating the legal implications of these risks remains a blurry business; once the regulations arrive, they may provide clarity and mitigate patient harm, but every new regulatory regime raises novel issues and requires new expertise to navigate.

The question of who is performing a medical service poses a problem in the reimbursement space as well. If a provider uses an AI tool to perform a service, can the provider bill a payor for the performance of that service? Payors may argue that, under certain circumstances, the AI tool is performing the service and the provider is not. And because payors do not reimburse for services not performed by providers, they may decline to reimburse for that service. As a result, AI could lead to provider-payor disputes and, potentially, revenue losses for practices.

Further, and specific to claims submitted to federally funded payors, providers may be at risk of potential FCA violations. Billing Medicare and Medicaid for services not actually rendered is a false claim punishable by participation exclusion, fines of up to $20,000 per claim, and imprisonment of up to five years, among other penalties.6 Under certain circumstances, CMS might argue that a provider who bills Medicare for a service performed by an AI tool has submitted a false claim. Other hypothetical FCA violations, under the right circumstances, could include, for example: acting on an AI tool’s recommendation of unnecessary services; an AI tool’s use lowering the level of provider decision-making required for emergency department claims, resulting in a claim that does not justify the level billed; and having an AI tool in the reimbursement department up-code claims.

Haphazard use of generative AI could also potentially get providers in trouble with the FDA. Under certain circumstances, FDA might argue that a particular use of AI in a healthcare facility is “intended for use in the diagnosis of disease or other conditions,” thus satisfying FDA’s definition of “medical device” and subjecting that AI use to FDA regulation.7 FDA, of course, has a series of steps one must take before receiving approval for a medical device. This consideration primarily concerns AI model and application developers, but anyone using an AI model should be aware of the implications of potential FDA regulation as well.

One last risk to consider is fabrication, or “hallucinations.” Generative AI models are not programmed to tell the truth; they are programmed to produce answers that match an algorithm that was itself trained on real-world data. Typically, matching an algorithm trained on real-world data is a very good proxy for matching the truth. But it is only that: a proxy. Sometimes, a completely fabricated answer will match the algorithm better than a true one, especially where no good answer exists to the question posed. Generative AI models have fabricated basic facts and even complete citations out of whole cloth, all because those fake answers better matched the model’s algorithm than any other answer it could generate.8

Those in legal circles will be familiar with a recent lawsuit where an attorney used ChatGPT to write his response to a motion to dismiss, and the response contained several perfectly formatted citations that were completely made up.9 These fake answers look — by design — very real, making them difficult to spot unless one is looking for them or independently verifying them. This possibility of realistic falsehoods calls into question the extent to which providers ought to use generative AI and again alters the liability calculus in the event of a harmful answer.

And, of course, the use of AI is susceptible to the familiar problems of any computer technology: a provider’s computer network can always be hacked, a provider can type the wrong information when asking an AI tool a question, an AI model or application developer can make a development error that increases the probability of a wrong answer for a patient, and so on. Therefore, although recent developments bring many new opportunities to improve patient care, they bring at least as many potential legal pitfalls. As the saying goes, AI will not replace doctors; doctors who use AI will replace doctors who don’t. Those doctors, however, will succeed in using AI only if they do so with proper caution.


Contact Matt at: [email protected]

Disclaimer: This article has been prepared and published for informational purposes only and is not offered, nor should be construed, as legal advice.

 

References

  1. Karan Singhal et al., Towards Expert-Level Medical Question Answering with Large Language Models 1 (May 16, 2023) (unpublished article preprint), https://arxiv.org/pdf/2305.09617.pdf.
  2. Id.
  3. For example, Illinois’s CPOM statute provides that “[n]o person shall practice medicine . . . without a valid, active license to do so.” 225 Ill. Comp. Stat. 60/3.
  4. See The People v. United Medical Service, 362 Ill. 442, 454 (1936) (“No corporation can meet the requirements of the statute essential to the issuance of a license.”).
  5. E.g., Stefan Harrer, Attention is not all you need: the complicated case of ethically using large language models in healthcare and medicine, 90 eBioMedicine (2023).
  6. 42 U.S.C. § 1320a-7a(a)(1)(A); 18 U.S.C. § 287.
  7. 21 U.S.C. § 321(h).
  8. Mehul Bhattacharyya et al., High Rates of Fabricated and Inaccurate References in ChatGPT-Generated Medical Content, 15 Cureus 1 (2023).
  9. Mata v. Avianca, Inc., No. 22-CV-1461 (PKC), 2023 WL 4114965 (S.D.N.Y. June 22, 2023).