Contributors: Ryan Wesley Brown and Taylor Hertzler

Lawmakers and regulators in many states are working to catch up with the rapid adoption of artificial intelligence (“AI”) into our daily lives, and several states now have comprehensive laws or rules addressing various uses of AI.
These laws cover everything from election interference to copyright ownership of AI-related content, but a consistent topic is healthcare and the considerable amount of data it entails. For example, some state laws include: restrictions to mitigate algorithmic discrimination by “high-risk” AI systems (defined to include any system giving healthcare recommendations); parameters around the use of AI in specific healthcare settings (e.g., eye imaging, mental health treatment, telehealth services, etc.); and transparency rules to ensure informed consent for patients receiving AI-assisted treatment.
Further, several state boards of medicine and other agencies are establishing their own rules for providers, such as requiring them to have a certain level of knowledge about AI before incorporating it into their practice and establishing standards of care for the use of AI in medicine. While no state AI law provides a comprehensive approach to this rapidly developing technology, or even to its use in healthcare, these laws and rules are beginning to create guardrails around AI use.
Additionally, as states pass novel AI laws, those laws are becoming templates for other states. Thus, while we have yet to see comprehensive rules governing the use of AI in healthcare, the beginnings of such rules are starting to appear.
Some state AI laws begin with the premise that AI can cause serious harm in the healthcare space, so the law should aim to prevent that harm. For example, Colorado’s SB 24-205 (seemingly the template for many similar state AI laws) requires anyone who develops or deploys a high-risk AI system to use reasonable care to protect consumers from any known or reasonably foreseeable risks of “algorithmic discrimination.”1
Algorithmic discrimination is any situation in which the use of an AI system results in unlawful differential treatment or impact that disfavors an individual or group on the basis of a protected class (e.g., age, color, race, etc.). Developers must also publicize various data about their AI systems (e.g., known harms, known limitations, details on the training process and data, etc.), and deployers must develop and implement a risk management program to govern their use of AI systems, including conducting regular impact assessments.
Though this law’s scope is broader than just healthcare, it does include healthcare, and it creates substantial requirements for covered healthcare providers and entities. Further, several states are considering similar legislation (e.g., Maryland, Massachusetts, Nebraska, Rhode Island, Vermont), demonstrating the influence of SB 24-205 beyond Colorado.2
Other state AI laws begin with the premise that healthcare providers are going to use AI, so the law should ensure they do so appropriately. For example, Rhode Island’s Consumer Protection in Eye Care Act requires that providers who use “assessment mechanisms” (including AI devices) for eye assessments must, among other things, read and interpret all data gathered by the system, not rely on information obtained from an AI system as the sole basis for issuing a prescription, and personally sign all diagnoses, prescriptions, etc.3
Other states share this focus on ensuring that AI does not become a crutch and that providers remain actively involved in the provision of care: Georgia’s HB 203 is very similar to Rhode Island’s law, and Kentucky regulations provide that an asynchronous telehealth service may not be solely the result of reviewing an AI-generated interaction with a Medicaid patient.4
Several state medical boards and other agencies share this concern. New Mexico’s Medical Board has an Artificial Intelligence Policy that requires providers, among other things, to possess basic AI literacy, possess particular knowledge about any AI systems they use in their practice, and use AI as a tool that assists but does not replace their clinical reasoning and discretion.5
North Carolina’s Medical Board issued a position statement with similar requirements, along with a position statement establishing a standard of care for using AI for documentation tasks.6 Specifically, providers using AI must accept responsibility for responding appropriately to the AI’s recommendations, and they must ensure that any notes dictated by AI are accurate. Similar position statements or guidance documents have been issued by the Mississippi Medical Board, the Texas Nursing Board, and the Massachusetts and New Jersey state attorneys general.7 Thus, even as legislators work to catch up, AI deployers in many states may already be subject to non-legislative guidance regulating their use of AI.
Finally, other state AI laws begin with the premise that informed consent — the cornerstone of medical ethics — requires transparency, so the law should ensure that patients receive such transparency when their providers use AI. California’s AB 3030, for example, requires that if a provider uses AI to generate a written or verbal communication with a patient, that communication must include both a disclaimer that it was created by AI and instructions on how to contact a real person.8 These requirements complement another California law, AB 2013, which requires AI developers to publicly post documentation about the data used to train their AI systems.
Several states have enacted, or are considering, similar transparency laws. These efforts likely reflect both an interest in preserving informed consent and a concern over the black-box nature of many AI systems.
As this patchwork of state laws and regulations takes shape, it is critical for anyone developing or deploying AI in the healthcare industry to stay attuned to these legal developments. This is especially important for companies operating in multiple states, or offering services online, that may need to comply with multiple, sometimes competing, legal and regulatory schemes.
Contact Ryan at: [email protected]
Contact Taylor at: [email protected]
References
1. S.B. 24-205, 75th Gen. Assemb., Reg. Sess. (Colo. 2024).
2. See S.B. 0936, 2025 Leg., 447th Sess. (Md. 2025); H.B. 94, 149th Gen. Ct., Reg. Sess. (Mass. 2025); L.B. 642, 109th Leg., 1st Sess. (Neb. 2025); S. 0627, Gen. Assemb., Jan. Sess. (R.I. 2025); H. 341, Gen. Assemb., Reg. Sess. (Vt. 2025).
3. H.B. 6654, Gen. Assemb., Jan. Sess. (R.I. 2022).
4. H.B. 203, Gen. Assemb., Reg. Sess. (Ga. 2023); 907 Ky. Admin. Regs. 3:170 § 6.
5. N.M. Medical Board, Artificial Intelligence Policy (Nov. 8, 2024), available at https://www.nmmb.state.nm.us/wp-content/uploads/2025/01/NMMB-AI-Policy-statement-11-24-1.pdf.
6. N.C. Medical Board, Position Statement 5.1.4 (amended Mar. 2024); N.C. Medical Board, Position Statement 3.2.1 (amended Nov. 2024).
7. Miss. State Board of Medical Licensure, Admin. Code r. 13.2; Tex. Board of Nursing, Position Statement 15.31 (rev. Jan. 2025); Mass. Attorney General, Attorney General Advisory on the Application of the Commonwealth’s Consumer Protection, Civil Rights, and Data Privacy Laws to Artificial Intelligence (Apr. 12, 2024); N.J. Office of the Attorney General, Guidance on Algorithmic Discrimination and the New Jersey Law Against Discrimination (Jan. 2025).
8. A.B. 2013, Reg. Sess. (Cal. 2024).