This page displays the entire model statute in one place. To jump to different sections within the page, use the menu on the right. To see more details about a specific section, including applicable technical resources, use the sidebar on the left to go to a dedicated page for that section.

Section 1: Definitions

The following terms are defined for use throughout the remainder of this document:

  • (a) an “artificial intelligence” or “AI” system is a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action[1];
  • (b) a “first-party developer” is any legal person with direct involvement in the design, coding, training, or other development of an AI system[2];
  • (c) a “third-party procurer” is any legal person that purchases, licenses, or otherwise acquires an AI system from a first-party developer;
  • (d) a “sensitive application” is the use of AI to determine eligibility or otherwise furnish a legal person with goods or services related to housing, healthcare, financial services, or employment;
  • (e) a “model” is any computational object that takes in data and outputs a result based on data that it has previously observed;
  • (f) “training” is the process of using a dataset to learn the parameters or otherwise build a model;
  • (g) a “training dataset” is any collection of data that is used to train a model.
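
Purely as illustration, and not as statutory text, the definitions in (e)-(g) map onto familiar code: a model is an object that is trained on a dataset and then outputs results based on the data it has previously observed. The following minimal Python sketch is hypothetical; the class name MeanPerKeyModel and its methods are illustrative choices, not anything the statute prescribes.

```python
# Illustrative sketch only; not part of the model statute.
# A minimal "model" in the sense of Section 1(e)-(g): training (f) uses a
# training dataset (g) to learn parameters, and prediction then outputs a
# result based on previously observed data (e).
from statistics import mean

class MeanPerKeyModel:
    def train(self, training_dataset):
        """Learn one parameter per key from (key, outcome) pairs."""
        by_key = {}
        for key, outcome in training_dataset:
            by_key.setdefault(key, []).append(outcome)
        self.params = {key: mean(values) for key, values in by_key.items()}

    def predict(self, key):
        """Output the average outcome previously observed for this key."""
        return self.params.get(key, mean(self.params.values()))

model = MeanPerKeyModel()
model.train([("a", 1.0), ("a", 2.0), ("b", 4.0)])
print(model.predict("a"))  # 1.5 -- a "prediction" in the sense of Section 1(a)
```

Even a lookup table this simple falls within the statute’s definitions, which is why Section 1(a) narrows the statute’s scope to machine-based systems meeting the fuller EO 14110 definition.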

Section 2: Requirements for first-party developers

Section 2.1: Assessment requirements, general

  • (a) Any first-party developer of an AI system shall be required to do the following[3]:
    • (1) Internally document the training and evaluation process, specifically:
      • (A) The provenance of the training dataset and evaluation dataset,
      • (B) The person(s) responsible for maintenance of the system, and
      • (C) Any risk assessments that have been completed for the system.
    • (2) Conduct rigorous testing of the AI system, specifically:
      • (A) Testing for disparities in performance, and
      • (B) Evaluating the system’s performance for inputs that do not match the properties of any data on which the system was trained or evaluated (“out of distribution” data).
  • (b) The Commission on AI Technology, as defined in Section 4, shall have the right to examine a first-party developer’s compliance with the requirements of Section 2.1(a) as it sees fit. If the developer is found to be non-compliant, penalties as defined in Section 6 may apply.
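
Purely as illustration of the kind of testing Section 2.1(a)(2) contemplates, and not as statutory text, the following Python sketch shows one way a developer might compare performance across subgroups and flag out-of-distribution inputs. The function names, the use of accuracy as the metric, and the z-score threshold are all hypothetical choices; the statute does not prescribe any particular method.

```python
# Illustrative sketch only; not part of the model statute.
# Hypothetical checks for the testing described in Section 2.1(a)(2):
# (A) performance disparities across subgroups, and
# (B) a naive out-of-distribution (OOD) flag for unfamiliar inputs.
import numpy as np

def performance_by_group(y_true, y_pred, groups):
    """Accuracy per subgroup, for disparity testing (Section 2.1(a)(2)(A))."""
    return {
        group: float(np.mean(y_true[groups == group] == y_pred[groups == group]))
        for group in np.unique(groups)
    }

def out_of_distribution_mask(train_X, new_X, z_threshold=3.0):
    """Flag rows of new_X far from the training data (Section 2.1(a)(2)(B)).

    A per-feature z-score check; production systems would likely use
    stronger density- or distance-based OOD detectors.
    """
    feature_mean = train_X.mean(axis=0)
    feature_std = train_X.std(axis=0) + 1e-12  # guard against zero variance
    z_scores = np.abs((new_X - feature_mean) / feature_std)
    return (z_scores > z_threshold).any(axis=1)
```

Outputs from checks like these could feed the internal documentation and risk assessments required by Section 2.1(a)(1).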

Section 2.2: Assessment requirements, sensitive applications

In addition to the requirements of Section 2.1, if an AI system is used for the purpose of a sensitive application, as defined in Section 1(d), the following requirements apply:

  • (a) The first-party developer must report quarterly to the Commission on:
    • (1) Compliance with the requirements of Section 2.1, and
    • (2) Any consumer reports of harms caused by the system.
  • (b) The first-party developer must report annually to the Commission on:
    • (1) Any plans to update the system in a way that could substantially affect its performance.

Section 2.3: Forbidden applications

Development of AI systems for the following purposes is prohibited[4]:

  • (a) Determining a person’s race, sex, gender, or sexual orientation,
  • (b) Controlling a deadly weapon autonomously,
  • (c) Impersonation of an individual, and
  • (d) Purposeful generation of disinformation or misinformation.

Section 2.4: Claims about or by AI systems

  • (a) False or misleading claims regarding the capabilities of an AI system shall be considered a deceptive act or practice under applicable federal or state consumer protection law.
  • (b) If an AI system itself engages in a deceptive act or practice, any and all first-party developers of the system shall be held jointly and severally liable for any harms that result from the practice[5].

Section 3: Requirements for third-party procurers

  • (a) Any third-party procurer of an AI system from a first-party developer must verify that:
    • (1) The AI system is suitable for the application for which the procurer intends to use the system, and
    • (2) The system has been tested by the first-party developer in situations sufficiently similar to those to which the system will be applied[6].
  • (b) The requirements of section 3(a) do not apply when:
    • (1) The first-party developer has made an explicit warranty regarding the performance and applicability of the system to the procurer’s use case[7], or
    • (2) The first-party developer agrees to assume the risk of any harms that may arise from the application of the system.

Section 4: Requirements for government entities

  • (a) The government shall establish a Commission on AI Technology (“the Commission”) to serve the following functions:
    • (1) Review reports from any first-party developer and take action as it deems appropriate,
    • (2) Promulgate guidelines for first-party developers and corresponding third-party procurers, and
    • (3) Collect reports from consumers on AI harms and share them with consumer protection agencies as appropriate.
  • (b) The Commission structure, appointment procedures, and term lengths shall be determined by the legislature[8].

Section 5: Remedies

  • (a) Any individual who wishes to bring a civil action against a first-party developer or third-party procurer shall not be barred from doing so solely because the harm arose from the actions of an AI system created by the defendant[9].
  • (b) Increased risk of exposure of private information due to inclusion in an AI training dataset without permission shall be treated as both concrete and particularized for the purpose of any standing analysis conducted by the courts.
  • (c) Individuals also have the right to seek injunctive relief to temporarily halt the use of the AI system while the case is pending if there is an imminent risk to public safety.

Section 6: Penalties

  • Any of the following shall be valid penalties for the violation of the statutes in this code:
    • (a) Monetary recovery, whether for an individual plaintiff or by a consumer protection agency,
    • (b) Data deletion, including deletion of any model trained with that data, and
    • (c) Individual liability for the President, CEO, and board of any corporation that is found to have willfully violated these statutes at least three times.
  1. The definition of artificial intelligence is taken from the Biden administration’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (EO 14110). 

  2. First-party developers do not include entities that acquire datasets that are used by other entities for training models. 

  3. These requirements are purposely written to be construed in a flexible way depending on the specific application. 

  4. The rationale for banning these applications is twofold. Some, such as (a), are scientifically unsound applications of AI technology. Others, such as (b)-(d), have the potential to inflict large-scale bodily or psychological harms on individuals. 

  5. An example of 2.4(b) would be an AI customer support chatbot giving a customer incorrect information about their ability to obtain a refund. The parent company is to be held liable in the same way it would be if a human support agent had given the incorrect information. 

  6. This verification can be as simple as ensuring that the intended use case is one that the first-party developer advertises or otherwise puts forward that its system can handle. 

  7. Such warranties can be made in advertising material, websites, sales contracts, published model cards, or other documentation regarding the system being procured. 

  8. This model statute does not put forth suggestions on this front as the procedures will largely depend on what level of government is adopting the statute. 

  9. Though AI systems cannot be agents of corporations in the legal sense because they are not legal persons, courts should treat any actions taken autonomously by an AI system created by a company as if an agent of the company took that action in the scope of their employment. 
