Bill Summaries: S735 (2025-2026 Session)

Tracking:
  • Summary date: Apr 2, 2025

    Contains whereas clauses.

    Section 1.

    Enacts Part 18A, Artificial Intelligence (AI) Innovation, to Article 10 of Chapter 143B, as follows. Establishes the AI Innovation Trust Fund (Fund) in GS 143B-472.83A, with the Secretary of Commerce (Secretary) as trustee, to (1) provide grants or other financial assistance to companies developing or deploying artificial intelligence models in key industry sectors or (2) establish or promote artificial intelligence entrepreneurship programs, which may include partnerships with research institutions in the State or other entrepreneur support organizations. Prohibits the Fund from supporting projects involving AI intended for mass surveillance that infringes constitutional rights, unlawful social scoring, discriminatory profiling based on protected characteristics, or the generation of deceptive digital content for fraudulent purposes or electoral interference. Defines sixteen terms, including:

    • covered model (an AI model that, due to its scale, application domain, or potential impact, is identified by the Secretary as warranting proportionate regulatory oversight).
    • covered model derivative (a copy of the covered model that is either unmodified or has been altered as described).
    • critical harm (a harm caused or materially enabled by a covered model or covered model derivative, including the creation or use of a chemical, biological, radiological or nuclear weapon in a manner that results in mass casualties; mass casualties or at least $500,000 in damage resulting from cyberattacks; harm that results from the AI model engaging in conduct with limited human oversight; or other grave harms, as described).
    • covered entity (the legally responsible organization, corporation, or entity that directly oversees and controls the development, deployment, and ongoing operations of a covered model or covered model derivative).
    • computing cluster (a set of machines transitively connected by data center networking of over 100 gigabits per second that has a theoretical maximum computing capacity of at least 10^20 integer or floating-point operations per second and can be used for training AI).

    Authorizes the Secretary to convene an AI Innovation and Safety Advisory Panel to provide recommendations, best practices, and advice regarding AI technologies, compliance proportionality, and ethical AI-human collaboration.

    Enacts GS 143B-472.83B, requiring a developer to comply with eight provisions before beginning to train a covered model, including (1) implementing reasonable administrative, technical and physical cybersecurity protections to prevent unauthorized access to, misuse of or unsafe post-training modifications of the covered model and all covered model derivatives controlled by the developer that are appropriate in light of the risks associated with the covered model, including from advanced persistent threats or other sophisticated actors; (2) implementing a written and separate safety and security protocol, as described, with annual reviews; and (3) implementing the ability to promptly enact a full shutdown. Further requires a developer to comply with four provisions before using a covered model or covered model derivative for a purpose not exclusively related to the training or reasonable evaluation of the covered model for compliance with State or federal law, or before making a covered model or covered model derivative available for commercial, public or foreseeably public use, including (1) assessing whether the covered model is reasonably capable of causing or materially enabling a critical harm and (2) taking reasonable care to implement appropriate safeguards to prevent the covered model and covered model derivatives from causing or materially enabling a critical harm.

    Prevents a developer from using a covered model or covered model derivative for a purpose not exclusively related to the training or reasonable evaluation of the covered model for compliance with State or federal law, or from making a covered model or a covered model derivative available for commercial, public or foreseeably public use, if there is an unreasonable risk that the covered model or covered model derivative will cause or materially enable a critical harm. Requires a developer to annually review its safeguards and to retain a third party to conduct an independent investigation ensuring compliance with GS 143B-472.83B, as described. Provides for the developer to submit a statement of compliance, as described, to the Attorney General (AG) throughout the life of the covered model or its derivatives. Requires a developer to report each AI safety incident impacting the covered model or its derivative to the AG, as described. Applies GS 143B-472.83B to the development, use or commercial or public release of a covered model or covered model derivative for any use that is not the subject of a contract with a federal government entity, even if that covered model or covered model derivative was developed, trained or used by a federal government entity; provided, however, that GS 143B-472.83B does not apply to a product or service to the extent that compliance would strictly conflict with the terms of a contract between a federal government entity and the developer of a covered model.

    Allows the Secretary to develop and propose a tiered compliance framework, as described, to the General Assembly for potential adoption. Specifies that a developer or covered entity may remain responsible for foreseeable critical harms arising from misuse or unintended use of a covered model or derivative, irrespective of whether such misuse involved fine-tuning. Allows covered entities funded under the act that develop AI systems significantly impacting individuals' rights or access to critical services, such as employment, housing, education, or financial products, to conduct exploratory algorithmic fairness assessments to detect and mitigate potential bias. Further authorizes covered entities to voluntarily explore methods for disclosing to end-users when they are interacting with an AI system.

    Enacts GS 143B-472.83C, requiring a person that operates a computing cluster to implement written policies and procedures to do the five specified things when a customer uses computing resources that would be sufficient to train a covered model, including (1) obtaining the customer's basic identifying information and business purpose for using the cluster and (2) implementing the capability to promptly enact a full shutdown of any resources being used to train or operate a covered model under the customer's control. Requires operators of computing clusters to consider industry best practices, as described. Authorizes an operator to impose reasonable requirements on customers to prevent the collection or retention of personal information that the operator would not otherwise collect or retain, including a requirement that a corporate customer submit corporate contact information rather than information that would identify a specific individual.

    Enacts GS 143B-472.83D, giving the AG enforcement authority over the act, as described. Specifies that nothing in new Part 18A should be construed as creating a new private right of action or serving as the basis for a private right of action that would not otherwise have had a basis under any other law but for the enactment of new Part 18A. Enacts GS 143B-472.83E, preventing developers of a covered model, or a subcontractor of the developer, from engaging in three practices, including preventing an employee from disclosing, or retaliating against an employee for disclosing, information to the AG demonstrating that the developer is not in compliance with the act or that the technology poses an unreasonable risk. Allows an employee harmed by a violation to petition the court for appropriate relief. Allows the AG to publicly release any complaint if the AG concludes doing so will serve the public interest. Requires the developer to (1) provide clear notice to all employees of their rights and responsibilities under GS 143B-472.83E, (2) provide a reasonable internal process through which an employee, contractor, subcontractor or employee of a contractor or subcontractor working on a covered model or covered model derivative may anonymously disclose information to the developer if the employee believes, in good faith, that the developer has violated any provision of this chapter or any other general or special law, has made false or materially misleading statements related to its safety and security protocol, or has failed to disclose known risks to employees, (3) investigate the disclosed information, and (4) maintain its records on the matter, as described.

    Enacts GS 143B-472.83F, requiring the Secretary to submit an annual report to the NCGA by January 31 on AI, including the matters specified. Further requires the Secretary to promulgate regulations for the implementation, administration and enforcement of new Part 18A, and authorizes the Secretary to convene an advisory board to assist in that rulemaking. Requires the Secretary to make annual updates to those regulations, as described.

    Appropriates $750,000 from the General Fund to the Department of Commerce (DOC) for 2025-26 to accomplish the purposes of the act.

    Effective July 1, 2025.