Translating Tech Policy Experimentation Into Practice

Model Legislation for Tech Policy Reform

Matt Perault and Scott Babwah Brennen

In a paper published today by the Brookings Institution, we argue that policymakers should pursue experimental policy options as a way to improve the substance of tech policy reform and accelerate its pace. Regulators often presume that outcomes are certain and that we know how a particular bill will unfold in practice. The reality is that the impacts of products and policies in the tech sector are often highly uncertain, and not taking this uncertainty into account can lead to harmful unintended consequences. Regulation benefits when policymakers can make more accurate assessments of the potential benefits or drawbacks of proposed interventions. 

We propose a more curious approach to policy development, where policymakers test new governance frameworks and facilitate tech company trials of new products that address critical policy issues in the field. This experimentation should be accompanied by rigorous oversight, transparency, and data sharing that will help regulators to learn about the impacts of products and policy tools. 

To translate this theory of experimentation into practice, we propose model state legislation in four areas of technology policy: online violence, user choice in content moderation, child safety, and generative artificial intelligence (GAI). While we believe state, federal, and international technology policy would benefit from a more curious approach, we propose model state legislation given the leadership role states have recently taken in tech policymaking.

The model legislation employs three different regulatory techniques:

  • Regulatory sandboxes: The online violence and user choice bills use regulatory sandboxes to enable platforms to test new products. A regulatory sandbox tests new products while providing regulatory relief for the duration of the trial.
  • Policy experiments: The child safety bill uses a policy experiment to trial an age verification certification program. Policy experiments test new regulations in timebound, controlled settings.
  • Policy Product Trials: The GAI bill uses a new hybrid framework described in our Brookings paper – the Policy Product Trial (PPT) experiment – to test an intermediary liability regime that will enable companies to experiment with different GAI tools, like chatbots and search engine optimizations. While policy experiments test new policies on existing products, and sandboxes test new products by relaxing existing policies, the PPT model would involve a government-run test of both new products and new policies simultaneously. A PPT experiment is time-limited, involves a trial of new products under new regulations, and may also provide short-term regulatory relief where necessary.

Our goal is to use these different experimental models to enable policymakers to learn about the implications of new regulatory frameworks and new technologies and to create smarter governance regimes in the future.

A State Regulatory Sandbox to Combat Online Violence

Policymakers have long expressed concerns about harmful online content, particularly when it might spur offline violence or domestic terrorism. Platforms have faced myriad challenges in responding to these concerns, including risks related to data sharing, and criticism that their actions result in unwarranted censorship.

To address some of these challenges, policymakers could enact a four-year regulatory sandbox, where participating companies test new approaches to combat online violence. For example, companies could agree to share personal information of users who have been actioned or removed for terrorist recruitment, distribution of terrorist content, or glorification of terrorist action. Platforms could share data with each other, as well as with researchers studying online violence. Platforms could also test moderation tools to identify, remove, or deprioritize content that promotes or organizes violence or domestic terrorism. The government would agree not to enforce existing privacy, consumer protection, or antitrust laws for program activities undertaken in good faith.
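
To make the data-sharing component above more concrete, the sketch below shows one way a participating platform might depersonalize a threat-intelligence record before sharing it with another platform or with researchers, as the proposal contemplates. It is purely illustrative and not part of the model bill; the record structure, field names, and keyed-hash approach (ThreatSignal, SHARED_SALT, build_signal) are our own assumptions.

```python
# Illustrative sketch only: one hypothetical way a sandbox participant could
# depersonalize a threat-intelligence record before sharing it. All names and
# fields are assumptions for illustration, not requirements of the model bill.
import hashlib
import hmac
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# A salt agreed upon by sandbox participants so the same account hashes to the
# same pseudonym across platforms without exposing the raw identifier.
SHARED_SALT = b"example-sandbox-salt"  # placeholder value

@dataclass
class ThreatSignal:
    account_pseudonym: str  # keyed hash of the platform account ID, not the ID itself
    action_taken: str       # e.g., "removed", "suspended", "deprioritized"
    policy_violated: str    # e.g., "terrorist_recruitment"
    observed_at: str        # ISO 8601 timestamp

def pseudonymize(account_id: str) -> str:
    """Replace a raw account identifier with a keyed hash."""
    return hmac.new(SHARED_SALT, account_id.encode(), hashlib.sha256).hexdigest()

def build_signal(account_id: str, action: str, policy: str) -> dict:
    """Build a depersonalized record suitable for sharing under the sandbox."""
    signal = ThreatSignal(
        account_pseudonym=pseudonymize(account_id),
        action_taken=action,
        policy_violated=policy,
        observed_at=datetime.now(timezone.utc).isoformat(),
    )
    return asdict(signal)

# Example: a record a participant might log and later share with researchers or the SAC.
print(build_signal("user-12345", "removed", "terrorist_recruitment"))
```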

Importantly, there must be limitations on both platform and government access to these shared data. Participating platforms must agree to not use these data beyond the requirements set out in the sandbox. Unless required by law, these data would not be shared with law enforcement.

A committee, described in our model legislation as a Sandbox Audit Committee (SAC), would monitor the performance of this sandbox, including publishing annual reports that describe the benefits and harms from the trials conducted as part of it. Admitted platforms must agree to provide relevant data to the SAC so that it can assess performance and compliance.

Key elements of the state regulatory sandbox to combat online violence: 

  • Platforms may apply to the sandbox to test products and features to combat online violence, including, but not limited to, the following:
    • Platform policies and enforcement procedures to identify, remove, or deprioritize content that promotes or organizes violence or domestic terrorism. These tools may include algorithm-based approaches.
    • Threat intelligence sharing, including basic subscriber information for accounts of citizens of the state that are suspended due to violent or extremist content. Data could be shared from the government to platforms or from platforms to other platforms. Note that the Electronic Communications Privacy Act governs disclosures of data from platforms to the government.
    • Proactively sharing depersonalized data with law enforcement officials in emergency situations related to content that promotes or organizes violence or domestic terrorism.
    • Sharing depersonalized data with researchers related to content that promotes or organizes violence or domestic terrorism.
  • Applicants shall be admitted to the sandbox based on the following criteria:
    • A proposal for a product or feature that may reduce violence or domestic terrorism or aid law enforcement in the prosecution of crimes related to violence or domestic terrorism.
    • A credible strategy for minimizing risks from harmful content, censorship, privacy violations, security breaches, anticompetitive conduct, and other consumer harms. Particular emphasis should be placed on potential impacts on vulnerable populations, including people of color and rural populations.
    • A commitment to abide by the other requirements of the program, including sharing relevant data with the Sandbox Audit Committee and not using the data for any purpose unrelated to the sandbox, such as advertising.
  • Admitted applicants would receive narrowly-tailored regulatory relief from the following state laws:
    • Publisher liability, including defamation
    • Unfair and Deceptive Acts and Practices laws (UDAP)
    • Privacy and data breach laws
    • Antitrust laws 
  • Performance of the sandbox will be monitored and evaluated by a Sandbox Audit Committee, which will have responsibility for overseeing the implementation of the program and participants’ compliance with its terms. To enable this oversight, all sandbox participants must commit to best efforts to share data with the Sandbox Audit Committee (SAC).
    • The SAC shall be composed of experts in online safety, content moderation and intermediary liability, privacy, and technology. 
    • The SAC shall assess the costs and benefits of the products tested in the sandbox. In particular, the SAC shall report on benefits and harms to user safety, security, privacy, expression, competitiveness, and innovation. 
    • The SAC shall evaluate harms of tested products and features to vulnerable communities. 
    • The SAC shall request data that will enable it to assess the efficacy of the platform interventions tested in the sandbox, as well as potential social harms from these interventions (including harms to user safety, competitiveness, and innovation). The SAC shall also evaluate the impact to freedom of expression and the impact on vulnerable communities.
    • The SAC shall publish an annual report within 360 days of the enactment of this law, and every year thereafter. The report shall assess the costs and benefits of the products tested in the sandbox. It shall include an analysis of the harm of tested products to vulnerable communities, including children. The report shall also include recommendations for lawmakers on public policy that will help to combat online violence.
    • Each participant shall also publish an interim report no more than 12 months after becoming a sandbox participant and a final report no more than six months after the conclusion of participation in the sandbox.
  • The sandbox shall sunset four years after its enactment. The legislature and governor may agree to a finite extension of the sandbox prior to its sunset.

The model legislation follows:

COMBATTING ONLINE VIOLENCE SANDBOX ACT

Purpose: This proposal allows online platforms to test products and features to mitigate online violence.

Section 1. Definitions.

  1. “Applicable agency” means a Department or agency of the state that by law regulates certain types of technology related business activity in the state and persons engaged in such technology related business activity.
  2. “Applicant” means an individual or entity that is applying to participate in the Combatting Online Violence Sandbox.
  3. “Consumer” means a person who uses a platform or technology offered by a specific sandbox participant.
  4. “Department” means the Department of Commerce that is responsible for overseeing the Combatting Online Violence Sandbox.

Section 2. Application Process.

  1. The Department of Commerce creates the Combatting Online Violence Sandbox.
  2. In administering the Combatting Online Violence Sandbox, the Department:
    1. Shall consult with each applicable agency;
    2. Shall establish a program to enable an online platform operating in the state to test products and features aimed at reducing violence or domestic terrorism; and
    3. May enter into agreements with or adopt the best practices of the Department of Justice, the Federal Trade Commission, or other states that are administering similar programs.
  3. An applicant for the Combatting Online Violence Sandbox shall provide to the Department an application in a form prescribed by the Department that:
    1. Includes a nonrefundable application fee of $1000 to support the administration of the program.
    2. Demonstrates the applicant is subject to the jurisdiction of the state.
    3. Demonstrates the applicant has established a physical or virtual location that is adequately accessible to the Department, where all required records, documents, and data will be maintained.
    4. Demonstrates that the applicant has the necessary personnel, technical expertise, and plan to monitor and assess the product or feature.
    5. Contains a description of the product or feature to be tested, including statements regarding the following:
      1. How the product or feature will combat online violence, including interventions such as the following:
        1. Platform policies and enforcement procedures to identify, remove, or deprioritize content that promotes or organizes violence or domestic terrorism. These tools may include algorithm-based approaches.
        2. Threat intelligence sharing, including basic subscriber information for accounts of citizens of the state that are suspended due to violent or extremist content.
        3. Proactively sharing depersonalized data with law enforcement officials in emergency situations related to content that promotes or organizes violence or domestic terrorism, in accordance with 18 U.S.C. §2702(c)(4).
        4. Sharing depersonalized data with researchers related to content that promotes or organizes violence or domestic terrorism.
      2. What harms the product or feature may impose on consumers, including harms to safety, privacy, competitiveness, and innovation; harms to freedom of expression; and harms to vulnerable communities, including people of color and rural communities;
      3. How the applicant will mitigate these risks during its participation in the sandbox;
      4. How participating in the Combatting Online Violence Sandbox would enable a successful test of the product or feature;
      5. What data the participant will track during the test, and how the participant will mitigate any related privacy risks so as to facilitate data sharing with the Sandbox Audit Committee;
      6. How the applicant will end the test, evaluate whether the test was successful, and protect consumers if the test fails.
    6. Includes the applicant’s commitment to not use data received from other platforms as part of the Combatting Online Violence Sandbox for purposes unrelated to the sandbox.
  4. An applicant shall file a separate application for each product or feature the applicant wants to test.
  5. Before approving the application, the Department may seek any additional information from the applicant that the Department determines is necessary.
  6. Not later than 60 days after the day on which a complete application is received by the Department, the Department shall inform the applicant as to whether the application is approved for entry into the Combatting Online Violence Sandbox.
  7. The Department and an applicant may mutually agree to extend the 60-day timeline as described in Subsection (6) for the Department to determine whether an application is approved for entry into the Combatting Online Violence Sandbox.
  8. In reviewing an application under this section:
    1. The Department shall consult with, and get approval from, each applicable agency before admitting an applicant into the sandbox. The consultation with an applicable agency may include seeking information about whether:
      1. The applicable agency has previously investigated, sanctioned, or pursued legal action against the applicant; and
      2. Certain regulations should not be waived even if the applicant is accepted into the Combatting Online Violence Sandbox.
    2. The Department shall identify how the applicant’s proposed product or feature is subject to licensing or other authorization requirements outside of the Combatting Online Violence Sandbox.
  9. In reviewing an application under this section, the Department shall consider whether a competitor to the applicant is or has been a sandbox participant and weigh that as a factor in allowing the applicant to also become a Combatting Online Violence Sandbox participant.
  10. If the Department and each applicable agency approve admitting an applicant into the Combatting Online Violence Sandbox, an applicant may become a Combatting Online Violence Sandbox participant.
  11. The Department may deny any application submitted under this section, for any reason, at the Department’s discretion.
  12. If the Department denies an application submitted under this section, the Department shall provide to the applicant a written description of the reasons for the denial.

Section 3. Regulatory Relief.

  1. If the Department approves an applicant to test a product or feature in the Combatting Online Violence Sandbox, then the applicant shall be granted regulatory relief from any enforcement action under the state’s publisher liability, consumer protection, privacy laws, or antitrust laws for any product or feature that is tested in the Combatting Online Violence Sandbox.
  2. The scope of the regulatory relief shall be narrowly tailored to specific products tested in the Combatting Online Violence Sandbox. Applicants receive no regulatory relief for products, services, programs, or conduct outside of the sandbox.
  3. The Department shall provide applicable agencies with a detailed description of the regulatory relief offered to Combatting Online Violence Sandbox participants.
  4. Applicable agencies may provide recommendations to the Department on whether participation should be terminated for individual participants, or whether subsequent applications should be denied.
  5. The Department may propose potential reciprocity agreements between states that use or are proposing to use similar regulatory sandbox programs.

Section 4. Test Period.

  1. If the Department approves an application, the Combatting Online Violence Sandbox participant has 12 months after the day on which the application was approved to test the product or feature described in the participant’s application.
  2. This section does not restrict a Combatting Online Violence Sandbox participant who holds a license or other authorization in another jurisdiction from acting in accordance with that license or other authorization.
  3. By written notice, the Department may end a Combatting Online Violence Sandbox participant’s participation at any time and for any reason, including if the Department determines the participant is not operating in good faith to bring the proposed product or feature to market or share relevant data with the Sandbox Audit Committee.
  4. The Department and the Department’s employees are not liable for any business losses or the recouping of application expenses related to the Combatting Online Violence Sandbox, including for:
    1. Denying an applicant’s application to participate in the Combatting Online Violence Sandbox for any reason; or
    2. Ending a Combatting Online Violence Sandbox participant’s participation in the sandbox at any time and for any reason.
  5. No guaranty association in the state may be held liable for business losses or liabilities incurred as a result of activities undertaken by a participant in the Combatting Online Violence Sandbox.

Section 5. Consumer Transparency.

  1. Before providing a product or feature in the state as part of the Combatting Online Violence Sandbox, a participant shall disclose the following:
    1. That the product or feature is authorized to be tested pursuant to the Combatting Online Violence Sandbox;
    2. That the product or feature is undergoing testing and therefore may expose the consumer to risk;
    3. That the provider of the product or feature receives regulatory relief from enforcement under the state's consumer protection, antitrust, and privacy laws, but is otherwise not immune from other state laws, federal enforcement, enforcement from other states, or civil liability for any losses or damages caused by the product or feature.
    4. That the state does not endorse or recommend the product or feature that is being tested;
    5. That the product or feature is a temporary test that may be discontinued at the end of the testing period;
    6. The expected end date of the testing period; and
    7. That a consumer may contact the Department to file a complaint regarding the product or feature being tested and provide the Department’s contact information.
  2. The disclosures required by Subsection (1) shall be provided to a consumer in a clear and conspicuous form. A statement in a participant’s Terms of Service or in a publicly available blog is considered clear and conspicuous.

Section 6. Exiting the Combatting Online Violence Sandbox.

  1. At least 30 days before the end of the 12-month Combatting Online Violence Sandbox testing period, a participant shall:
    1. Notify the Department that the participant will exit the sandbox, or
    2. Seek an extension in accordance with the section below.
  2. Subject to Section 7, if the Department does not receive notification as required by Subsection (1), the Combatting Online Violence Sandbox testing period ends at the end of the 12-month testing period. At the conclusion of the testing period, the participant will no longer receive regulatory relief for the product or feature.

Section 7. Extension.

  1. Not later than 30 days before the end of the 12-month Combatting Online Violence Sandbox testing period, a participant may request an extension of the sandbox testing period of up to 12 months.
  2. The Department shall grant or deny a request for an extension in accordance with Subsection (1) by the end of the 12-month Combatting Online Violence Sandbox testing period.

Section 8. Reporting Requirements and the Sandbox Audit Committee.

  1. A Combatting Online Violence Sandbox participant shall retain records, documents, and data produced in the ordinary course of business regarding a product or feature tested in the sandbox.
  2. The Department shall establish a Sandbox Audit Committee (SAC) to monitor performance of the sandbox, including the impact of the product or feature on the promotion of violence and domestic terrorism.
  3. The SAC shall be composed of 11 members, including at least two representatives each from the legislature, executive branch agencies, academia, industry, and technology nonprofit organizations.
  4. A Combatting Online Violence Sandbox participant shall submit an interim report to the SAC within 12 months of starting the sandbox. Participants shall also submit a final report to the SAC within six months of concluding the sandbox. Each report shall include the following information:
    1. the impact of the participant’s tested products and features on online violence;
    2. the impact of the participant’s tested products and features on safety, privacy, competitiveness, innovation, and freedom of expression. This impact assessment should include an assessment of the impact on vulnerable communities, including people of color and rural communities;
    3. the effectiveness of the Combatting Online Violence Sandbox, including the feasibility of accessing and using the sandbox; and
    4. recommendations for lawmakers on public policy that will combat online violence.
  5. The SAC may request additional records, documents, and data from a Combatting Online Violence Sandbox participant that will not be disclosed publicly, and, upon the SAC’s request, a participant shall make such records, documents, and data available for inspection by the Department.
  6. Within 10 days of receiving a request for records, documents, and data, participants may file a written objection. If the SAC grants the objection, the participant is not obligated to provide the requested data. Objections will be granted only if sharing the data would pose a significant threat to privacy or security. An objection should reference the specific requested data that would pose a significant threat to privacy and security.
  7. Each year after the establishment of the Combatting Online Violence Sandbox, the SAC shall provide a written annual report to the General Assembly that includes the following information:
    1. each product or feature tested by each Combatting Online Violence Sandbox participant;
    2. the impact of the participants’ tested products and features on online violence;
    3. the impact of the participants’ products and features on harms to safety, privacy, competitiveness, and innovation; harms to freedom of expression; and impact on vulnerable communities, including people of color and rural communities;
    4. the effectiveness of the Combatting Online Violence Sandbox; and
    5. recommendations for lawmakers on public policy that will combat online violence.

Section 9. Sunset.

The Combatting Online Violence Sandbox shall expire four years after it is enacted, unless an extension of the sandbox is enacted for a finite period of time.

A State Regulatory Sandbox on User Choice in Content Moderation

Governments should consider creating regulatory sandboxes that enable platforms to trial tools that give users more autonomy in selecting the content they see online. Because platforms often have limited moderation options, users may be denied the choice to see content, such as nudity and spam, that is barred by a platform’s terms even if it is not barred by law. While these limitations frustrate politicians on both sides of the aisle, Republicans have been particularly vocal about the perceived impact of moderation on conservative speech.

We propose a four-year state regulatory sandbox in which admitted platforms can offer users more granular tools for selecting the content they want to see. The regulatory sandbox would provide regulatory relief for any products tested in the sandbox, so long as the participating company follows all sandbox requirements in good faith.  A user could choose to receive more political content or, with more granular controls, more liberal or conservative content.
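
As a purely illustrative sketch of what a more granular user control might look like in code, the snippet below blends a user's stated "show me more/less of this" preferences into a feed-ranking score. It is not drawn from the model legislation; the category labels, preference scale, and function names are hypothetical.

```python
# Illustrative sketch only: a hypothetical user-controlled re-ranking step of the
# kind a sandbox participant might test. Category labels, the preference scale,
# and function names are assumptions for illustration, not sandbox requirements.
from typing import Dict, List, Tuple

# User-chosen preferences on a -1.0 (show much less) to +1.0 (show much more) scale.
UserPrefs = Dict[str, float]

def rerank(
    items: List[Tuple[str, float, str]],  # (item_id, base_score, category)
    prefs: UserPrefs,
    strength: float = 0.5,                # how strongly preferences shift ranking
) -> List[Tuple[str, float, str]]:
    """Re-score feed items by blending the platform's base score with the
    user's stated preference for each item's category."""
    def adjusted(item: Tuple[str, float, str]) -> float:
        _, base_score, category = item
        preference = prefs.get(category, 0.0)
        return base_score * (1.0 + strength * preference)

    return sorted(items, key=adjusted, reverse=True)

# Example: a user asks for more political content and less spam-like content.
feed = [
    ("post-1", 0.80, "sports"),
    ("post-2", 0.75, "politics"),
    ("post-3", 0.70, "spam_adjacent"),
]
print(rerank(feed, {"politics": 0.8, "spam_adjacent": -0.9}))
```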

Because of the potential for increased distribution of harmful content, all applicants should be required to submit risk assessments and credible mitigation plans for potential risks consumers face in using these tools. Particular emphasis should be placed on the equity impacts of these product features, such as the impact on women, people of color, and rural communities.

Like the other regulatory sandboxes described here, this program would establish an audit committee to conduct oversight of the sandbox, including gathering relevant data from participants and publishing annual reports on the products tested and the performance of the sandbox to date.

Key elements of the state regulatory sandbox on user choice in content moderation:

  • Platforms may apply to the sandbox to test products and features related to user choice in content moderation, including, but not limited to, the following:
    • Content moderation tools to provide users with options to restrict access to content or downrank content.
    • Content moderation tools to provide users with options to increase the prevalence of content that is not otherwise unlawful.
    • Algorithmic content prioritization that increases or decreases visibility of content based on signals of user preferences.
    • Sharing depersonalized data with researchers related to user choice of content moderation preferences. 
  • Applicants shall be admitted to the sandbox based on the following criteria:
    • A proposal for a product or feature that provides users with choices to restrict access to content, downrank content, or increase the prevalence of content that is not otherwise unlawful.
    • A credible strategy for minimizing risks related to harmful content, polarization, safety, security, anticompetitive conduct, and other consumer harms. Particular emphasis should be placed on potential harms to vulnerable populations.
    • A commitment to abide by the other requirements of the program, including sharing relevant data with the SAC and publishing interim and final reports.
  • Admitted applicants would receive narrowly-tailored regulatory relief from the following laws of the state:
    • Unfair and Deceptive Acts and Practices laws (UDAP)
    • Privacy and data breach laws
    • Antitrust laws
    • Child safety laws
    • Content moderation and publisher liability laws
  • Performance of the sandbox will be monitored and evaluated by a Sandbox Audit Committee (SAC), which will have responsibility for overseeing the implementation of the program and participants’ compliance with its terms. To enable this oversight, all participants must commit to best efforts to share data with the SAC.
    • The SAC shall be composed of experts in content moderation and intermediary liability, online expression, online safety, and technology.
    • The SAC shall request data that will enable it to assess the efficacy of the platform interventions tested in the sandbox, as well as potential social harms from these interventions, including harms to user safety, competitiveness, and innovation. The SAC shall also evaluate the impact to freedom of expression and the impact on vulnerable communities.
    • Participants may decline to share requested data where necessary to protect privacy or to comply with other laws.
    • The SAC shall publish an annual report within 360 days of the enactment of this law, and every year thereafter. The report shall assess the efficacy of the platforms’ interventions, social harms from the interventions, and the performance of the sandbox itself.
    • Each participant shall also publish an interim report no more than 6 months after becoming a sandbox participant and a final report no more than 6 months after the conclusion of participation in the sandbox.
  • The sandbox shall sunset four years after its enactment. The legislature and governor may agree to a finite extension of the sandbox prior to its sunset. 

The model legislation follows:

CHOOSE YOUR CONTENT MODERATION ACT

Purpose: This proposal allows online platforms to test products and features that expand user choice in content moderation.

Section 1. Definitions.

  1. “Applicable agency” means a Department or agency of the state that by law regulates certain types of technology related business activity in the state and persons engaged in such technology related business activity.
  2. “Applicant” means an individual or entity that is applying to participate in the Choose Your Content Moderation Sandbox.
  3. “Consumer” means a person who uses a platform or technology offered by a specific sandbox participant.
  4. “Department” means the Department of Commerce that is responsible for overseeing the Choose Your Content Moderation Sandbox.

Section 2. Application Process.

  1. The Department of Commerce creates the Choose Your Content Moderation Sandbox.
  2. In administering the Choose Your Content Moderation Sandbox, the Department:
    1. Shall consult with each applicable agency;
    2. Shall establish a program to enable an online platform operating in the state to test products and features related to user choice in content moderation; and
    3. May enter into agreements with or adopt the best practices of the Department of Justice, the Federal Trade Commission, or other states that are administering similar programs.
  3. An applicant for the Choose Your Content Moderation Sandbox shall provide to the Department an application in a form prescribed by the Department that:
    1. Includes a nonrefundable application fee of $1000 to support the administration of the program.
    2. Demonstrates the applicant is subject to the jurisdiction of the state.
    3. Demonstrates the applicant has established a physical or virtual location that is adequately accessible to the Department, where all required records, documents, and data will be maintained.
    4. Demonstrates that the applicant has the necessary personnel, technical expertise, and plan to monitor and assess the product or feature.
    5. Contains a description of the user choice product or feature to be tested, including statements regarding the following:
      1. How the product or feature will promote user choice in content moderation, including interventions such as the following:
        1. Content moderation tools to provide users with options to restrict access to content or downrank content.
        2. Content moderation tools to provide users with options to increase the prevalence of content that is not otherwise unlawful.
        3. Algorithmic content prioritization that increases or decreases visibility of content based on signals of user preferences.
        4. Sharing depersonalized data with researchers related to user choice of content moderation preferences.
      2. What harms the product or feature may impose on consumers, including harms to safety, privacy, competitiveness, and innovation; harms to freedom of expression; and harms to vulnerable communities, including people of color, women, and children;
      3. How the applicant will mitigate these risks during the sandbox;
      4. How participating in the Choose Your Content Moderation Sandbox would enable a successful test of the user choice product or feature;
      5. What data the participant will track during the test, and how the participant will mitigate any related privacy risks so as to facilitate data sharing with the Sandbox Audit Committee;
      6. How the applicant will end the test, evaluate whether the test was successful, and protect consumers if the test fails.
  4. An applicant shall file a separate application for each product or feature the applicant wants to test.
  5. Before approving the application, the Department may seek any additional information from the applicant that the Department determines is necessary.
  6. Not later than 60 days after the day on which a complete application is received by the Department, the Department shall inform the applicant as to whether the application is approved for entry into the Choose Your Content Moderation Sandbox.
  7. The Department and an applicant may mutually agree to extend the 60-day timeline as described in Subsection (6) for the Department to determine whether an application is approved for entry into the Choose Your Content Moderation Sandbox.
  8. In reviewing an application under this section:
    1. The Department shall consult with, and get approval from, each applicable agency before admitting an applicant into the sandbox. The consultation with an applicable agency may include seeking information about whether:
      1. The applicable agency has previously investigated, sanctioned, or pursued legal action against the applicant; and
      2. Certain regulations should not be waived even if the applicant is accepted into the Choose Your Content Moderation Sandbox.
    2. The Department shall identify how the applicant’s proposed product or feature is subject to licensing or other authorization requirements outside of the Choose Your Content Moderation Sandbox.
  9. In reviewing an application under this section, the Department shall consider whether a competitor to the applicant is or has been a sandbox participant and weigh that as a factor in allowing the applicant to also become a Choose Your Content Moderation Sandbox participant.
  10. If the Department and each applicable agency approve admitting an applicant into the Choose Your Content Moderation Sandbox, an applicant may become a Choose Your Content Moderation Sandbox participant.
  11. The Department may deny any application submitted under this section, for any reason, at the Department’s discretion.
  12. If the Department denies an application submitted under this section, the Department shall provide to the applicant a written description of the reasons for the denial.

Section 3. Regulatory Relief.

  1. If the Department approves an applicant to test a user choice product or feature in the Choose Your Content Moderation Sandbox, then the applicant shall be granted regulatory relief from any enforcement action under the state’s antitrust, consumer protection, content moderation or publisher liability, or privacy laws for any product or feature that is tested in the Choose Your Content Moderation Sandbox.
  2. The scope of the regulatory relief shall be narrowly tailored to specific products tested in the Choose Your Content Moderation Sandbox. Applicants receive no regulatory relief for products, services, programs, or conduct outside of the sandbox.
  3. The Department shall provide applicable agencies with a detailed description of the regulatory relief offered to Choose Your Content Moderation Sandbox participants.
  4. Applicable agencies may provide recommendations to the Department on whether participation should be terminated for individual participants, or whether subsequent applications should be denied.
  5. The Department may propose potential reciprocity agreements between states that use or are proposing to use similar regulatory sandbox programs.

Section 4. Test Period.

  1. If the Department approves an application, the Choose Your Content Moderation Sandbox participant has 12 months after the day on which the application was approved to test the product or feature described in the participant’s application.
  2. This section does not restrict a Choose Your Content Moderation Sandbox participant who holds a license or other authorization in another jurisdiction from acting in accordance with that license or other authorization.
  3. By written notice, the Department may end a Choose Your Content Moderation Sandbox participant’s participation at any time and for any reason, including if the Department determines the participant is not operating in good faith to bring the proposed product or feature to market or share relevant data with the Sandbox Audit Committee.
  4. The Department and the Department’s employees are not liable for any business losses or the recouping of application expenses related to the Choose Your Content Moderation Sandbox, including for:
    1. Denying an applicant’s application to participate in the Choose Your Content Moderation Sandbox for any reason; or
    2. Ending a Choose Your Content Moderation Sandbox participant’s participation in the sandbox at any time and for any reason.
  5. No guaranty association in the state may be held liable for business losses or liabilities incurred as a result of activities undertaken by a participant in the Choose Your Content Moderation Sandbox.

Section 5. Consumer Transparency.

  1. Before providing a product or feature in the state as part of the Choose Your Content Moderation Sandbox, a participant shall disclose the following:
    1. That the product or feature is authorized to be tested pursuant to the Choose Your Content Moderation Sandbox;
    2. That the product or feature is undergoing testing and therefore may expose the consumer to risk;
    3. That the provider of the product or feature receives regulatory relief from enforcement under the state's consumer protection, antitrust, and privacy laws, but is otherwise not immune from other state laws, federal enforcement, enforcement from other states, or civil liability for any losses or damages caused by the product or feature.
    4. That the state does not endorse or recommend the product or feature that is being tested;
    5. That the product or feature is a temporary test that may be discontinued at the end of the testing period;
    6. The expected end date of the testing period; and
    7. That a consumer may contact the Department to file a complaint regarding the product or feature being tested and provide the Department’s contact information.
  2. The disclosures required by Subsection (1) shall be provided to a consumer in a clear and conspicuous form. A statement in a participant’s Terms of Service or in a publicly available blog is considered clear and conspicuous.

Section 6. Exiting the Choose Your Content Moderation Sandbox.

  1. At least 30 days before the end of the 12-month Choose Your Content Moderation Sandbox testing period, a participant shall:
    1. Notify the Department that the participant will exit the sandbox, or
    2. Seek an extension in accordance with the section below.
  2. Subject to Section 7, if the Department does not receive notification as required by Subsection (1), the Choose Your Content Moderation Sandbox testing period ends at the end of the 12-month testing period. At the conclusion of the testing period, the participant will no longer receive regulatory relief for the product or feature.

Section 7. Extension.

  1. Not later than 30 days before the end of the 12-month Choose Your Content Moderation Sandbox testing period, a participant may request an extension of the sandbox testing period of up to 12 months.
  2. The Department shall grant or deny a request for an extension in accordance with Subsection (1) by the end of the 12-month Choose Your Content Moderation Sandbox testing period.

Section 8. Reporting Requirements and the Sandbox Audit Committee.

  1. A Choose Your Content Moderation Sandbox participant shall retain records, documents, and data produced in the ordinary course of business regarding a product or feature tested in the sandbox.
  2. The Department shall establish a Sandbox Audit Committee (SAC) to monitor performance of the sandbox.
  3. The SAC shall be composed of 11 members, including at least two representatives each from the legislature, executive branch agencies, academia, industry, and technology nonprofit organizations.
  4. A Choose Your Content Moderation Sandbox participant shall submit an interim report to the SAC within 12 months of starting the sandbox. Participants shall also submit a final report to the SAC within six months of concluding the sandbox. Each report shall include the following information:
    1. the impact of the participants’ tested products and features on user choice in content moderation;
    2. the impact of the participant’s tested products and features on safety, privacy, competitiveness, innovation, and freedom of expression. This impact assessment should include an assessment of the impact on vulnerable communities, including people of color and rural communities;
    3. the effectiveness of the Choose Your Content Moderation Sandbox, including the feasibility of accessing and using the sandbox; and
    4. recommendations for lawmakers on public policy that will improve user choice in content moderation.
  5. The SAC may request additional records, documents, and data from a Choose Your Content Moderation Sandbox participant that will not be disclosed publicly, and, upon the SAC’s request, a participant shall make such records, documents, and data available for inspection by the Department.
  6. Within 10 days of receiving a request for records, documents, and data, participants may file a written objection. If the SAC grants the objection, the participant is not obligated to provide the requested data. Objections will be granted only if sharing the data would pose a significant threat to privacy or security. An objection should reference the specific requested data that would pose a significant threat to privacy and security.
  7. Each year after the establishment of the Choose Your Content Moderation Sandbox, the SAC shall provide a written annual report to the General Assembly that includes the following information:
    1. each product or feature tested by each Choose Your Content Moderation Sandbox participant;
    2. the impact of the participants’ tested products and features on user choice in content moderation;
    3. the impact of those products and features on social harms related to these interventions, including harms to user safety, privacy, competitiveness, and innovation; harms to freedom of expression; and impact on vulnerable communities, including people of color and rural populations;
    4. the effectiveness of the Choose Your Content Moderation Sandbox; and
    5. recommendations for lawmakers on public policy that will improve user choice in content moderation.

Section 9. Sunset.

The Choose Your Content Moderation Sandbox shall expire four years after it is enacted, unless an extension of the sandbox is enacted for a finite period of time.

A Policy Experiment on Age Assurance Certification 

There is broad consensus that tech platforms should avoid harm to children and that federal and state public policy should establish strong protections for children online. Despite this consensus, experts disagree on the best strategies for making the internet safer and healthier for children. Many of these debates are rooted in disagreements about whether child-specific policy proposals will help or hurt the children they intend to protect.

As more companies choose or are compelled to revise their age verification or assurance processes, they face significant challenges. Companies must either balance the tradeoffs inherent in age assurance themselves or choose a vendor to enact new assurance processes. Not only is there little guidance for platforms on how best to implement age assurance programs, but it can be difficult to ensure that vendors are employing industry best practices.

A voluntary government-run certification program for both first- and third-party assurance programs could help address some of these challenges. For this policy experiment, a working group would create and then test a set of certification standards that ensure companies are minimizing privacy concerns, storing and processing data securely, and addressing equity concerns. The working group could be situated within the FTC or as part of a state consumer protection agency. Receiving a certification could signal that vendors are using safe and reliable practices. Working to follow the certification guidelines could also help a company navigate a difficult set of tradeoffs.
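
As a purely illustrative sketch of how certification criteria like these could be operationalized, the snippet below encodes a hypothetical rubric as structured data and checks a vendor's assessment against it. The specific criteria, thresholds, and names (AssuranceAssessment, MAX_FALSE_REJECTION_RATE) are assumptions for illustration, not the working group's actual standards.

```python
# Illustrative sketch only: a hypothetical certification rubric for an age
# assurance system. The criteria and thresholds are examples, not the working
# group's actual standards.
from dataclasses import dataclass

@dataclass
class AssuranceAssessment:
    false_rejection_rate: float        # share of eligible users wrongly blocked
    retains_raw_identifiers: bool      # whether raw IDs or biometrics are stored
    data_encrypted_at_rest: bool
    audited_for_demographic_bias: bool

# Hypothetical threshold a certification rubric might set.
MAX_FALSE_REJECTION_RATE = 0.05

def meets_certification_criteria(a: AssuranceAssessment) -> bool:
    """Return True if the assessed system satisfies every criterion in this
    illustrative rubric (accuracy, privacy, security, and equity checks)."""
    return (
        a.false_rejection_rate <= MAX_FALSE_REJECTION_RATE
        and not a.retains_raw_identifiers
        and a.data_encrypted_at_rest
        and a.audited_for_demographic_bias
    )

# Example assessment of a hypothetical vendor.
vendor = AssuranceAssessment(0.03, False, True, True)
print(meets_certification_criteria(vendor))  # True
```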

A policy experiment would allow the government to develop and test the certification program before deploying it more widely. The experiment would also supply useful data about age assurance operations, best practices, and compliance challenges.

Key elements of the state policy experiment:

  • A Working Group of the state Department of Commerce will establish an experimental voluntary certification program for both first- and third-party age assurance systems.
  • Before accepting applications for certification, the Working Group will publish requirements and guidelines for certification.
  • When they apply to participate in the experiment, platforms or vendors will submit:
    • A detailed description of the age assurance product or program.
    • A credible strategy for minimizing risks related to privacy violations and user surveillance, expression, anticompetitive conduct, and consumer harm, as well as minimizing risks to children. Particular emphasis should be placed on potential harms to children of color, children of families that fall below the poverty line, children with limited or no internet access, and children who live in rural locations.
    • A commitment to abide by the other requirements of the program, including sharing relevant data with the Working Group and publishing interim and final reports.
  • If denied certification, applicants will receive detailed feedback about how to attain certification and may resubmit after 30 days.
  • Performance of the experiment will be monitored and evaluated by the Working Group, which will have responsibility for overseeing the implementation of the program and participants’ compliance with its terms. To enable this oversight, all participants must commit to best efforts to share data with the Working Group.
    • The Working Group shall be composed of experts in online safety, mental health, children’s rights, content moderation, privacy, and technology.
    • The Working Group may request anonymized data that enables it to make this assessment. Participants may decline to share requested data where necessary to protect privacy or to comply with other laws.
    • The Working Group shall publish an annual report within 360 days of the enactment of this law, and every year thereafter. The report shall assess the successes and limitations of the certification program, and the efficacy of the platforms’ interventions in establishing users’ age, minimizing harm to children, and maximizing benefits to children. The report shall also include recommendations for lawmakers on public policy that will improve age assurance and age-appropriate design processes.
  • The experiment shall sunset four years after its enactment. The legislature and governor may agree to a finite extension of the experiment prior to its sunset.

The model legislation follows:

AGE ASSURANCE CERTIFICATION EXPERIMENT ACT

Purpose: This proposal establishes a temporary experimental certification program for first- and third-party age assurance providers.

Section 1. Definitions.

  1. “Applicable agency” means a Department or agency of the state that by law regulates certain types of technology related business activity in the state and persons engaged in such technology related business activity.
  2. “Applicant” means an individual or entity that is applying to participate in the Age Assurance Certification Experiment.
  3. “Consumer” means a person who uses a platform or technology offered by a specific experiment participant.
  4. “Working Group” means the Working Group of the state trade or commerce Department that is responsible for overseeing the Age Assurance Certification Experiment Program.

Section 2. Age Assurance Experiment Program.

  1. This act establishes a Working Group of the state applicable agency that will create the Age Assurance Certification Experiment.
  2. The Working Group shall be composed of 11 members, including at least two representatives each from the legislature, executive branch agencies, academia, industry, and technology nonprofit organizations. The Working Group shall include expertise in the following fields: online safety, mental health, children’s rights, content moderation, privacy, and technology.
  3. In administering the Age Assurance Certification Experiment, the Working Group:
    1. Shall consult with each applicable state agency;
    2. Shall establish a program to assess and rate age assurance systems according to a standardized set of criteria;
    3. Shall establish, update, and publish assessment criteria based on industry best practices and academic research. Assessment criteria should include accuracy metrics, privacy protections, data security, and equity and discrimination considerations; and
    4. May enter into agreements with or adopt the best practices of the Department of Justice, the Federal Trade Commission, or other states that are administering similar programs.

Section 3. Application Process.

  1. An applicant for the Age Assurance Certification Experiment will provide to the Working Group an application in a form prescribed by the Working Group that:
    1. Includes a nonrefundable application fee of $1000 to support the administration of the experiment.
    2. Demonstrates the applicant is subject to the jurisdiction of the state.
    3. Demonstrates the applicant has established a physical or virtual location that is adequately accessible to the Working Group, where all required records, documents, and data will be maintained.
    4. Contains a description of the age assurance product, service, or program to be assessed, including statements regarding the following:
      1. The age assurance techniques that will be tested, such as facial recognition, analysis of hard identifiers, or certification by third parties.
      2. Risks those techniques may impose on children or adults, including harms to safety, privacy, competitiveness, and innovation; harms to freedom of expression; and harms to vulnerable communities, including people of color, children of families that fall below the poverty line, children with limited or no internet access, and children who live in rural locations.
      3. Methods of mitigating or reducing those risks, including privacy, data security, or equity protections. Applicants should also specify relevant corporate policies and enforcement procedures that help to mitigate those risks.
      4. Data protection and handling processes.
      5. What data the participant will track during the experiment, and how the participant will mitigate any related privacy risks so as to facilitate data sharing with the Working Group.
      6. How the applicant will end the experiment, evaluate whether the experiment was successful, and mitigate consumer harms if the test fails.
  2. Before approving the application, the Working Group may seek any additional information from the applicant that the Working Group determines is necessary.
  3. Not later than 60 days after the day on which a complete application is received by the Working Group, the Working Group shall inform the applicant as to whether the application is approved for entry into the Age Assurance Certification Experiment and whether the product, service, or program is granted certification.
    1. Participants may appeal the decision within 14 days, only on grounds of factual error.
    2. Participants may resubmit for certification 30 days after denial. The resubmission should address the issues raised in the denial and provide an operational plan for implementing revisions.
  4. The Working Group and an applicant may mutually agree to extend the 60-day timeline as described in Subsection (3) for the Working Group to determine whether an application is approved for entry into the Age Assurance Certification Experiment.
  5. In reviewing an application under this section:
    1. The Working Group shall consult with, and get approval from, each applicable agency before admitting an applicant into the Experiment. The consultation with an applicable agency may include seeking information about whether the applicable agency has previously investigated, sanctioned, or pursued legal action against the applicant.
    2. The Working Group shall identify how the applicant’s age assurance product, service, or program is subject to licensing or other authorization requirements outside of the Age Assurance Certification Experiment.
  6. In reviewing an application under this section, the Working Group shall consider whether a competitor to the applicant is or has been an experiment participant and weigh that as a factor in allowing the applicant to also become an Age Assurance Certification Experiment Participant.
  7. If the Working Group and each applicable agency approve admitting an applicant into the Age Assurance Certification Experiment, an applicant may become an Age Assurance Certification Experiment Participant.
  8. The Working Group may deny any application submitted under this section, for any reason, at the Working Group’s discretion.
  9. If the Working Group denies an application submitted under this section, the Working Group shall provide to the applicant a written description of the reasons for denying the applicant admission to the Age Assurance Certification Experiment.
  10. By written notice, the Working Group may end an Age Assurance Certification Experiment participant’s participation in the Experiment at any time and for any reason, including if the Working Group determines the participant is not operating in good faith to bring an age assurance product, service, or program to market or to share relevant data with the Working Group. The Working Group may also end a participant’s participation if it determines that the test is causing harm to children.
  11. The Working Group and the Working Group’s employees are not liable for any business losses or the recouping of application expenses related to the Age Assurance Certification Experiment, including for:
    1. Denying an applicant’s application to participate in the Age Assurance Certification Experiment for any reason; or
    2. Ending an Age Assurance Certification Experiment participant’s participation in the experiment at any time and for any reason.
  12. No guaranty association in the state may be held liable for business losses or liabilities incurred as a result of activities undertaken by a participant in the Age Assurance Certification Experiment.

Section 4. Consumer Protection.

  1. Before providing an age assurance product, service, or program in the state, an Experiment participant shall disclose the following:
    1. That the age assurance product, service, or program is authorized pursuant to the Age Assurance Certification Experiment;
    2. That the age assurance product, service, or program is undergoing testing and therefore may expose the consumer to risk;
    3. That the age assurance product, service, or program is a temporary program that may be discontinued at the end of the testing period;
    4. The expected end date of the experimental period; and
    5. That a consumer may contact the Working Group to file a complaint regarding the age assurance product or service being tested and provide the Working Group’s contact information.
  2. The disclosures required by Subsection (1) shall be provided to a consumer in a clear and conspicuous form, and the participant shall make a good faith effort to provide these disclosures to the parents of minor users. A statement in a participant’s Terms of Service or in a publicly available blog is considered clear and conspicuous.

Section 5. Exiting or Renewing the Age Assurance Certification Experiment.

  1. At least 30 days before the end of the Age Assurance Certification Experiment period, a participant may:
    1. Notify the Working Group that the experiment participant will exit the experiment, or
    2. Seek a renewal of the program testing period.

Section 6. Reporting Requirements and the Experiment Working Group.

  1. An Age Assurance Certification Experiment participant shall retain records, documents, and data produced in the ordinary course of business regarding an age assurance product, service, or program tested in the experiment.
  2. An Age Assurance Certification Experiment participant shall submit an interim report to the Working Group within 12 months of starting the experiment. Participants shall also submit a final report to the Working Group within six months of concluding the experiment. The reports shall include data on user or parent complaints, data on the efficacy of age assurance processes, and information about the impact of the test on the welfare of children. The reports shall also include an assessment of the Working Group’s admittance standards, including recommendations for revisions.
  3. The Working Group may request additional records, documents, and data from an Age Assurance Certification Experiment participant that will not be disclosed publicly, and, upon the Working Group’s request, a participant shall make such records, documents, and data available for inspection by the Working Group.
  4. Within 10 days of receiving a request for records, documents, and data, participants may file a written objection. If the Working Group grants the objection, the participant is not obligated to provide the requested data. Objections will be granted only if sharing the data would pose a significant threat to privacy or security. An objection should identify the specific requested data that would pose a significant threat to privacy or security.
  5. By October 1 each year, the Working Group shall provide a written annual report to the General Assembly that includes the following information:
    1. each product, service, or program tested by each Age Assurance Certification Experiment participant;
    2. the impact of the participants’ age assurance products, services, and programs on children’s welfare, including harms to user safety, privacy, parental involvement in the lives of their children, competitiveness, innovation, freedom of expression, and vulnerable communities, including children of color, children of families that fall below the poverty line, children with limited or no internet access, and children who live in rural locations;
    3. the effectiveness of the Age Assurance Certification Experiment program; and
    4. recommendations for lawmakers on public policy that will improve age assurance design processes.

Section 7. Sunset.

The Age Assurance Certification Experiment shall expire four years after it is enacted, unless an extension of the experiment is enacted for a finite period of time.

A Policy Product Trial (PPT) Experiment on Liability for Generative Artificial Intelligence

    Generative artificial intelligence (GAI) has captivated public, press, and policymaker attention. There has been rampant speculation about the potential harms of the technology, as well as speculation about who is liable when GAI plays a role in generating unlawful speech. GAI technologies are rapidly gaining users and revenue, but on questions of liability, the law remains unclear.

    This legal landscape provides a ripe environment for exploring whether a new intermediary liability regime for GAI could strike a balance between empowering companies to innovate and protecting consumers and society from harmful uses of the technology.

    A policy product trial (PPT) experiment to provide limited intermediary liability protections for GAI technologies could help shed light on these issues. In this test, a GAI platform would receive Section 230-style protections for content that its model produces in response to a user prompt, unless a plaintiff could show that the platform was solely responsible for creating the content.
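To make the proposed rule concrete, the sketch below expresses its decision logic in code. It is only an illustration of the test described above, not statutory text; the dataclass, its fields, and the function name are hypothetical, and it assumes that "solely responsible" is a showing the plaintiff must make.

```python
from dataclasses import dataclass

@dataclass
class GAIContentClaim:
    """Hypothetical summary of a claim over GAI-produced content."""
    platform_is_experiment_participant: bool    # platform was admitted to the PPT experiment
    produced_in_response_to_user_prompt: bool   # the model produced the content in response to a user prompt
    platform_solely_responsible: bool           # plaintiff has shown the platform alone created the content

def protection_applies(claim: GAIContentClaim) -> bool:
    """Section 230-style protection under the proposed PPT experiment (illustrative only).

    Protection attaches when an experiment participant's model produced the
    content in response to a user prompt, unless the plaintiff shows the
    platform was solely responsible for creating it.
    """
    if not claim.platform_is_experiment_participant:
        return False  # outside the experiment, ordinary state law applies
    if not claim.produced_in_response_to_user_prompt:
        return False
    return not claim.platform_solely_responsible
```

In litigation, the "solely responsible" question would of course be contested on the facts; the sketch simply shows where that showing sits in the analysis.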

    Companies participating in this PPT experiment must agree to oversight by an audit committee and must commit to release relevant data to that committee. Participants would produce interim and final reports for the committee.

    The committee would also publish reports. In our other proposed experiments, the audit committee publishes annual reports on the performance of the experiment. But because of the sensitivities surrounding GAI technology, we suggest here that the committee publish two reports each year so as to more closely monitor and report on potential harms.

    These reports would include a review of specific costs and benefits associated with the tests, including their impacts on vulnerable populations. Reports should also include recommendations for lawmakers on intermediary liability frameworks that will maximize the benefits and minimize the risks of GAI technologies.

    An experiment in this area could also shed light on how to conduct audits of GAI technologies. To capitalize on that opportunity, the committee’s reports should identify any challenges the committee faced in carrying out its auditing responsibilities, the strategies it used to address those challenges, and further recommendations for conducting GAI audits.

    Key elements of the state policy product trial (PPT) experiment on liability for generative artificial intelligence

    • Platforms may apply to the experiment to test generative artificial intelligence (GAI) products and features, including but not limited to, the following:
      • Chatbots
      • Search engine optimizations
      • Content moderation tools
    • Applicants shall be admitted to the experiment based on the following criteria:
      • A proposal for a GAI product or feature.
      • A credible strategy for minimizing risks from harmful content, censorship, privacy violations, security breaches, anticompetitive conduct, and other consumer harms. Particular emphasis should be placed on potential harms to vulnerable populations, including people of color and rural populations.
      • A commitment to abide by the other requirements of the program, including sharing relevant data with the PPT Audit Committee.
    • Admitted applicants would receive narrowly-tailored regulatory relief from the following state laws:
      • Publisher liability, including defamation
      • Unfair and Deceptive Acts and Practices laws (UDAP)
      • Privacy and data breach laws
      • Antitrust laws
    • Performance of the experiment will be monitored and evaluated by a PPT Audit Committee, which will have responsibility for overseeing the implementation of the program and participants’ compliance with its terms. To enable this oversight, all experiment participants must commit to best efforts to share data with the PPT Audit Committee (PAC).
      • The PAC shall be composed of experts in privacy, intermediary liability, intellectual property, online safety, content moderation, and innovation.
      • The PAC shall assess the costs and benefits of the products tested in the experiment. In particular, the PAC shall report on benefits and harms to user safety, security, privacy, expression, competitiveness, and innovation.
      • The PAC shall evaluate harms of tested products and features to vulnerable communities.
      • The PAC may request anonymized data that enables it to make this assessment; a minimal data-preparation sketch appears after this list. Participants may decline to share requested data where necessary to protect privacy or proprietary technologies, or to comply with other laws.
      • The PAC shall publish an annual report within 360 days of the enactment of this law, and every year thereafter. The report shall assess the costs and benefits of the products tested in the experiment. It shall include an analysis of the harm of tested products to vulnerable communities, including children. The report shall also include recommendations for lawmakers on public policy that will maximize the benefits and minimize the costs of generative AI technologies in the future.
      • Each participant shall also publish an interim report no more than 6 months after becoming an experiment participant and a final report no more than 6 months after the conclusion of participation in the experiment.
    • The experiment shall sunset four years after its enactment. The legislature and governor may agree to a finite extension of the experiment prior to its sunset.
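As a rough illustration of the data sharing the PAC bullet above contemplates, the sketch below drops direct identifiers and replaces raw user IDs with salted hashes before records leave a participant. This is one possible preparation step, not a requirement of the model legislation, and salted hashing is pseudonymization rather than full anonymization; the field names, environment variable, and sample record are assumptions.

```python
import hashlib
import os

# Hypothetical per-participant salt; in practice it would be managed as a secret.
SALT = os.environ.get("PAC_SHARING_SALT", "example-salt")

# Fields treated as direct identifiers in this illustration.
DIRECT_IDENTIFIERS = {"name", "email", "ip_address", "phone"}

def pseudonymize_user_id(user_id: str) -> str:
    """Replace a raw user ID with a salted hash so records can be linked without exposing identity."""
    return hashlib.sha256((SALT + user_id).encode("utf-8")).hexdigest()[:16]

def prepare_record_for_pac(record: dict) -> dict:
    """Drop direct identifiers and pseudonymize the user ID before sharing with the PAC."""
    shared = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    if "user_id" in shared:
        shared["user_id"] = pseudonymize_user_id(str(shared["user_id"]))
    return shared

# Example: a hypothetical complaint record produced during the experiment.
complaint = {
    "user_id": "u-12345",
    "email": "user@example.com",
    "complaint_type": "harmful_content",
    "product": "chatbot",
}
print(prepare_record_for_pac(complaint))
```

Whether a preparation step like this suffices would depend on the sensitivity of the data requested and on the privacy and security objection process described in the model legislation below.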

    Click the arrow below to view model legislation:

    GENERATIVE ARTIFICIAL INTELLIGENCE LIABILITY EXPERIMENT ACT

    Purpose: This proposal allows generative artificial intelligence (GAI) platforms, under the observation of regulators, to trial new products and features without incurring traditional publisher liability for content their models generate in response to user prompts, unless a platform was solely responsible for creating the content. This experiment will also test mechanisms for auditing GAI technologies.

    Section 1. Definitions.

    1. “Applicable agency” means a Department or agency of the state that by law regulates certain types of technology related business activity in the state and persons engaged in such technology related business activity.
    2. “Applicant” means an individual or entity that is applying to participate in the GAI Liability Experiment.
    3. “Consumer” means a person who uses a platform or technology offered by a specific regulatory experiment participant.
    4. “Department” means the Department of Commerce that is responsible for overseeing the GAI Liability Experiment program.
    5. “GAI product or feature” means a product or feature that uses generative artificial intelligence technology, including chatbots, search engine optimizations, and content moderation tools.

    Section 2. Application Process.

    1. The Department of Commerce shall establish the GAI Liability Experiment program.
    2. In administering the GAI Liability Experiment, the Department:
      1. Shall consult with each applicable agency;
      2. Shall establish a program to enable an online platform operating in the state to test a GAI product or feature; and
      3. May enter into agreements with, or follow the best practices of, the Department of Justice, the Federal Trade Commission, or other states that are administering similar programs.
    3. An applicant for the GAI Liability Experiment shall provide to the Department an application in a form prescribed by the Department that:
      1. Includes a nonrefundable application fee of $1000 to support the administration of the program.
      2. Demonstrates the applicant is subject to the jurisdiction of the state.
      3. Demonstrates the applicant has established a physical or virtual location that is adequately accessible to the Department, where all required records, documents, and data will be maintained.
      4. Demonstrates that the applicant has the necessary personnel, technical expertise, and plan to monitor and assess the GAI product or feature.
      5. Contains a description of the GAI product or feature to be tested, including statements regarding the following:
        1. What harms the GAI product or feature may create, including harms from harmful content, censorship, privacy violations, security breaches, and anticompetitive conduct. Particular emphasis should be placed on potential harms to vulnerable populations, including people of color and rural populations.
        2. How the applicant will mitigate these risks when it offers the GAI product or feature;
        3. How participating in the GAI Liability Experiment would enable a successful test of the GAI product or feature;
        4. What data the participant will track during the test, and how the participant will mitigate any related privacy risks so as to facilitate data sharing with the PPT Audit Committee;
        5. How the applicant will end the test, evaluate whether the test was successful, and protect consumers if the test fails.
    4. An applicant shall file a separate application for each GAI product or feature the applicant wants to test.
    5. After an application is filed and before approving the application, the Department may seek any additional information from the applicant that the Department determines is necessary.
    6. Subject to subsection (7), not later than 60 days after the day on which a complete application is received by the Department, the Department shall inform the applicant as to whether the application is approved for entry into the GAI Liability Experiment.
    7. The Department and an applicant may mutually agree to extend the 60-day timeline as described in Subsection (6) for the Department to determine whether an application is approved for entry into the GAI Liability Experiment.
    8. In reviewing an application under this section:
      1. The Department shall consult with, and get approval from, each applicable agency before admitting an applicant into the experiment. The consultation with an applicable agency may include seeking information about whether:
        1. The applicable agency has previously investigated, sanctioned, or pursued legal action against the applicant; and
        2. Certain regulations should not be waived even if the applicant is accepted into the GAI Liability Experiment.
      2. The Department shall identify how the applicant’s GAI product or feature is subject to licensing or other authorization requirements outside of the GAI Liability Experiment, including a specific list of all state laws, regulations, and licensing or other requirements that the applicant is seeking to have waived during the testing period.
    9. In reviewing an application under this section, the Department shall consider whether a competitor to the applicant is or has been an experiment participant and weigh that as a factor in allowing the applicant to also become a GAI Liability Experiment participant.
    10. If the Department and each applicable agency approve admitting an applicant into the GAI Liability Experiment, an applicant may become a GAI Liability Experiment participant.
    11. The Department may deny any application submitted under this section, for any reason, at the Department’s discretion.
    12. If the Department denies an application submitted under this section, the Department shall provide to the applicant a written description of the reasons for denying the applicant admission as a GAI Liability Experiment participant.

    Section 3. Regulatory Relief.

    1. If the Department approves an applicant to test a GAI product or feature in the GAI Liability Experiment, then the applicant shall be granted regulatory relief from any enforcement action under the state’s publisher liability, consumer protection, privacy, or antitrust laws for any GAI product or feature that is tested in the GAI Liability Experiment.
    2. The scope of the regulatory relief shall be narrowly tailored to the specific products tested in the GAI Liability Experiment. Applicants shall receive no regulatory relief for products, services, programs, or conduct outside of the experiment.
    3. The Department shall provide applicable agencies with a detailed description of the regulatory relief offered to GAI Liability Experiment participants.
    4. Applicable agencies may provide recommendations to the Department on whether access to the experiment should be terminated for individual participants, or whether subsequent applications should be denied.
    5. The Department may propose potential reciprocity agreements between states that use or are proposing to use similar regulatory sandbox programs.

    Section 4. Test Period.

    1. If the Department approves an application, the GAI Liability Experiment participant has 12 months after the day on which the application was approved to test the GAI product or feature described in the GAI Liability Experiment participant’s application.
    2. This section does not restrict a GAI Liability Experiment participant who holds a license or other authorization in another jurisdiction from acting in accordance with that license or other authorization.
    3. A GAI Liability Experiment participant is deemed to possess an appropriate license under the laws of the state for the purposes of any provision of federal law requiring state licensure or authorization.
    4. By written notice, the Department may end a GAI Liability Experiment participant’s participation in the experiment at any time and for any reason, including if the Department determines the participant is not operating in good faith to bring a GAI product or feature to market or to share relevant data with the PPT Audit Committee. The Department may also end a participant’s participation if it determines that the test is causing undue harm.
    5. The Department and the Department’s employees are not liable for any business losses or the recouping of application expenses related to the GAI Liability Experiment, including losses or expenses arising from:
      1. Denying an applicant’s application to participate in the GAI Liability Experiment for any reason; or
      2. Ending a GAI Liability Experiment participant’s participation in the experiment at any time and for any reason.
    6. No guaranty association in the state may be held liable for business losses or liabilities incurred as a result of activities undertaken by a participant in the GAI Liability Experiment.

    Section 5. Consumer Transparency.

    1. Before providing a GAI product or feature in the state as part of the GAI Liability Experiment, a participant shall disclose the following:
      1. That the GAI product or feature is authorized pursuant to the GAI Liability Experiment;
      2. That the GAI product or feature is undergoing testing and therefore may expose the customer to risk;
      3. That the provider of the GAI product or feature receives regulatory relief from enforcement under the state’s publisher liability, consumer protection, privacy, and antitrust laws, but is otherwise not immune from federal enforcement, enforcement by other states, or civil liability for any losses or damages caused by the GAI product or feature;
      4. That the state does not endorse or recommend the GAI product or feature;
      5. That the GAI product or feature is a temporary test that may be discontinued at the end of the testing period;
      6. The expected end date of the testing period; and
      7. That a consumer may contact the Department to file a complaint regarding the product or feature being tested and provide the Department’s contact information.
    2. The disclosures required by Subsection (1) shall be provided to a consumer in a clear and conspicuous form, and the participant shall make a good faith effort to provide these disclosures to the parents of minor users. A statement in a participant’s Terms of Service or in a publicly available blog is clear and conspicuous.

    Section 6. Exiting the GAI Liability Experiment.

    1. At least 30 days before the end of the 12-month GAI Liability Experiment testing period, a participant shall:
      1. Notify the Department that the experiment participant will exit the regulatory experiment, or
      2. Seek an extension in accordance with Section 7.
    2. Subject to Section 7, if the Department does not receive notification as required by Subsection (1), the participant’s testing period ends at the conclusion of the 12-month testing period, and the participant will no longer receive regulatory relief for the GAI product or feature.

    Section 7. Extension.

    1. Not later than 30 days before the end of the 12-month GAI Liability Experiment testing period, a participant may request an extension of the experiment testing period of up to 12 months.
    2. The Department shall grant or deny a request for an extension in accordance with Subsection (1) by the end of the 12-month GAI Liability Experiment testing period.

    Section 8. Reporting Requirements and the PPT Audit Committee.

    1. A GAI Liability Experiment participant shall retain records, documents, and data produced in the ordinary course of business regarding a GAI product or feature tested in the experiment.
    2. The Department shall establish a PPT Audit Committee (PAC) to monitor performance of the experiment.
    3. The PAC shall be composed of 11 members, including at least two members from each of the following: the legislature, executive branch agencies, academia, industry, and technology nonprofit organizations. The PAC shall include expertise in the following fields: online safety, intermediary liability and content moderation, privacy, and technology.
    4. A GAI Liability Experiment participant shall submit an interim report to the PAC within 12 months of starting the experiment. Participants shall also submit a final report to the PAC within six months of concluding the experiment. Each report shall include the following information:
      1. the impact of the participant’s tested products and features on safety, privacy, competitiveness, innovation, and freedom of expression, including the impact on vulnerable communities such as people of color, people who fall below the poverty line, people with limited or no internet access, and people who live in rural locations.
      2. the effectiveness of the GAI Liability Experiment, including the feasibility of accessing and using the experiment.
      3. recommendations for lawmakers on public policy that will maximize the potential benefits of GAI technologies and minimize potential harms.
    5. The PAC may request additional records, documents, and data from a GAI Liability Experiment participant that will not be disclosed publicly, and, upon the PAC’s request, a participant shall make such records, documents, and data available for inspection by the PAC.
    6. Within 10 days of receiving a request for records, documents, and data, participants may file a written objection. If the PAC grants the objection, the participant is not obligated to provide the requested data. Objections will be granted only if sharing the data would pose a significant threat to privacy or security. An objection should identify the specific requested data that would pose a significant threat to privacy or security.
    7. Each year after the establishment of the GAI Liability Experiment, the PAC shall provide a written annual report to the General Assembly that includes the following information:
      1. each product or feature tested by each GAI Liability Experiment participant;
      2. the impact of the participants’ products and features, including harms from harmful content, censorship, privacy violations, security breaches, and anticompetitive conduct, with an evaluation of the impact on vulnerable communities, including people of color, people who fall below the poverty line, people with limited or no internet access, and people who live in rural locations;
      3. the effectiveness of the GAI Liability Experiment program; and
      4. recommendations for lawmakers on public policy that will improve intermediary liability laws for GAI technologies.

    Section 9. Sunset.

    The GAI Liability Experiment shall expire four years after it is enacted, unless an extension of the experiment is enacted for a finite period of time.