Data Science Institute
Center for Technological Responsibility, Reimagination and Redesign

Understanding AI Legislation: The CNTR AISLE Framework

The evolution of artificial intelligence (AI) policy has created a fragmented legislative landscape, with bills emerging at both the state and national levels in the United States. There have been few attempts to identify the policy elements of a bill that could indicate the maturity and robustness of legislation on AI systems in a way that is useful to policymakers, the media, and the public. With over 1,000 AI-related pieces of legislation introduced between January 2023 and January 2025, a broad assessment framework is sorely needed.

At the Center for Technological Responsibility, our mission is to produce action-oriented insights on topics at the intersection of technology and society, and to communicate these insights in a way that transparently serves a broad audience of stakeholders. To that end, we are developing an assessment framework to help answer the following questions about the rapidly evolving space of AI legislation:

  • What are the main policy elements that comprise AI legislation?
  • How are legislatures balancing comprehensive versus targeted approaches to AI governance?
  • How do these approaches vary across states and over time?
  • What themes are emerging?

As befits a framework that examines AI governance proposals, ours is transparent and supports, to the extent possible, an objective evaluation: it identifies the components a typical bill is expected to have and assesses the extent to which the bill addresses them. This framework is a work in progress, and we welcome feedback on how it may be improved.

Methodology

Bills pertaining to AI and automated decision systems (ADS) address five key policy areas, which form the basis of our structured, multi-category framework. We applied the framework to 23 bills using "yes"/"no" questions developed and refined through literature review, expert consultation, legislative analysis, sample bills, and user testing.

The five key policy areas and example framework questions are below.

Accountability & Transparency: risk identification and mitigation, lifecycle of impact assessments, documentation and transparency, auditing and compliance, precautionary measures, and licensing

Example questions

  • Does the bill require covered entities to conduct Impact/Risk Assessments or similar evaluations?
  • Does the bill require maintenance of Impact/Risk Assessment documentation?
  • Does the bill specify or otherwise acknowledge risks arising from development by requiring testing of the AI tool/system for validity?

Data Protection: privacy rights, data sensitivity, collection and minimization, usage and retention, transfer and sharing, deletion, security, and data subject rights

Example questions

  • Does the bill require organizations to document the purposes for which personal data is collected, used, processed or retained?
  • Does the bill require explicit, informed consent from individuals before collecting their personal data?

Bias & Discrimination: impact and mitigation practices to prevent discriminatory outcomes

Example questions

  • Does the bill explicitly include legally protected characteristics (e.g., race, gender, age, religion, disability) in its definition of discrimination or bias?
  • Does the bill define "algorithmic discrimination" (or a similar term) to characterize unfair treatment toward specific groups?

Labor Force: job displacement, upskilling initiatives, and industry-government collaboration

Example questions

  • Does the bill call for the analysis of challenges faced by workers affected by automation or AI implementation?
  • Does the bill call for the analysis of demographics that may be most vulnerable to AI displacement?

Institution: development of new institutions for governance, interagency collaboration, and mechanisms for enforcement

Example questions

  • Does the bill mandate the establishment of a new entity?
  • Does the bill outline clear, measurable objectives for the new entity that must be achieved within defined timelines?

Preliminary Findings

One advantage of the holistic view the framework provides is that we can analyze trends across bills.

Here are some of our initial insights:

  • We found that 95% of the bills contained definitions for AI or ADS, and 83% of the bills used definitions highly similar to established existing AI definitions. On the other hand, definitions of concepts related to generative AI appeared in only 30% of bills. Only one bill provided a quantitative, resource-based definition of generative or foundation models.
  • Many bills addressed matters of Accountability & Transparency, Bias & Discrimination, and Institution. Specifically, when we restricted attention to bills answering "yes" to at least 50% of the questions in a category, we found that 10 of 23 bills covered each of these categories. On the other hand, while many bills did have some Data Protection components, only one reached 50% coverage. Note that these are typically AI/ADS-focused bills; many states address data privacy in separate legislation. Lastly, there was a general lack of coverage of Labor Force matters across these bills.
  • We found that none of the bills analyzed had comprehensive coverage across all categories. Specifically, we counted the number of categories with high coverage, i.e., at least 50% "yes" answers in the category. Among the 23 bills analyzed, no bill highly covered all 6 categories.
  • As Data Protection and Labor Force were not highly covered in these AI/ADS bills, we also analyzed comprehensive coverage without these categories. Excluding Data Protection, only one bill scored highly across the remaining 5 categories; excluding both Data Protection and Labor Force, 5 of 23 bills had high coverage of the remaining 4 categories.
  • By analyzing the number of categories highly covered by a bill, we found differences between federal and state bills: on average, state bills tended to highly cover more categories than federal bills. Only 14.2% of federal bills covered at least half of the categories, compared with 68.8% of the analyzed state bills.
  • We compared the legislative texts to the model bills in two ways. The first measured the overlap in elements between a given legislative bill and a model bill: the percentage of shared "yes" items over the union of all "yes" items between the two bills. By this measure, many bills were more similar to one model bill than to the others.
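As an illustration, the coverage and similarity computations described above can be sketched in a few lines of code. The category names, question IDs, and answers below are hypothetical; only the two calculations (the 50% "yes" coverage threshold and the shared-"yes"-over-union overlap, i.e., the Jaccard index) reflect the method described.

```python
# Illustrative sketch with hypothetical data: each bill maps a framework
# category to its "yes"/"no" answers ({question_id: True/False}).
bill = {
    "Accountability & Transparency": {"q1": True, "q2": True, "q3": False},
    "Data Protection": {"q4": False, "q5": False},
}

def high_coverage_categories(scores, threshold=0.5):
    """Return categories where at least `threshold` of the answers are 'yes'."""
    return [
        cat for cat, answers in scores.items()
        if sum(answers.values()) / len(answers) >= threshold
    ]

def similarity(yes_a, yes_b):
    """Shared 'yes' items over the union of 'yes' items (Jaccard index)."""
    union = yes_a | yes_b
    return len(yes_a & yes_b) / len(union) if union else 0.0

print(high_coverage_categories(bill))          # ['Accountability & Transparency']
print(similarity({"q1", "q2"}, {"q2", "q3"}))  # 1 shared item / 3 total ≈ 0.333
```

Counting how many categories `high_coverage_categories` returns for a bill gives the per-bill coverage statistic used in the findings above.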

Conclusions

While the first version of our CNTR AISLE Framework and preliminary results provide interesting insights into proposed legislation on AI systems, we acknowledge that there are limitations. 

One key challenge is bias in bill selection. The current pool of bills does not fully represent the breadth of AI-related legislation across different categories and regions. We did not analyze bills focused on facial recognition technology and surveillance matters, but will consider including such bills for our next iteration. There are also limitations in the scope and quantity of the questions themselves.

The process itself presents restrictions: our pool of scorers is currently limited, and most participants already possess a background in policy. Individual biases or inconsistencies in scoring could affect the results. To address this, we hope to increase the number of scorers in future versions, ensuring that multiple reviewers can evaluate each bill to enhance reliability. We are also refining our platform to make it more user-friendly and to provide clear guidance that standardizes scoring across evaluators.

Lastly, while our framework offers a structured assessment, it cannot fully capture the entire landscape of legislation, given how rapidly that landscape is evolving. Future iterations of the project will aim to capture more recent bills, and we hope to create visualizations that the public can easily interpret.

Next Steps

We are just at the beginning of this process to better understand legislation around AI. We expect our process and our framework to evolve over time as both legislation and technology change. We hope that our work will be useful to journalists, legislators, policymakers, students, and constituents alike. We invite your feedback and suggestions as we continue our analysis, with the intention of releasing an updated version in early 2026. Please feel free to reach out to us at cntr-info@brown.edu.

We acknowledge financial support from the Media and Democracy Fund for this work, and also acknowledge the generous access to legislative data provided to us by Plural Policy. We would like to thank a number of external experts who provided invaluable feedback during the early stages of developing this framework, as well as a number of scorers who helped us shape the framework and score different bills. 

While we were developing the framework, key team members left to pursue other opportunities. Sasa Jovanovic was one of the primary leaders of this project in its early stages of development, and continues to advise us. Rachel Kim helped develop parts of the framework before graduating.

Project Team

  • Mahir Arora, Undergraduate Student
  • Nora Cai, Undergraduate Student
  • Timothy Fong, Undergraduate Student
  • Sasa Jovanovic, Policy Research Lead; Senior Privacy Program and Policy Analyst, Venable LLP
  • Rachel Kim, Former Undergraduate Student
  • Tuan Pham, Research Associate in Neuroscience
  • Fern Tantivess, Undergraduate Student