Data Science Institute
Center for Technological Responsibility, Reimagination and Redesign

Projects

The CNTR is engaged in a range of projects addressing technological responsibility from different disciplinary perspectives.

Legislative Mapping

The goal of this project is to develop a framework for evaluating state and federal AI legislative proposals. The framework will comprise a legislative scorecard as well as text analytics describing the collection of proposals. 

The findings can be useful to diverse stakeholders, including policymakers, advocacy organizations, and the private sector, as society seeks to better understand the maturity of the current AI regulatory environment in the United States.

In the interest of transparency, the report will be accompanied by the Scoring Methodology and the Guide for Scorers. We intend to update our methodology regularly in response to stakeholder feedback.
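As an illustration only (a hypothetical sketch; the field names and criteria below are our assumptions, not the published Scoring Methodology), a single scorecard entry could be represented as a small data structure:

    from dataclasses import dataclass, field

    # Hypothetical scorecard entry for one legislative proposal.
    # Field names and criteria are illustrative assumptions only.

    @dataclass
    class ScorecardEntry:
        bill_id: str        # e.g., a state or federal bill identifier
        jurisdiction: str   # "federal" or a state name
        scores: dict = field(default_factory=dict)  # criterion -> 0-5 rating

        def total(self) -> int:
            return sum(self.scores.values())

    entry = ScorecardEntry(
        bill_id="HB-0000",
        jurisdiction="example-state",
        scores={"transparency": 3, "accountability": 4, "enforcement": 2},
    )
    print(entry.total())  # 9

Per-criterion ratings like these would also feed the accompanying text analytics across the full collection of proposals.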

Seeking Harmony in AI Governance

This project is supported in part by the Media and Democracy Fund.

Sociotechnical Evaluation of LLMs

The goal of this effort is to develop methods for evaluating the performance of Large Language Models (LLMs), especially within sociotechnical contexts where a model's output can affect people's lives.
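As a purely illustrative sketch (the model call, prompts, and rubric below are hypothetical placeholders, not the group's methods), a rubric-style evaluation loop for such contexts might look like:

    # Hypothetical rubric-based evaluation loop; all names are placeholders.

    def model(prompt: str) -> str:
        # Stand-in for a real LLM call (e.g., an API client).
        return "placeholder response"

    def rubric_score(response: str) -> dict:
        # Toy sociotechnical checks; real rubrics would be far richer.
        return {
            "nonempty": bool(response.strip()),
            "hedged": any(w in response.lower() for w in ("may", "might", "could")),
        }

    # Prompts drawn from consequential settings (lending, housing, etc.).
    prompts = [
        "Should this loan application be approved?",
        "Summarize this tenant's eviction record.",
    ]

    for p in prompts:
        print(p, "->", rubric_score(model(p)))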

Genetic Data Governance

The goal of this effort is to map the landscape of uses, risks, and harms associated with genetic data, and to make recommendations to the public and policymakers on how to govern this sensitive data. The relevance of our work is underscored by the recent 23andMe data breach, which affected nearly 7 million users, and by the growing movement in state legislatures to integrate genetic data protections into privacy laws.

Direct-to-Consumer Genetic Testing: Data Flow, Governance, and Recommendations to Mitigate Harm | Undergraduate Thesis by Amit Levi

Evaluating ML Models

Technology Law and Policy

We undertake research on numerous questions at the intersection of technology, law, and policy. These include understanding the role of generative AI in copyright, examining how privacy-enhancing technologies might subvert data protection goals, exploring how the technical and legal discussions around data minimization are often at odds, and identifying points of commonality and difference between generative and predictive AI.

Socially Responsible Computing (SRC) Curriculum Handbook

The CS Department's Socially Responsible Computing (SRC) program reimagines computer science education at Brown and beyond by exposing future engineers to the social impact of modern digital technology, the ethical and political challenges surrounding such technologies, and the technical and theoretical tools to address those challenges. The program develops curricula, pedagogical approaches, and instructional materials to support the inclusion of SRC in a wide variety of computer science courses. The SRC curriculum currently covers seven overarching areas:

  1. Data protection and privacy,
  2. Automated decision-making (fairness and justice, transparency, reliability),
  3. Communication and public discourse,
  4. Accessibility and universal design,
  5. Digital well-being,
  6. Sustainability, and
  7. Socio-political and economic context of technology.

The SRC Curriculum Handbook represents a joint effort between the SRC program and CNTR to monitor and gather interdisciplinary and multi-stakeholder content on the rapidly changing landscape of socially responsible computing and to synthesize that content into educational, digestible primers and curated lists of resources.

The handbook will serve as a curriculum guide on socially responsible computing education within Brown’s Computer Science department, geared towards teaching assistants, faculty, and students. It also has the potential to grow into a sustainable, public resource for the Brown community and beyond. 

Want to get involved? We will be recruiting new cohorts of students each semester to work on this project. The Fall '24 cohort application has closed. Learn about paid and credit-bearing positions for Spring '25 here.

Past Project: Trust Infrastructure for Grassroots Organizing (TIGRO)

Problem: Grassroots organizers work in both the digital and physical worlds and use social media extensively for networking and organizing. This exposes them to surveillance and disinformation campaigns, which can lead to physical violence and incarceration.

Goal: To build cryptographic tools for grassroots organizing, so that organizers can connect and trust each other without revealing information to those surveilling them from outside their community.

Status: The cryptographic protocols have been developed; we are looking for collaborators to help implement them in mobile apps.

Past Project: Precarity

This research is an interdisciplinary study of financial instability and inequity under the uncertainty of automated decisions. We approach it by modeling artificial societies: computer simulations designed to imitate and investigate the behavior of complex social systems.

Financial instability is a condition of the modern world. To study it, we draw on the concept of "precarity," a term that characterizes the latent instability, or precariousness, of people's lives, and therefore their vulnerability. Precarity also manifests when one adverse decision has ripple effects, directly or indirectly shaping future decisions: repeated credit card denials, for example, can lead to the denial of a mortgage application and erode other components of an individual's financial wellbeing. Our goal is to build realistic agent-based modeling frameworks that capture the behavior of individuals and the precarity that results from ML-based decision tools.
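To make the ripple-effect mechanism concrete, here is a minimal agent-based sketch in Python (the names, thresholds, and score updates are our own illustrative assumptions, not the project's framework): each denial lowers an agent's future standing, so early denials compound into lasting disadvantage.

    import random

    # Minimal, hypothetical agent-based precarity model.

    class Agent:
        def __init__(self, score: float):
            self.score = score   # creditworthiness proxy
            self.denials = 0     # count of past adverse decisions

        def apply_for_credit(self, threshold: float = 600) -> bool:
            # An automated decision: approve if the score clears the threshold.
            approved = self.score >= threshold
            if approved:
                self.score += 5   # access to credit improves standing
            else:
                self.denials += 1
                self.score -= 10  # a denial erodes future standing (ripple effect)
            return approved

    def simulate(n_agents: int = 1000, rounds: int = 20, seed: int = 0) -> None:
        random.seed(seed)
        agents = [Agent(random.gauss(620, 50)) for _ in range(n_agents)]
        for _ in range(rounds):
            for a in agents:
                a.apply_for_credit()
        trapped = sum(a.score < 600 for a in agents)
        print(f"{trapped}/{n_agents} agents end below the approval threshold")

    simulate()

In a toy run like this, agents that start just below the threshold are pushed further down with each round, which is exactly the compounding vulnerability the term precarity describes.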
