Projects
The CNTR is engaged in a range of projects addressing technological responsibility from different disciplinary perspectives.
Legislative Mapping
The goal of this project is to develop a framework for evaluating state and federal AI legislative proposals. The framework will comprise a legislative scorecard as well as text analytics describing the collection of proposals.
These findings can be useful to diverse stakeholders, including policymakers, advocacy organizations, and the private sector, as society seeks to better understand the maturity of the current AI regulatory environment in the United States.
For transparency, the report will be accompanied by the Scoring Methodology and the Guide for Scorers. We intend to update our methodology regularly based on feedback from stakeholders.
- Seeking Harmony in AI Governance
This project is supported in part by the Media and Democracy Fund.
Group Members
- Sasa Jovanovic: Policy Research Lead; Senior Privacy Program and Policy Analyst, Venable LLP
- Tuan Pham: Research Associate in Neuroscience
Sociotechnical Evaluation of LLMs
The goal of this effort is to develop methods for evaluating the performance of Large Language Models (LLMs), especially within sociotechnical contexts, where an LLM's output may affect people's lives.
Group Members
- Victor Ojewale: Graduate Student in Computer Science
- Ellie Pavlick: Manning Assistant Professor of Computer Science, Assistant Professor of Linguistics
- Qinan Yu: Undergraduate Student
Genetic Data Governance
The goal of this effort is to map the landscape of uses, risks, and harms associated with genetic data, and to make recommendations to the public and policymakers on how to govern this sensitive data. The relevance of our work is underscored by the recent 23andMe data breach, which affected nearly 7 million users, and by the growing movement in state legislatures to integrate genetic data protections into privacy laws.
- Direct-to-Consumer Genetic Testing: Data Flow, Governance, and Recommendations to Mitigate Harm | Undergraduate Thesis by Amit Levi
Group Members
- Amit Levi: Former Undergraduate Student
- Sohini Ramachandran: Director of the Data Science Institute; Hermon C. Bumpus Professor of Biology and Data Science; Professor of Computer Science
- Vivek Ramanan: Graduate Student in Computational Molecular Biology
- Ria Vinod: Graduate Student in Computational Molecular Biology
- Cole Williams: Graduate Student in Computational Molecular Biology
Evaluating ML Models
We undertake research that seeks to build better evaluation for machine learning models, in theory and in practice.
- The Misuse of AUC: What High Impact Risk Assessment Gets Wrong | Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency
- Don't Let the Math Distract You: Together, We Can Fight Algorithmic Injustice | ACLU
- To Pool or Not To Pool: Analyzing the Regularizing Effects of Group-Fair Training on Shared Models | Proceedings of AI and Statistics, 2024
- Observing Context Improves Disparity Estimation when Race is Unobserved | AI, Ethics, and Society, 2024
Group Members
- Lizzie Kumar: Brown Computer Science PhD Graduate; Postdoc in Health Policy at Stanford University
- Kweku Kwegyir-Aggrey: Graduate Student in Computer Science
Technology Law and Policy
We undertake research on numerous questions at the intersection of technology, law, and policy. These include understanding the role of generative AI in copyright, examining how privacy-enhancing technologies might subvert data protection goals, exploring how the technical and legal discussions around data minimization are often at odds, and identifying points of commonality and difference between generative and predictive AI.
- Break It 'Til You Make It: An Exploration of the Ramifications of Copyright Liability Under a Pre-training Paradigm of AI Development
- You Still See Me: How Data Protection Supports the Architecture of ML Surveillance
- Compliance Cards: Computational Artifacts for Automated AI Regulation Compliance
- Deconstructing Design Decisions: Why Courts Must Interrogate Machine Learning and Other Technologies
Group Members
- Nasim Sonboli: Postdoctoral Research Associate in Data Science
- Rui-Jie Yew: Graduate Student in Computer Science
Socially Responsible Computing (SRC) Curriculum Handbook
The CS Department’s Socially Responsible Computing (SRC) program reimagines computer science education at Brown and beyond by exposing future engineers to the social impact of modern digital technology, the ethical and political challenges surrounding such technologies, and the technical and theoretical tools to address those challenges. The program develops curricula, pedagogical approaches, and instructional materials to support the inclusion of SRC in a wide variety of computer science courses. The SRC curriculum currently covers seven overarching areas:
- Data Protection and Privacy
- Automated Decision-Making: Fairness and Justice, Transparency, Reliability
- Communication and Public Discourse
- Accessibility and Universal Design
- Digital Well-Being
- Sustainability
- Socio-Political and Economic Context of Technology
The SRC Curriculum Handbook represents a joint effort between the SRC program and CNTR to monitor and gather interdisciplinary and multi-stakeholder content on the rapidly changing landscape of socially responsible computing and to synthesize that content into educational, digestible primers and curated lists of resources.
The handbook will serve as a curriculum guide on socially responsible computing education within Brown’s Computer Science department, geared towards teaching assistants, faculty, and students. It also has the potential to grow into a sustainable, public resource for the Brown community and beyond.
Want to get involved? We will be recruiting new cohorts of students each semester to work on this project. The Fall ‘24 cohort application has closed. Learn about paid and credit-bearing positions for Spring '25 here.
Faculty Advisors
- Suresh Venkatasubramanian: Director of the Center for Technological Responsibility, Reimagination, and Redesign; Interim Director of the Data Science Institute; Professor of Data Science and Computer Science
- Julia Netter: Assistant Professor of the Practice of Philosophy and Computer Science; Socially Responsible Computing Program Coordinator
Team Members
- Michelle Ding: Socially Responsible Computing Curriculum Handbook Project Director; Undergraduate Student in Computer Science
- Doren Hsiao-Wecksler: Privacy and Data Protection Project Lead; Head Socially Responsible Computing Teaching Assistant; Undergraduate Student in Computer Science
- Connor Flick: Privacy and Data Protection Research Team; Undergraduate Student in Computational Biology
- Peyton Luiz: Privacy and Data Protection Research Team; MPH Student in Epidemiology
- Ethan Schenker: Privacy and Data Protection Research Team; Undergraduate Student in International and Public Affairs & Economics
- Huda Abdulrasool: Infrastructure and Tool Development Team; Undergraduate Student in Computer Science
- Christina Peng: Infrastructure and Tool Development Team; Undergraduate Student in Computer Science & English
- Aanya Hudda: Infrastructure and Tool Development Team; Undergraduate Student in Computer Science & Behavioral Decision Sciences
- Jasmine Kamara: Infrastructure and Tool Development Team; Undergraduate Student in Computer Science & Science, Technology and Society
Past Project: Trust Infrastructure for Grassroots Organizing (TIGRO)
Problem: Grassroots organizers work in both the digital and physical worlds and use social media extensively for networking and organizing. This exposes them to surveillance and disinformation campaigns, which can lead to physical violence and incarceration.
Goal: To build cryptographic tools for grassroots organizing, so that organizers can connect with and trust each other without revealing information to those surveilling them from outside their community.
Status: Cryptographic protocols have been developed; we are looking for collaborators to help implement them in mobile apps.
Group Members
- Seny Kamara: Visiting Associate Professor of Computer Science
- Leah Rosenbloom: Brown Computer Science PhD Graduate; Postdoc at Northeastern University
- John Wilkinson: Undergraduate Student
Past Project: Precarity
This research is an interdisciplinary study of financial instability and inequity under the uncertainty of automated decisions. We study these questions by modeling artificial societies: computer simulations designed to imitate and investigate the behavior of complex social systems.
Financial instability is a condition of the modern world. To study it, we draw on the concept of "precarity," a term that characterizes the latent instability, and therefore vulnerability, of people's lives. Precarity also manifests when one bad decision has ripple effects, directly or indirectly shaping future decisions. For example, repeated credit card denials can lead to denied mortgage applications and further erosion of other components of an individual's financial wellbeing. Our goal is to build realistic agent-based modeling frameworks that capture the behavior of individuals and the precarity induced by ML-based decision tools.
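To illustrate the ripple effect described above, here is a toy agent-based sketch (entirely hypothetical; the project's actual models are far richer). Each agent has a score, an automated decision approves or denies the agent each round based on a threshold, and every denial lowers the score, making future denials more likely:

```python
import random

def simulate_precarity(n_agents=100, rounds=20, threshold=0.5,
                       penalty=0.05, seed=0):
    """Toy illustration of precarity: a denial damages an agent's
    standing, which raises the chance of being denied again."""
    rng = random.Random(seed)
    scores = [rng.random() for _ in range(n_agents)]  # initial standing in [0, 1)
    denials = [0] * n_agents
    for _ in range(rounds):
        for i, s in enumerate(scores):
            if s < threshold:                       # automated decision: deny
                denials[i] += 1
                scores[i] = max(0.0, s - penalty)   # ripple effect on the future
    return scores, denials

scores, denials = simulate_precarity()
```

Because the score only ever decreases, an agent who starts below the threshold is denied in every subsequent round, while an agent who starts above it is never touched; even this crude sketch reproduces the "one bad outcome compounds" dynamic the project studies with much richer agent behavior.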
Group Members
- Pegah Nokhiz: Brown Computer Science PhD Graduate; Postdoc at Cornell University