The latest cover story from Conduit, the Brown CS annual magazine, takes a close look at Brown's new Center for Technological Responsibility, Reimagination, and Redesign (CNTR), whose mission is to redefine computer science education, research, and technology to center the needs, problems, and aspirations of all, especially those whom technology has left behind.
This December’s issue of Conduit, published annually by Brown’s Department of Computer Science, highlights faculty and student research at the Center for Technological Responsibility, Reimagination, and Redesign (CNTR) that recenters technology around human needs.
Second-year PhD student Rui-Jie Yew was recently recognized as runner-up for Best Student Paper at the Artificial Intelligence, Ethics, and Society (AIES) Conference, held in San Jose at the end of October.
The master of science degree program teaches students to use data responsibly, giving local and global learners valuable training in responsible AI development and implementation.
The Paragon Policy Fellowship, co-led by Brown senior Jenn Wang and advised by CNTR Director Suresh Venkatasubramanian, connects students to local governments to work on tech policy issues and plans to develop a playbook for building lasting talent pipelines.
Diana Freed joins Brown CS and Brown’s Data Science Institute as an assistant professor. Her work is part of an emerging area of computer science focused on designing and building technologies specifically to improve online safety and well-being for vulnerable and marginalized populations around the world.
Get to know Suresh Venkatasubramanian, Deputy Director of the Data Science Institute, Professor of Data Science and Computer Science, and Director of the Center for Technological Responsibility, Reimagination, and Redesign.
On Thursday, March 28, 2024, the White House Office of Management and Budget released a memo for heads of federal agencies outlining additional guidance for the use of artificial intelligence (AI), supplementing Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence from October of last year. The guidance is noteworthy because it establishes the first binding rules for the federal government’s use of AI, reinforcing a commitment to AI governance that protects the rights and safety of the public and adding further detail on AI risk management, safeguards in AI procurement, and public inventories of AI code and data. While details remain to be ironed out regarding how agencies will implement the guidance, Vice President Kamala Harris applauded the memo in her remarks, stating that the Biden administration hopes to leverage these domestic policies internationally as a model for global action.
The New York Department of Financial Services (NYDFS) is the agency responsible for regulating financial services and products sold in New York and for enforcing the New York State laws that apply to their providers. The agency recently issued a proposed circular letter on the use of artificial intelligence and consumer data in insurance and has requested public feedback on its contents. It builds on a previous, much vaguer circular letter issued in 2019, which pertains only to life insurance providers.
The inaugural discussion in a series convened by Brown’s Office of the Provost and Data Science Institute detailed the history of artificial intelligence and the new questions that generative AI is raising.
Speaking before a U.S. Senate committee on the risks and opportunities of artificial intelligence, computer scientist Suresh Venkatasubramanian urged lawmakers to establish regulations to govern AI-based systems.
Life insurance is one of the oldest and most carefully regulated industries in America, and it is one of many in the midst of upheaval due to “big data” and advances in machine learning. These changes have sparked concerns about algorithmic discrimination, and rightly so, considering the industry’s sordid history.