
Imagining Dystopias: CNTR Researchers Reflect on the 2025 Conference on Language Modeling

The Conference on Language Modeling (COLM), held October 7–10 in Montreal, Canada, is an academic venue dedicated to studying, improving, and critiquing the field of language modeling. This year, COLM brought together over a thousand researchers from around the world. We'll be sharing reflections from CNTR researchers who attended.

On day three of COLM, Nicholas Carlini of Anthropic delivered a talk titled “Are the harms and risks of LLMs worth it?” Carlini spoke about a range of AI harms, from “immediate harms that we are already facing today (e.g. sycophancy), risks that seem inevitable to arrive in short order (e.g. LLM aided abuse), and ‘existential risks’ some argue are likely to arrive soon after that (e.g. Doom?)”. He mapped the different “camps” of AI safety research–using the examples of “AI 2027,” “Everyone Dies,” “AI Con,” “AI Snake Oil,” and “Empire of AI”–and argued that “progress mitigating any one of these risks contributes positively towards mitigating the others.” He closed with: “What problems you're scared of depend on how good you think the LLMs will get. Please be willing to change your mind. This is COLM. We made the models, it's our job to fix it. How are you going to change your research agenda?”

Carlini's call for collaboration and mutual understanding between researchers focused on “short-term” socio-technical issues and those focused on “long-term” existential risks is a nice sentiment at a conference whose attendees belong to both communities. But it falsely assumes that the two spaces are equally funded and supported, especially in the current U.S. political climate. This is a case of equality versus equity: the baseline is already uneven between the two research communities. The AI safety movement focused on existential risk (including efforts by Anthropic) is funded by billionaires who benefit from the continued development of LLMs under the guise of responsible development. Anthropic's “Thoughts on America’s AI Action Plan” states that “the alignment between many of our recommendations and the AI Action Plan demonstrates a shared understanding of AI's transformative potential” and that Anthropic “look[s] forward to working with the Administration to implement these initiatives while ensuring appropriate attention to catastrophic risks…” In short, the AI safety community focused on what Carlini describes as “existential risks” is well supported both politically and financially.


Meanwhile, the current administration's massive funding cuts disproportionately affect the research institutions and civil society organizations where socio-technical research is largely conducted. It is promising that in October 2025 a coalition of ten “philanthropic leaders” (including the Ford Foundation, MacArthur Foundation, Mozilla Foundation, and Siegel Family Endowment) launched Humanity AI, a “$500 million five-year initiative dedicated to making sure people have a stake in the future of artificial intelligence (AI).” This is a strong push toward more financial support, one that will hopefully encourage researchers to undertake genuinely participatory, community-driven research rather than fall back on the seemingly easier and more cost-efficient shortcuts of using LLMs to 1) evaluate other LLMs and 2) replace human participants in scientific studies.

Simply working on “problems you're scared of” is not enough. People who have to imagine dystopias are not living in them. Any researcher claiming to tackle “responsible AI,” “trustworthy AI,” or any other version of the term should be pursuing socio-technical research that serves the communities presently living in dystopias brought about by AI technologies.

Read the CNTR opinion abstract “Testing LLMs in a sandbox isn’t responsible. Focusing on community use and needs is.” by Michelle L. Ding, Jo Kavishe, Victor Ojewale, and Suresh Venkatasubramanian, and explore their crowdsourced resources for community-driven LLM evaluations.

For more of Michelle’s reflections on COLM, read “Who has the luxury to think? Researchers are responsible for more than just papers.”