Data Science Institute
Center for Technological Responsibility, Reimagination and Redesign

We Can’t Lose Focus! CNTR Researchers Reflect on the 2025 Conference on Language Modeling

The Conference on Language Modeling (COLM), held October 7–10 in Montreal, Canada, is an academic venue focused on studying, improving, and critiquing the field of language modeling. This year, COLM brought together over a thousand researchers from around the world. We'll be sharing reflections by CNTR researchers who attended COLM.

At the Palais des Congrès, during a talk I attended at COLM, a presenter asked a room full of researchers, "Raise your hand if you think artificial intelligence will not replace all the important economic aspects of society." Surprisingly, mine was the only hand raised. The presenter told me the talk probably wasn’t for me; I stayed anyway, and I listened intently to a discussion that called for more research on how to govern the post-AGI future.

Truly, I am very grateful to have gone to Canada. I met brilliant researchers who were lovely, inspiring, and truly passionate about their work. Yet, at the same time, I could not help but feel an occasional tension within me that still exists today: while it is a blessing to have the chance to ponder theories on how to mitigate future LLM harms, it sometimes comes at the cost of ignoring the present. While we focus on solving the alignment problem, police departments across the United States are still using biased language models in the criminal justice system. While we discuss how to red-team language models, ICE continues to build out its AI use case apparatus to streamline its brutal immigration enforcement agenda. While we listen to talks about how to prevent future “AI politicians” from deciding that humans are dispensable, LLM chatbots are leading children to commit suicide, the US government is violating the privacy of millions of people by centralizing their data for its "AI-first strategy," and people using AI nudification apps continue to exploit victims everywhere. Amidst all the AI safety, AGI, and existential risk research, we have to ask ourselves: what good does it do to align, red-team, and stress test models to try and prevent the problems of tomorrow, while continuing to set aside the issues of today?

“What good does it do to align, red-team, and stress test models to try and prevent the problems of tomorrow, while continuing to set aside the issues of today?”

It is a difficult question to reckon with, but it highlights that we need more sociotechnical research that centers on affected communities and their present-day harms. And, more importantly, we need sociotechnical research to no longer exist in a vacuum away from policy: to create the world we so desperately want, we sociotechnical researchers need to demand enforceable laws, regulations, and policies that will prevent the harms our research finds. We cannot keep living in a world where AI companies claim to be pursuing responsible technology ideals while lobbying against any regulatory oversight. We cannot keep living in a world where the business models of LLMs are allowed to freely depend on maximum data extraction, environmental harm, and reckless, widespread deployment. We cannot keep living in a world where there are no federal laws that force self-proclaimed AI safety companies to actually build the technologies they claim protect people. We do not need to settle for this misalignment between research and policy.

You don’t need to have the ear of a policymaker or staffer to start this work (though, if you do, that is great). Start a blog that highlights where modern-day laws fall short in protecting affected people in your research area. Meet people outside of your bubble and understand where current AI systems are not working for them. Build coalitions with your local non-profits, civil society groups, activists, or journalists to educate and mobilize the public about current AI harms. Call your representative, attend a town hall, organize a protest, and speak about your work. Research needs to go beyond publications, conferences, and reading groups, and actually help people through public action.

More sociotechnical research and enforceable laws mean that we won’t need to surrender ourselves to an unjust AI present and an even more inequitable “post-AGI” future. COLM showed me that we, as researchers, need to be loud, be organized, and remember that we can’t lose focus on the world right in front of us.

Read the CNTR opinion abstract, “Testing LLMs in a sandbox isn’t responsible. Focusing on community use and needs is.,” by Michelle L. Ding, Jo Kavishe, Victor Ojewale, and Suresh Venkatasubramanian, and explore their crowdsourced resources for community-driven LLM evaluations.