On day three of COLM, Nicholas Carlini from Anthropic delivered a talk called “Are the harms and risks of LLMs worth it?” Carlini spoke about a range of AI harms, from “immediate harms that we are already facing today (e.g. sycophancy), risks that seem inevitable to arrive in short order (e.g. LLM aided abuse), and ‘existential risks’ some argue are likely to arrive soon after that (e.g. Doom?)”. He mapped the different “camps” of AI safety research, using the examples of “AI 2027,” “Everyone Dies,” “AI Con,” “AI Snake Oil,” and “Empire of AI,” and argued that “progress mitigating any one of these risks contributes positively towards mitigating the others.” Carlini then ended with the quote: “What problems you're scared of depend on how good you think the LLMs will get. Please be willing to change your mind. This is COLM. We made the models, it's our job to fix it. How are you going to change your research agenda?”
Carlini's call for collaboration and mutual understanding between researchers focused on “short-term” socio-technical issues and those focused on “long-term” existential risks, delivered at a conference attended by both communities, is a nice sentiment. However, it falsely assumes that the two research spaces are equally funded and supported, especially in the current U.S. political climate. This is a case of equality versus equity: the baseline is already uneven between the two spaces of research. The latter AI safety movement (including efforts by Anthropic) is funded by billionaires who benefit from the continued development of LLMs under the guise of responsible development. Anthropic's “Thoughts on America’s AI Action Plan” states that “the alignment between many of our recommendations and the AI Action Plan demonstrates a shared understanding of AI's transformative potential” and that Anthropic “look[s] forward to working with the Administration to implement these initiatives while ensuring appropriate attention to catastrophic risks…” In short, the AI safety community focused on what Carlini describes as “existential risks” is well supported both politically and financially.