At the Palais des Congrès, during a talk I attended at COLM, a presenter asked a room filled with researchers, "Raise your hand if you think artificial intelligence will not replace all the important economic aspects of society." Surprisingly, mine was the only hand raised. The presenter told me the talk probably wasn’t for me; I stayed anyway, and listened intently to a discussion calling for more research on how to govern the post-AGI future.
Truly, I am very grateful to have gone to Canada. I met brilliant researchers who were lovely, inspiring, and truly passionate about their work. Yet, at the same time, I could not help but feel an occasional tension within me that persists today: while it is a blessing to have the chance to ponder theories on how to mitigate future LLM harms, it sometimes comes at the cost of ignoring the present. While we focus on solving the alignment problem, police departments across the United States are still using biased language models in the criminal justice system. While we discuss how to red-team language models, ICE continues to expand its use of AI to streamline its brutal immigration enforcement agenda. While we listen to talks about how to prevent future “AI politicians” from deciding that humans are dispensable, LLM chatbots are leading children to commit suicide, the US government is violating the privacy of millions of people by centralizing their data for its "AI-first strategy," and people using AI nudification apps continue to exploit victims everywhere. Amidst all the AI safety, AGI, and existential risk research, we have to ask ourselves: what good does it do to align, red-team, and stress-test models to try to prevent the problems of tomorrow, while continuing to set aside the issues of today?