Endorsements from researchers, founders, and technical leaders
“I find it hard to imagine a future that goes well where AI doesn't value all sentient life. I think this is a great first crack at the problem and something I think needs to be tried.”
— Marcus Abramovitch
“Locking in uncompassionate values in AGI/superintelligence is among the largest worst-case risks from AI... CaML is the first to propose instruction pre-training to prevent such risks and my team's research suggests this is a very promising way to overcome the resistance of LLMs to alignment tuning. CaML has demonstrated that they are well capable of directing and executing these research and engineering efforts.”
— Technical staff member at a frontier lab
“What CaML is doing stands out to me as one of the few efforts that goes beyond surface-level fine-tuning and tries to build compassion into models at the pretraining level. Their technical approach is both novel and practical... In a space where many alignment ideas stay theoretical, CaML is shipping tools and data that labs could adopt.”
— Aditya Raj, CEO of AI Safety India
“Compassion in Machine Learning and AI for Animals are developing a benchmark for testing how compassionate Claude is to animals... The entire CaML team have been thoughtful and collaborative in this process. Their experience with synthetic data is uncommon and positions them well to usefully advise labs on welfare matters.”
— Staff member at a frontier lab
“This is likely to be valuable if it works, and to be adopted by at least some labs if we can demonstrate that it works.”
— Technical staff member at a frontier lab
“This project fills an important gap and has the potential for extraordinary impact. What makes CaML stand out is their unique combination of technical know-how, the ability to get things done, and deep concern for all sentient beings.”
— Tobias Baumann, co-founder of the Center for Reducing Suffering
“The team at CaML is conducting crucial work to ensure that the future goes well for all sentient beings... they are uniquely positioned to implement the interventions they are pursuing due to their technical expertise.”
— Adria Moret, Philosophy Researcher at the University of Barcelona
“I've been working with CaML while wiring the AHA animal-harm benchmark into the Inspect-Evals stack, and they impressed me immediately: they care deeply about non-human welfare. They've turned a rough sketch into a functioning eval... they have been attentive to technical and substantive details.”
— Nishad Singh, head of the Animal Harms Assessment benchmark
“I think this can be really impactful ... if you pull it off, you'll possibly have made a massive difference to our future lightcone.”
— Soroush J. Pour, founder of Harmony Intelligence
“This project has the potential to have a significant and beneficial impact, and I would be excited to see what your work will lead to.”
— Irina Gueorguiev, AI market researcher and advisor at Successif