Endorsements from researchers, founders, and technical leaders
“I find it hard to imagine a future that goes well where AI doesn't value all sentient life. I think this is a great first crack at the problem and something that needs to be tried.”
— Marcus Abramovitch
“I believe influencing the moral circle of AIs as they become more and more powerful is maybe one of the highest-leverage opportunities in farm animal welfare. Whether you believe in AGI take-off or not, it seems clear to me that AIs will be responsible for a huge number of decisions. I think it’s crucial for them to include non-human sentient beings in their decision making. I’m saying this as someone who has dedicated the last eight years of their career to, and is still running, corporate cage-free, broiler, and shrimp campaigns. I would want to see a lot more advocacy efforts like these directed at AI labs.”
— Jonas Becker, Head of Invertebrate Welfare at the International Council for Animal Welfare
“Locking in uncompassionate values in AGI/superintelligence is among the largest worst-case risks from AI... CaML is the first to propose instruction pre-training to prevent such risks, and my team's research suggests this is a very promising way to overcome the resistance of LLMs to alignment tuning. CaML has demonstrated that they are fully capable of directing and executing these research and engineering efforts.”
— Technical staff member at a frontier lab
“What CaML is doing stands out to me as one of the few efforts that goes beyond surface-level fine-tuning and tries to build compassion into models at the pretraining level. Their technical approach is both novel and practical... In a space where many alignment ideas stay theoretical, CaML is shipping tools and data that labs could adopt.”
— Aditya Raj, CEO of AI Safety India
“Compassion Aligned Machine Learning and AI for Animals are developing a benchmark for testing how compassionate Claude is to all sentient beings... The entire CaML team have been thoughtful and collaborative in this process. Their experience with synthetic data is uncommon and positions them well to usefully advise labs on welfare matters.”
— Staff member at a frontier lab
“This project fills an important gap and has the potential for extraordinary impact. What makes CaML stand out is their unique combination of technical know-how, the ability to get things done, and deep concern for all sentient beings.”
— Tobias Baumann, co-founder of the Center for Reducing Suffering
“The team at CaML is conducting crucial work to ensure that the future goes well for all sentient beings... they are uniquely positioned to implement the interventions they are pursuing due to their technical expertise.”
— Adria Moret, Philosophy Researcher at the University of Barcelona
“I've been working with CaML while wiring the AHA animal-harm benchmark into the Inspect-Evals stack, and they impressed me immediately: they care deeply about non-human welfare. They've turned a rough sketch into a functioning eval... they have been attentive to technical and substantive details.”
— Nishad Singh, head of the Animal Harms Assessment benchmark
“I'm excited about the work that you're doing to increase the chances that nonhumans are sufficiently morally considered by AIs, and am grateful that you've been willing to explore and pursue an unusual approach to making progress on this.”
— Jamie Harris
“This is really impressive stuff, how are you guys not a major research organization yet?”
— Anonymous Meta AI researcher
“I am so grateful for what you are doing, this has the potential to reshape AI research. Really hope this gets the attention it deserves.”
— Anonymous Google DeepMind researcher
“This is one of the best ethics benchmarks I have seen out there. This is really impressive work.”
— Anonymous METR researcher