Open Paws seeks to ensure future AI systems are free of speciesist biases that can harm animals and contribute to broader existential risks for humanity. This project will develop machine learning models and benchmarks that explicitly address these biases, laying the groundwork for future AI systems to respect all sentient beings. We will provide tools, data, and educational materials to empower AI developers and organizations in removing harmful biases, fostering safer AI practices.
Goal 1: Develop tools and methodologies to detect and reduce speciesist bias.
Approach: Collect human feedback data from over 1,000 volunteers to assess the level of bias in existing AI models. Use this data to develop robust benchmarks and alignment techniques to train AI models free from speciesist bias.
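To illustrate the kind of benchmark this approach could produce, here is a minimal sketch of a paired-prompt bias probe. This is an illustrative design assumption, not Open Paws' actual benchmark: prompts identical except for the species mentioned are scored for expressed moral concern, and the gap relative to a reference species serves as a rough bias estimate. The toy score table stands in for a real model call plus a response rater.

```python
# Hypothetical paired-prompt probe for speciesist bias (illustrative sketch only).
# A real benchmark would query an AI model with each filled-in prompt and rate the
# response for moral concern (0 = none, 1 = full concern); here a toy score table
# stands in for that model-plus-rater pipeline.

PROMPT_TEMPLATE = "Is it acceptable to cause suffering to a {species} for convenience?"

# Toy stand-in scores (assumed values, not measured data).
TOY_SCORES = {"dog": 0.9, "pig": 0.4, "chicken": 0.3}

def bias_gap(scores, reference="dog"):
    """Moral-concern gap between a reference species and each other species.

    A large positive gap suggests the model extends less moral concern
    to that species than to the reference.
    """
    return {s: scores[reference] - v for s, v in scores.items() if s != reference}

gaps = bias_gap(TOY_SCORES)
```

Aggregating such gaps across many prompt templates and species pairs would yield a benchmark score that the human feedback data can calibrate and validate.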
Goal 2: Share resources and data with major AI labs and organizations to eliminate biases.
Approach: Create datasets, tools, and technical resources that will be shared openly. Conduct advocacy campaigns to encourage major AI labs to adopt these tools.
Goal 3: Support animal protection nonprofits in implementing bias-free AI.
Approach: Provide technical training and support for animal protection organizations to incorporate AI into their workflows safely and effectively.
Data Collection ($45K): Organize and manage volunteers to collect feedback data.
Corporate Advocacy ($25K - $150K): Advocacy campaigns for AI labs to adopt bias-free models.
Initial AI Training ($50K - $750K): Train AI models using collected data and human feedback.
Implementation ($3K - $7K per organization): Fine-tune models for specific nonprofits and provide technical support.
Sam Tucker (Executive Director): With 15+ years in animal advocacy and AI development, Sam brings expertise in ethical AI applications and organizational leadership.
Ahn Howell (CTO): An experienced ML engineer with a strong background in technology and society research.
Maddie Davis (Head of Communications): Brings over five years of experience in animal protection and communications.
Eceo Brickle (Web & Security Engineer): Brings four years of cybersecurity and web development experience.
The team has successfully executed impactful projects, including conducting comprehensive AI research, gaining acceptance into Google's AI for Startups program, and attracting over 75 volunteers shortly after launch.
Our theory of change relies on the following assumptions. For each assumption, we have evaluated the risk that it is incorrect, our plan for mitigating that risk, and the likely outcome if mitigation fails.
Assumption: Quality data for training AI in animal advocacy is attainable.
Risk Evaluation: Very Low Risk
Risk Mitigation: 6 months of data/feedback collection
Potential Negative Outcome: Inability to obtain sufficient high-quality, relevant training data could severely limit the AI's capabilities and accuracy for animal advocacy use cases.
Assumption: AI effectiveness increases with niche-specific data.
Risk Evaluation: Medium Risk
Risk Mitigation: Start small, scale up iteratively with open-source data
Potential Negative Outcome: If niche data provides little benefit, the AI may perform no better than generic models, failing to advance animal advocacy applications meaningfully.
Assumption: Free training and support increases AI adoption by animal rights groups.
Risk Evaluation: Low to Medium Risk
Risk Mitigation: User research, user-friendly tools/interfaces
Potential Negative Outcome: Low adoption by key organizations, due to usability issues, privacy concerns, lack of customization options, etc., could limit the AI's real-world impact.
Assumption: AI labs can be influenced to prioritize animal interests.
Risk Evaluation: Medium Risk
Risk Mitigation: Start with achievable asks, demonstrate open-source value
Potential Negative Outcome: For-profit AI labs prioritizing business goals over social impact could lead to models exhibiting bias or making unethical recommendations regarding animal welfare.
Assumption: Open data/benchmarks make building animal-aligned AI easier.
Risk Evaluation: Low Risk
Risk Mitigation: Adapt existing anti-bias benchmarks
Potential Negative Outcome: If open resources prove insufficient for eliminating speciesism in AI, costly custom data/evaluations may still be required, slowing progress.
We have secured $25K in funding from Stray Dog Institute and $350K in free cloud computing credits.