Vincent Weisser

@vincentweisser

focused on alignment

vincentweisser.com
$0 total balance
$0 charity balance
$0 cash balance

$0 in pending offers

Outgoing donations

Making 52 AI Alignment Video Explainers and Podcasts ($50)
AI Safety Research Organization Incubator - Pilot Program ($200)
AI Safety Research Organization Incubator - Pilot Program ($277)
AI Safety Research Organization Incubator - Pilot Program ($500)
Scaling Training Process Transparency ($150)
Cadenza Labs: AI Safety research group working on own interpretability agenda ($100)
Cadenza Labs: AI Safety research group working on own interpretability agenda ($10)
Cadenza Labs: AI Safety research group working on own interpretability agenda ($100)
Cadenza Labs: AI Safety research group working on own interpretability agenda ($790)
Cadenza Labs: AI Safety research group working on own interpretability agenda ($1000)
Cadenza Labs: AI Safety research group working on own interpretability agenda ($210)
Cadenza Labs: AI Safety research group working on own interpretability agenda ($500)
Exploring novel research directions in prosaic AI alignment ($200)
MATS Funding ($300)
MATS Funding ($500)
Empirical research into AI consciousness and moral patienthood ($50)
Empirical research into AI consciousness and moral patienthood ($70)
Run five international hackathons on AI safety research ($100)
Avoiding Incentives for Performative Prediction in AI ($50)
AI Alignment Research Lab for Africa ($150)
AI Alignment Research Lab for Africa ($100)
AI Alignment Research Lab for Africa ($150)
WhiteBox Research: Training Exclusively for Mechanistic Interpretability ($100)
Avoiding Incentives for Performative Prediction in AI ($100)
Discovering latent goals (mechanistic interpretability PhD salary) ($150)
Introductory resources for Singular Learning Theory ($50)
Holly Elmore organizing people for a frontier AI moratorium ($100)
Recreate the cavity-preventing GMO bacteria BCS3-L1 from precursor ($50)
WhiteBox Research: Training Exclusively for Mechanistic Interpretability ($150)
Activation vector steering with BCI ($150)
Avoiding Incentives for Performative Prediction in AI ($50)
WhiteBox Research: Training Exclusively for Mechanistic Interpretability ($70)
Alignment Is Hard ($70)
Introductory resources for Singular Learning Theory ($70)
WhiteBox Research: Training Exclusively for Mechanistic Interpretability ($100)
Compute and other expenses for LLM alignment research ($100)
Optimizing clinical Metagenomics and Far-UVC implementation ($100)
Apollo Research: Scale up interpretability & behavioral model evals research ($160)
Apollo Research: Scale up interpretability & behavioral model evals research ($250)
Run five international hackathons on AI safety research ($250)
Holly Elmore organizing people for a frontier AI moratorium ($100)
Discovering latent goals (mechanistic interpretability PhD salary) ($400)
Discovering latent goals (mechanistic interpretability PhD salary) ($40)
Scoping Developmental Interpretability ($45)
Scoping Developmental Interpretability ($1000)
Scoping Developmental Interpretability ($455)
Joseph Bloom - Independent AI Safety Research ($250)
Joseph Bloom - Independent AI Safety Research ($100)
Joseph Bloom - Independent AI Safety Research ($50)
Agency and (Dis)Empowerment ($250)
Isaak Freeman ($100)
Medical Expenses for CHAI PhD Student ($43)
Long-Term Future Fund ($50)

Comments


Vincent Weisser

3 months ago

Awesome work! One of the most exciting areas of alignment in my view!


Vincent Weisser

3 months ago

Very excited about this effort; I think it could have great impact. I personally know Kay and think he has a good chance of delivering on this with his team!


Vincent Weisser

5 months ago

Glad to hear, and awesome to see this initiative!


Vincent Weisser

6 months ago

Might be worth keeping it open for more donations if requested?