Vincent Weisser

@vincentweisser

focused on alignment

vincentweisser.com
$0 total balance
$0 charity balance
$0 cash balance

$0 in pending offers

Outgoing donations

Making 52 AI Alignment Video Explainers and Podcasts ($50)
AI Safety Research Organization Incubator - Pilot Program ($200)
AI Safety Research Organization Incubator - Pilot Program ($277)
AI Safety Research Organization Incubator - Pilot Program ($500)
Scaling Training Process Transparency ($150)
Cadenza Labs: AI Safety research group working on own interpretability agenda ($100)
Cadenza Labs: AI Safety research group working on own interpretability agenda ($10)
Cadenza Labs: AI Safety research group working on own interpretability agenda ($100)
Cadenza Labs: AI Safety research group working on own interpretability agenda ($790)
Cadenza Labs: AI Safety research group working on own interpretability agenda ($1000)
Cadenza Labs: AI Safety research group working on own interpretability agenda ($210)
Cadenza Labs: AI Safety research group working on own interpretability agenda ($500)
Exploring novel research directions in prosaic AI alignment ($200)
MATS Program ($300)
MATS Program ($500)
Empirical research into AI consciousness and moral patienthood ($50)
Empirical research into AI consciousness and moral patienthood ($70)
Run five international hackathons on AI safety research ($100)
Avoiding Incentives for Performative Prediction in AI ($50)
AI Alignment Research Lab for Africa ($150)
AI Alignment Research Lab for Africa ($100)
AI Alignment Research Lab for Africa ($150)
WhiteBox Research: Training Exclusively for Mechanistic Interpretability ($100)
Avoiding Incentives for Performative Prediction in AI ($100)
Discovering latent goals (mechanistic interpretability PhD salary) ($150)
Introductory resources for Singular Learning Theory ($50)
Holly Elmore organizing people for a frontier AI moratorium ($100)
Recreate the cavity-preventing GMO bacteria BCS3-L1 from precursor ($50)
WhiteBox Research: Training Exclusively for Mechanistic Interpretability ($150)
Activation vector steering with BCI ($150)
Avoiding Incentives for Performative Prediction in AI ($50)
WhiteBox Research: Training Exclusively for Mechanistic Interpretability ($70)
Alignment Is Hard ($70)
Introductory resources for Singular Learning Theory ($70)
WhiteBox Research: Training Exclusively for Mechanistic Interpretability ($100)
Compute and other expenses for LLM alignment research ($100)
Optimizing clinical Metagenomics and Far-UVC implementation. ($100)
Apollo Research: Scale up interpretability & behavioral model evals research ($160)
Apollo Research: Scale up interpretability & behavioral model evals research ($250)
Run five international hackathons on AI safety research ($250)
Holly Elmore organizing people for a frontier AI moratorium ($100)
Discovering latent goals (mechanistic interpretability PhD salary) ($400)
Discovering latent goals (mechanistic interpretability PhD salary) ($40)
Scoping Developmental Interpretability ($45)
Scoping Developmental Interpretability ($1000)
Scoping Developmental Interpretability ($455)
Joseph Bloom - Independent AI Safety Research ($250)
Joseph Bloom - Independent AI Safety Research ($100)
Joseph Bloom - Independent AI Safety Research ($50)
Agency and (Dis)Empowerment ($250)
Isaak Freeman ($100)
Medical Expenses for CHAI PhD Student ($43)
Long-Term Future Fund ($50)

Comments

Vincent Weisser

about 1 year ago

Awesome work! One of the most exciting areas of alignment in my view!

Vincent Weisser

about 1 year ago

Very excited about this effort; I think it could have great impact. I personally know Kay and think he has a good chance of delivering on this with his team!

Vincent Weisser

over 1 year ago

Glad to hear, and awesome to see this initiative!

Vincent Weisser

over 1 year ago

Might be worth keeping it open for more donations if requested?

Transactions

For | Date | Type | Amount
Making 52 AI Alignment Video Explainers and Podcasts | 11 months ago | project donation | 50
AI Safety Research Organization Incubator - Pilot Program | 12 months ago | project donation | 200
AI Safety Research Organization Incubator - Pilot Program | 12 months ago | project donation | 277
AI Safety Research Organization Incubator - Pilot Program | 12 months ago | project donation | 500
Scaling Training Process Transparency | about 1 year ago | project donation | 150
Cadenza Labs: AI Safety research group working on own interpretability agenda | about 1 year ago | project donation | 100
Cadenza Labs: AI Safety research group working on own interpretability agenda | about 1 year ago | project donation | 10
Cadenza Labs: AI Safety research group working on own interpretability agenda | about 1 year ago | project donation | 100
Cadenza Labs: AI Safety research group working on own interpretability agenda | about 1 year ago | project donation | 790
Cadenza Labs: AI Safety research group working on own interpretability agenda | about 1 year ago | project donation | 1000
Cadenza Labs: AI Safety research group working on own interpretability agenda | about 1 year ago | project donation | 210
Cadenza Labs: AI Safety research group working on own interpretability agenda | about 1 year ago | project donation | 500
Manifund Bank | about 1 year ago | deposit | +500
Manifund Bank | about 1 year ago | deposit | +500
Manifund Bank | about 1 year ago | deposit | +1000
Manifund Bank | about 1 year ago | deposit | +1000
Manifund Bank | about 1 year ago | deposit | +300
Exploring novel research directions in prosaic AI alignment | about 1 year ago | project donation | 200
Manifund Bank | about 1 year ago | deposit | +200
Manifund Bank | about 1 year ago | mana deposit | +10
MATS Program | about 1 year ago | project donation | 300
MATS Program | about 1 year ago | project donation | 500
Manifund Bank | about 1 year ago | deposit | +500
Manifund Bank | about 1 year ago | deposit | +300
Empirical research into AI consciousness and moral patienthood | over 1 year ago | project donation | 50
Empirical research into AI consciousness and moral patienthood | over 1 year ago | project donation | 70
Run five international hackathons on AI safety research | over 1 year ago | project donation | 100
Avoiding Incentives for Performative Prediction in AI | over 1 year ago | project donation | 50
Manifund Bank | over 1 year ago | deposit | +200
AI Alignment Research Lab for Africa | over 1 year ago | project donation | 150
AI Alignment Research Lab for Africa | over 1 year ago | project donation | 100
AI Alignment Research Lab for Africa | over 1 year ago | project donation | 150
WhiteBox Research: Training Exclusively for Mechanistic Interpretability | over 1 year ago | project donation | 100
Avoiding Incentives for Performative Prediction in AI | over 1 year ago | project donation | 100
Discovering latent goals (mechanistic interpretability PhD salary) | over 1 year ago | project donation | 150
Manifund Bank | over 1 year ago | deposit | +500
Introductory resources for Singular Learning Theory | over 1 year ago | project donation | 50
Holly Elmore organizing people for a frontier AI moratorium | over 1 year ago | project donation | 100
Recreate the cavity-preventing GMO bacteria BCS3-L1 from precursor | over 1 year ago | project donation | 50
WhiteBox Research: Training Exclusively for Mechanistic Interpretability | over 1 year ago | project donation | 150
Activation vector steering with BCI | over 1 year ago | project donation | 150
Manifund Bank | over 1 year ago | deposit | +500
Avoiding Incentives for Performative Prediction in AI | over 1 year ago | project donation | 50
Manifund Bank | over 1 year ago | deposit | +500
WhiteBox Research: Training Exclusively for Mechanistic Interpretability | over 1 year ago | project donation | 70
Alignment Is Hard | over 1 year ago | project donation | 70
Introductory resources for Singular Learning Theory | over 1 year ago | project donation | 70
Manifund Bank | over 1 year ago | deposit | +500
WhiteBox Research: Training Exclusively for Mechanistic Interpretability | over 1 year ago | project donation | 100
Compute and other expenses for LLM alignment research | over 1 year ago | project donation | 100
Optimizing clinical Metagenomics and Far-UVC implementation. | over 1 year ago | project donation | 100
Apollo Research: Scale up interpretability & behavioral model evals research | over 1 year ago | project donation | 160
Apollo Research: Scale up interpretability & behavioral model evals research | over 1 year ago | project donation | 250
Run five international hackathons on AI safety research | over 1 year ago | project donation | 250
Holly Elmore organizing people for a frontier AI moratorium | over 1 year ago | project donation | 100
Discovering latent goals (mechanistic interpretability PhD salary) | over 1 year ago | project donation | 400
Discovering latent goals (mechanistic interpretability PhD salary) | over 1 year ago | project donation | 40
Scoping Developmental Interpretability | over 1 year ago | project donation | 45
Scoping Developmental Interpretability | over 1 year ago | project donation | 1000
Scoping Developmental Interpretability | over 1 year ago | project donation | 455
Joseph Bloom - Independent AI Safety Research | over 1 year ago | project donation | 250
Joseph Bloom - Independent AI Safety Research | over 1 year ago | project donation | 100
Joseph Bloom - Independent AI Safety Research | over 1 year ago | project donation | 50
Manifund Bank | over 1 year ago | deposit | +1000
Agency and (Dis)Empowerment | over 1 year ago | project donation | 250
Manifund Bank | over 1 year ago | deposit | +2000
<e083e3b0-a131-4eaa-8a83-6a146a196432> | over 1 year ago | profile donation | 100
Medical Expenses for CHAI PhD Student | over 1 year ago | project donation | 43
<03fac9ff-2eaf-46f3-b556-69bdee303a1f> | over 1 year ago | profile donation | 50
Manifund Bank | over 1 year ago | deposit | +900
Manifund Bank | over 1 year ago | deposit | +100