Vincent Weisser

@vincentweisser

focused on alignment

vincentweisser.com
Total balance: $0
Charity balance: $0
Cash balance: $0
Pending offers: $0

Outgoing donations

Making 52 AI Alignment Video Explainers and Podcasts | $50 | 11 months ago
AI Safety Research Organization Incubator - Pilot Program | $200 | 12 months ago
AI Safety Research Organization Incubator - Pilot Program | $277 | 12 months ago
AI Safety Research Organization Incubator - Pilot Program | $500 | 12 months ago
Scaling Training Process Transparency | $150 | about 1 year ago
Cadenza Labs: AI Safety research group working on own interpretability agenda | $100 | about 1 year ago
Cadenza Labs: AI Safety research group working on own interpretability agenda | $10 | about 1 year ago
Cadenza Labs: AI Safety research group working on own interpretability agenda | $100 | about 1 year ago
Cadenza Labs: AI Safety research group working on own interpretability agenda | $790 | about 1 year ago
Cadenza Labs: AI Safety research group working on own interpretability agenda | $1000 | about 1 year ago
Cadenza Labs: AI Safety research group working on own interpretability agenda | $210 | about 1 year ago
Cadenza Labs: AI Safety research group working on own interpretability agenda | $500 | about 1 year ago
Exploring novel research directions in prosaic AI alignment | $200 | about 1 year ago
MATS Program | $300 | about 1 year ago
MATS Program | $500 | about 1 year ago
Empirical research into AI consciousness and moral patienthood | $50 | over 1 year ago
Empirical research into AI consciousness and moral patienthood | $70 | over 1 year ago
Run five international hackathons on AI safety research | $100 | over 1 year ago
Avoiding Incentives for Performative Prediction in AI | $50 | over 1 year ago
AI Alignment Research Lab for Africa | $150 | over 1 year ago
AI Alignment Research Lab for Africa | $100 | over 1 year ago
AI Alignment Research Lab for Africa | $150 | over 1 year ago
WhiteBox Research: Training Exclusively for Mechanistic Interpretability | $100 | over 1 year ago
Avoiding Incentives for Performative Prediction in AI | $100 | over 1 year ago
Discovering latent goals (mechanistic interpretability PhD salary) | $150 | over 1 year ago
Introductory resources for Singular Learning Theory | $50 | over 1 year ago
Holly Elmore organizing people for a frontier AI moratorium | $100 | over 1 year ago
Recreate the cavity-preventing GMO bacteria BCS3-L1 from precursor | $50 | over 1 year ago
WhiteBox Research: Training Exclusively for Mechanistic Interpretability | $150 | over 1 year ago
Activation vector steering with BCI | $150 | over 1 year ago
Avoiding Incentives for Performative Prediction in AI | $50 | over 1 year ago
WhiteBox Research: Training Exclusively for Mechanistic Interpretability | $70 | over 1 year ago
Alignment Is Hard | $70 | over 1 year ago
Introductory resources for Singular Learning Theory | $70 | over 1 year ago
WhiteBox Research: Training Exclusively for Mechanistic Interpretability | $100 | over 1 year ago
Compute and other expenses for LLM alignment research | $100 | over 1 year ago
Optimizing clinical Metagenomics and Far-UVC implementation. | $100 | over 1 year ago
Apollo Research: Scale up interpretability & behavioral model evals research | $160 | over 1 year ago
Apollo Research: Scale up interpretability & behavioral model evals research | $250 | over 1 year ago
Run five international hackathons on AI safety research | $250 | over 1 year ago
Holly Elmore organizing people for a frontier AI moratorium | $100 | over 1 year ago
Discovering latent goals (mechanistic interpretability PhD salary) | $400 | over 1 year ago
Discovering latent goals (mechanistic interpretability PhD salary) | $40 | over 1 year ago
Scoping Developmental Interpretability | $45 | over 1 year ago
Scoping Developmental Interpretability | $1000 | over 1 year ago
Scoping Developmental Interpretability | $455 | over 1 year ago
Joseph Bloom - Independent AI Safety Research | $250 | over 1 year ago
Joseph Bloom - Independent AI Safety Research | $100 | over 1 year ago
Joseph Bloom - Independent AI Safety Research | $50 | over 1 year ago
Agency and (Dis)Empowerment | $250 | over 1 year ago
Isaak Freeman | $100 | over 1 year ago
Medical Expenses for CHAI PhD Student | $43 | over 1 year ago
Long-Term Future Fund | $50 | over 1 year ago

Comments

Cadenza Labs: AI Safety research group working on own interpretability agenda
Vincent Weisser, about 1 year ago:
Awesome work! One of the most exciting areas of alignment, in my view!

AI Safety Research Organization Incubator - Pilot Program
Vincent Weisser, about 1 year ago:
Very excited about this effort; I think it could have great impact. I personally know Kay and think he has a good chance of delivering on this with his team!

Empowering AI Governance - Grad School Costs Support for Technical AIS Research
Vincent Weisser, over 1 year ago:
Is this project still seeking funding, or is it unrelated to this one? https://manifund.org/projects/gabriel-mukobi-summer-research

AI Alignment Research Lab for Africa
Vincent Weisser, over 1 year ago:
Glad to hear, and awesome to see this initiative!

Compute and other expenses for LLM alignment research
Vincent Weisser, over 1 year ago:
Might be worth keeping it open for more donations if requested?

Transactions

For | Date | Type | Amount
Making 52 AI Alignment Video Explainers and Podcasts | 11 months ago | project donation | 50
AI Safety Research Organization Incubator - Pilot Program | 12 months ago | project donation | 200
AI Safety Research Organization Incubator - Pilot Program | 12 months ago | project donation | 277
AI Safety Research Organization Incubator - Pilot Program | 12 months ago | project donation | 500
Scaling Training Process Transparency | about 1 year ago | project donation | 150
Cadenza Labs: AI Safety research group working on own interpretability agenda | about 1 year ago | project donation | 100
Cadenza Labs: AI Safety research group working on own interpretability agenda | about 1 year ago | project donation | 10
Cadenza Labs: AI Safety research group working on own interpretability agenda | about 1 year ago | project donation | 100
Cadenza Labs: AI Safety research group working on own interpretability agenda | about 1 year ago | project donation | 790
Cadenza Labs: AI Safety research group working on own interpretability agenda | about 1 year ago | project donation | 1000
Cadenza Labs: AI Safety research group working on own interpretability agenda | about 1 year ago | project donation | 210
Cadenza Labs: AI Safety research group working on own interpretability agenda | about 1 year ago | project donation | 500
Manifund Bank | about 1 year ago | deposit | +500
Manifund Bank | about 1 year ago | deposit | +500
Manifund Bank | about 1 year ago | deposit | +1000
Manifund Bank | about 1 year ago | deposit | +1000
Manifund Bank | about 1 year ago | deposit | +300
Exploring novel research directions in prosaic AI alignment | about 1 year ago | project donation | 200
Manifund Bank | about 1 year ago | deposit | +200
Manifund Bank | about 1 year ago | mana deposit | +10
MATS Program | about 1 year ago | project donation | 300
MATS Program | about 1 year ago | project donation | 500
Manifund Bank | about 1 year ago | deposit | +500
Manifund Bank | about 1 year ago | deposit | +300
Empirical research into AI consciousness and moral patienthood | over 1 year ago | project donation | 50
Empirical research into AI consciousness and moral patienthood | over 1 year ago | project donation | 70
Run five international hackathons on AI safety research | over 1 year ago | project donation | 100
Avoiding Incentives for Performative Prediction in AI | over 1 year ago | project donation | 50
Manifund Bank | over 1 year ago | deposit | +200
AI Alignment Research Lab for Africa | over 1 year ago | project donation | 150
AI Alignment Research Lab for Africa | over 1 year ago | project donation | 100
AI Alignment Research Lab for Africa | over 1 year ago | project donation | 150
WhiteBox Research: Training Exclusively for Mechanistic Interpretability | over 1 year ago | project donation | 100
Avoiding Incentives for Performative Prediction in AI | over 1 year ago | project donation | 100
Discovering latent goals (mechanistic interpretability PhD salary) | over 1 year ago | project donation | 150
Manifund Bank | over 1 year ago | deposit | +500
Introductory resources for Singular Learning Theory | over 1 year ago | project donation | 50
Holly Elmore organizing people for a frontier AI moratorium | over 1 year ago | project donation | 100
Recreate the cavity-preventing GMO bacteria BCS3-L1 from precursor | over 1 year ago | project donation | 50
WhiteBox Research: Training Exclusively for Mechanistic Interpretability | over 1 year ago | project donation | 150
Activation vector steering with BCI | over 1 year ago | project donation | 150
Manifund Bank | over 1 year ago | deposit | +500
Avoiding Incentives for Performative Prediction in AI | over 1 year ago | project donation | 50
Manifund Bank | over 1 year ago | deposit | +500
WhiteBox Research: Training Exclusively for Mechanistic Interpretability | over 1 year ago | project donation | 70
Alignment Is Hard | over 1 year ago | project donation | 70
Introductory resources for Singular Learning Theory | over 1 year ago | project donation | 70
Manifund Bank | over 1 year ago | deposit | +500
WhiteBox Research: Training Exclusively for Mechanistic Interpretability | over 1 year ago | project donation | 100
Compute and other expenses for LLM alignment research | over 1 year ago | project donation | 100
Optimizing clinical Metagenomics and Far-UVC implementation. | over 1 year ago | project donation | 100
Apollo Research: Scale up interpretability & behavioral model evals research | over 1 year ago | project donation | 160
Apollo Research: Scale up interpretability & behavioral model evals research | over 1 year ago | project donation | 250
Run five international hackathons on AI safety research | over 1 year ago | project donation | 250
Holly Elmore organizing people for a frontier AI moratorium | over 1 year ago | project donation | 100
Discovering latent goals (mechanistic interpretability PhD salary) | over 1 year ago | project donation | 400
Discovering latent goals (mechanistic interpretability PhD salary) | over 1 year ago | project donation | 40
Scoping Developmental Interpretability | over 1 year ago | project donation | 45
Scoping Developmental Interpretability | over 1 year ago | project donation | 1000
Scoping Developmental Interpretability | over 1 year ago | project donation | 455
Joseph Bloom - Independent AI Safety Research | over 1 year ago | project donation | 250
Joseph Bloom - Independent AI Safety Research | over 1 year ago | project donation | 100
Joseph Bloom - Independent AI Safety Research | over 1 year ago | project donation | 50
Manifund Bank | over 1 year ago | deposit | +1000
Agency and (Dis)Empowerment | over 1 year ago | project donation | 250
Manifund Bank | over 1 year ago | deposit | +2000
<e083e3b0-a131-4eaa-8a83-6a146a196432> | over 1 year ago | profile donation | 100
Medical Expenses for CHAI PhD Student | over 1 year ago | project donation | 43
<03fac9ff-2eaf-46f3-b556-69bdee303a1f> | over 1 year ago | profile donation | 50
Manifund Bank | over 1 year ago | deposit | +900
Manifund Bank | over 1 year ago | deposit | +100