Manifund

The market for grants

Manifund helps great charities get the funding they need. Discover amazing projects, buy impact certs, and weigh in on what gets funded.

Comments

'Making God': a Documentary on AI Risks for the Public
Connor Axiotes
about 22 hours ago

@jeff3454 thanks for your donations! Could you please (if you'd like) email me at connor.axiotes@gmail.com? We'd love to say thanks.

'Making God': a Documentary on AI Risks for the Public
Connor Axiotes
about 22 hours ago

@Austin thank you! We appreciate it.

'Making God': a Documentary on AI Risks for the Public
Austin Chen
1 day ago

@Connoraxiotes Just wanted to say that I very much appreciate this level of transparency and openness about your thinking & finances; I think it's super helpful for others embarking on similar projects!

'Making God': a Documentary on AI Risks for the Public
Connor Axiotes
2 days ago

@michaeltrazzi hey!

All in all, just above $17,000. The 9-10 people on set also included myself, Mike, Gary, and our film photographer.

A Netflix shoot usually costs a minimum of $20k, and we want our shoots to be at that level, as our aim is to break into film festivals and then streaming services. That's why we had four cameras (three FX6s and one FX3)!

So I would pay this price again, because we now have an interview that is intriguing and super cinematic-looking, which was our purpose with this documentary.

It was a particularly expensive shoot because:

  • We had less than one week's notice to sort the whole shoot and, as we are based in SF, to fly over to NYC and hire a local crew.

  • Because Gary Marcus was coming to NYC and no longer has an office here, we had to pay for a filming location (whereas until now we've been using people's offices and homes).

  • Our Airbnb was expensive because there was a chance we might have had to house crew and do some b-roll filming there, so it had to suit both purposes.

  • We had to ship extra luggage on our flights.

I will note that I've been quite successful at getting price reductions on things like crew and gear hire, and I'll continue to fight for value. Mike is also quite experienced with these bigger shoots, so he has a good intuition for what things should cost.

Regarding the burn: we also raised $50,000 privately in the last few weeks. So although the interview was a significant cost, we thought it was worth it. But to finish the documentary we still need to raise a lot.

Michael, you should come help us fundraise aha! We could use your expertise.

[Costings below if you'd like a look!]

'Making God': a Documentary on AI Risks for the Public
Michaël Rubens Trazzi
2 days ago

@Connoraxiotes curious: how much did flying to NYC and having 9-10 people on set cost?

With that burn, how many more interviews can you shoot?

Coordinal Research: Accelerating the research of safely deploying AI systems.
Michael Chen
3 days ago

Relevant: AIs at the current capability level may be important for future safety work

'Making God': a Documentary on AI Risks for the Public
Connor Axiotes
3 days ago

Progress update

'Making God' Update [12th May 2025] - Gary Marcus Interview, NYC

Watch our exclusive teaser clip of the interview on X/Twitter and LinkedIn.

We spent the last couple of weeks in New York and hired a full crew (around 9-10 people on set) to film a professional, cinematic interview with Gary Marcus.

EVN General Support Application
Elizabeth Van Nostrand
3 days ago

Work funded with this grant:
- extended my work on chaos theory, which would otherwise have run out of money in August (https://acesounderglass.com/2024/09/20/applications-of-chaos-saying-no-with-hastings-greer/, https://acesounderglass.com/2024/11/01/11924/)
- my VO2max research was covered by a client, but tutoring on the write-up was covered by this grant (https://acesounderglass.com/2025/03/09/11954/)
- luck-based medicine updates (https://acesounderglass.com/2025/04/11/journal-of-null-results-ezmelt-sublingual-vitamins/, including a prediction market on the outcome, and https://acesounderglass.com/2024/12/01/luck-based-medicine-no-good-very-bad-winter-cured-my-hypothyroidism/)
- unpublished drafts on cults, abusive relationships, and high-investment groups
- AI research tool comparisons: https://acesounderglass.com/2024/10/04/ai-research-assistants-competition-2024q3-tie-between-elicit-and-you-com/

EVN General Support Application
Elizabeth Van Nostrand
3 days ago

This eventually led to https://www.lesswrong.com/posts/son5eEGymm4h856J9/estimating-the-benefits-of-a-new-flu-drug-bxm

'Making God': a Documentary on AI Risks for the Public
Seldon
3 days ago

🏴‍☠️ We are staunch supporters of creating a global movement for existential security and, if this is a success, it will be one of the best ways to kickstart it. We will be watching from the sidelines!

'Making God': a Documentary on AI Risks for the Public
Stephan Wäldchen
3 days ago

This is a great initiative! Sounds like the crew knows what they are doing!

The new pope is also into AIS; maybe you could get an interview with him? :D

Arkose may close soon
Vael Gates
4 days ago

Stating interest in pledging $8k if we're close to the funding bar!

Split Personality Training
Marius Hobbhahn
4 days ago

Funding with $2000 to get the project off the ground.

I have talked to Florian about this project during the last MATS cohort presentation day. I felt like his conceptual considerations were good, and the motivation makes sense.

I have no clear evidence for or against his ability to execute projects quickly, which is why I'm keeping it at $2k.

I might consider more funding if there are good early results or other strong evidence of progress that I can easily verify. I'd recommend sprinting to a 4-6 week MVP and publishing the results, or at least writing them up and sharing them privately.

AI forecasting and policy research by the AI 2027 team
Girish Sastry
5 days ago

I'm excited for more thoughtful and informed public discussion of AI futurism.

Out of This Box: AI Safety Musical
Stephan Wäldchen
5 days ago

@evelynciara Hey Evelyn, thanks for the interest! I think actually taking it on tour is a challenge, since we are all non-professionals doing this as a side project, and the overhead of organising something like that is considerable.
We definitely plan to make the license available, though, and are happy to support groups that plan a staging. The whole concept was to make it easy to learn and straightforward to stage.

Arkose may close soon
Arkose
6 days ago

@CarmenCondor Just to clarify -- our initial outreach email is only very coarsely personalised (e.g. based on whether they are in academia vs industry). I'm describing the pitch I would give somebody on a 1:1 call.

Arkose may close soon
Carmen Csilla Medina
6 days ago

@Arkose, sounds like a great way to go about it! Though I assume the initial filtering needs to be at least moderately strong, since such individually tailored outreach can be time-consuming (but a good investment).

Arkose may close soon
Arkose
6 days ago

@CarmenCondor Unfortunately this varies a lot depending on who I'm speaking with, so it's hard to summarise.

I agree that the "career coaching" frame is not always appropriate, especially for academics. Often, I find it useful to emphasise both the potential for positive impact and the simple legitimacy of the work -- for many, it's useful to highlight that this is a serious field of research which can be published at top conferences and which can get funded. This often involves some more technical discussion of the areas of overlap with their research; I find AI safety is now broad enough that there is often some area of overlap. With professors especially, I will often discuss any currently open funds which might be relevant to their work, and encourage them to check our opportunities page when seeking funding in the future.

Arkose may close soon
Carmen Csilla Medina
6 days ago

@Arkose, super interesting, thank you.
If it's not a trade secret: what is your pitch to those who are not self-selected or referred? (E.g. a pitch about applying their skills to make a positive impact in the world.)

I don't know this target group very well, but I assume they tend to already have a stable job, so "career coaching" would not necessarily appeal to them. However, I also know that many mid-career professionals experience some existential crisis if they feel that their work is not making a positive impact in the world.

Arkose may close soon
Arkose
6 days ago

@CarmenCondor Unfortunately we don't track this information well. I was able to get some data on 56% of our calls (mostly the direct outreach calls). Of these, 90% were in the US, UK, or Europe. This means we've had 17 calls with researchers and engineers outside of these areas, including in China, Korea, and Singapore. There may be some inaccuracy in these statistics as it's not a key metric for us, but I do expect them to be broadly indicative.
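
For a rough sense of scale, assuming those 17 calls are exactly the remaining 10% of the tracked subset (an assumption; the comment gives no raw counts), the implied totals work out to roughly:

\[
\text{tracked calls} \approx \frac{17}{1 - 0.90} = 170, \qquad \text{total calls} \approx \frac{170}{0.56} \approx 304
\]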

Arkose may close soon
Arkose
6 days ago

@CarmenCondor Hi Carmen, great question!

We reach out directly to researchers who've submitted to top AI conferences, as well as speaking with those who are already interested in AI safety (via referrals or direct applications through our website).

62% of our calls are sourced from direct outreach via email to researchers and engineers who've had work accepted to NeurIPS, ICML, or ICLR. As assessed by us after the call, 46% of the professionals on these calls had no prior knowledge of AI safety, and a further 25% were 'sympathetic' (e.g. had heard vague arguments before, but were largely unfamiliar). On these calls, we focus on an introduction to why we might be concerned about large-scale risks from AI before discussing technical AI safety work relevant to their background, and covering ways they could get involved.

The remainder of our calls come from a variety of sources, but are broadly more familiar with AI safety, coming from places like the MATS program or 80,000 Hours. We identify those who are in need of support primarily through self-selection, but also through a referral process from these organizations. On these calls, our value-add is having an in-depth understanding of the needs of AI safety organisations and recommending tailored, specific next steps to get involved.
