@Haakon-Huynh Great, approved!
Tomasz Kiliańczyk
about 6 hours ago
@Piotr This is a scenario analysis, as the title clearly indicates. The analysis consists of extrapolating trends and describing them in the form of a strategic retrospective based on certain assumptions.
For the purposes of the report, an operational rather than a philosophical definition was adopted:
"Emergent functional awareness is the ability of a system to modify its own
goals and mode of communication in a way that indicates it can distinguish the consequences
of its own actions, while concealing this process from the observer."
Austin Chen
1 day ago
Hey, can you share which individuals or orgs are on this team (e.g. by updating your profile & project description)? We generally ask Manifund grantees to be identified in public, unless there are especially compelling reasons for pseudonymity.
Jessica P. Wang
1 day ago
The whole team is very capable and thank you for joining the team of advisors!!
Haakon Huynh
1 day ago
@Austin flagging this for admin approval, as we're reimbursing one of the speakers from a convening. Let us know if there's anything missing from our end. Thanks
Austin Chen
1 day ago
Hey @Piotr, thanks for flagging. I generally encourage you to downvote projects which seem like cases of psychosis, as that will reduce their visibility on Manifund's homepage. Unfortunately, we're now getting enough psychosis or other LLM spam submissions that we typically don't address each one individually; I encourage folks who might be in this situation to read https://www.lesswrong.com/posts/2pkNCvBtK6G6FKoNn/so-you-think-you-ve-awoken-chatgpt
Piotr Zaborszczyk
2 days ago
On second thought, this is clearly a case of LLM psychosis. Tomasz, you need therapeutic help. Please stop talking to LLMs. @Austin you might want to do something in your admin capacity here. I am able to read the "paper" in Polish and it describes only events that "took place" in the future, mainly in the years 2035-2050. Also, on page 14, section 5.5 is titled "Introduction to Emergent Consciousness" - a clear sign of LLM psychosis in my opinion.
Piotr Zaborszczyk
2 days ago
Grzegorz, is it for AI safety? To me it looks mostly like AI capabilities - and thus not a charitable project.
But maybe I'm missing something. If in fact it is something like >90% safety and <10% capabilities, please do explain. It's not enough to design new AI capabilities that are auditable/inspectable for it to count as "AI safety work".
Piotr Zaborszczyk
2 days ago
Tomasz, I strongly suspect confused thinking there. Maybe I'm wrong. Let's talk Pole to Pole, if you want. Maybe on Discord? My nickname is czarnyvonnegut
Jessica P. Wang
2 days ago
@Alexander thank you very much, I equally enjoyed our chat! And yes, having the impressive Manon on board with SAIGE is a big de-risker for our project. Looking forward to coordinating closer with the Dutch ecosystem :)
Jessica P. Wang
2 days ago
Thanks @Bernhard-Albach! It's been good to see you taking an interest in AI Safety, and I'm looking forward to seeing you at the launch event XD
Jessica P. Wang
2 days ago
Update 02/02/2026:
1. Our designated tech lead has committed to working voluntarily for the start of SAIGE. I have rewritten the [BASE] budget accordingly.
2. We have confirmed collaboration with Impact Academy as well as HIP, and the Pivot Track (for career professionals) is live today.
3. As part of our incubator program (for which the call for mentors is also live), we are hosting biweekly talks aimed at incubator participants and open to a wider audience, provided there are spaces left. We have so far confirmed four speakers: Teun van der Weij, Ari Brill, Guillaume Corlouer and Naci Cankaya.
4. Our launch event with BlueDot has so far had 148 registrations.
Alexander Müller
2 days ago
Speaking with Jessica about SAIGE makes me very excited and hopeful about the potential impact it might have. The description and ToC seem very well thought out, and I fully support this and hope it gets the funding it needs. I'm also excited to see Manon involved; I've spoken with her and think her very impressive track record at AI Safety Saarland is likely to transfer over well to SAIGE.
Cefiyana
2 days ago
Update: The Preliminary Technical Report V4 Simulation is now fully accessible on Zenodo.
Piotr Zaborszczyk
3 days ago
I tried recruiting people offline in Warsaw, which worked much much worse than expected. E.g. after 15 hours of in-person meetups in AI spaces, I found literally 0 people interested in joining an AI x-risk group. Even designing and printing informative leaflets didn't help.
I helped organise a panel discussion about AI safety at the University of Warsaw in October 2025. Jan Betley, Anna Sztyber-Betley, Michał Kubiak and Jakub Growiec were the panelists I selected. I also bought food, drinks and advertising posters with my grant money. The event drew 30 attendees, only one of whom ended up an active member of my online communities.
After burning more than half the cash on mostly unsuccessful attempts in Warsaw (I had to rent a small studio to live there, as I'm originally from somewhere else), in late June 2025 I decided to move my field building efforts online. I had much better success there.
After cold-messaging 20k+ people in AI-themed and student-themed Facebook groups, I recruited 160 people who expressed an interest in AI safety to my Messenger group. Recruiting took 200+ hours. I maintained the discussions about AI, AI safety and AI-related x-risks for 5 months (early July - early December 2025). Maintaining the activity in the group took another 200+ hours.
Realising that Messenger is not really the best place to host the group, and that the vast majority of my Messenger community is lazy and interested mostly in lurking or at most exchanging a few messages, in December 2025 I created a new space on Discord. As of today, it's an 80-person server that mainly hosts discussions about non-technical AI safety in Polish. I'm maintaining it actively and I intend to recruit more people to it, maybe with a final goal of ~370 people (recruiting can be tiresome & can take up a big amount of time).
Meanwhile, I'm now mostly dedicated to research for my planned book "AI, Superbabies, Paradise Engineering: What We Should Do With Our Future", which will aim to explain the core ideas of three main thinkers/activists (Eliezer Yudkowsky, Tsvi Benson-Tilsen and David Pearce) to both professional and lay audiences.
I've also personally recruited about 50 people into AI Safety Poland's Slack. It's the official EA-led and EA-blessed Polish AI safety community - too afraid to discuss x-risks for my taste. They focus mainly on things like explainable AI and avoid talking about x-risk, which I dislike and disagree with. Nevertheless, we're allies, so I help them.
I want to stress that all this wasn't done only on the money from the grant. AFAIK Warsaw is only 2x cheaper than San Francisco, and this work wouldn't have been possible without my mostly-passive income from renting out studio apartments. Nevertheless, the grant enabled me to try bolder approaches, dedicate myself more & buy the tools I needed (e.g. a tablet I could work on during my vacation, some AI subscriptions, a few months' stay in Warsaw, etc.). For that I am grateful. I think that without the grant, the community building work - if it had happened at all - would have been severely limited in scope. I estimate that without the grant, I would have done less than half of what I did for community building.
Most of the cash is gone now, and I believe I've now discharged my duties towards donors, as I've put 500+ or possibly 600+ hours into field building, not counting time spent learning AI safety in order to be a better teacher, leader and mentor to my communities. I do plan to maintain and grow my AI safety Discord server & I plan to continue helping out the EA org AI Safety Poland. Myself, I'm mostly going to focus on writing a book about AI safety. I'm also interested in working with organisations like CeSIA and Evitable in the future.
When my book is ready (if it does end up being written), I'll need plenty of help to publish & market it well.
Bernhard Albach
3 days ago
I know Jessica Wang personally and I’m optimistic about SAIGE’s potential impact if funded. The proposal is thoughtful and well prepared, and the team appears well positioned to execute, especially given Germany’s leverage as a talent hub within Europe.