Looks exciting. My personal view is that there's a lot of progress waiting to be made on theoretical/agent foundations research. The quality of the program will of course depend a lot on the quality of fellows; I'm curious if there are many people already on your radar, or if you think you have good leads there.
A few other thoughts:
- I think trying to persuade people that the alignment problem is hard is often counterproductive. The mindset of "I need to try to solve an extremely difficult problem" is not very conducive to thinking up promising research directions. More than anything else, I'd like people to come out of this with a sense of why the alignment problem is interesting. Happy to talk more about this in a call.
- Some of the selection criteria seem a bit counterproductive. a) "Decent team players, non-disruptive to the group cohesion" seems like a bad way to select for amazing scientists, and might rule out some of the most interesting candidates. And b) "would care about saving the world and all their friends if they thought human extinction was likely" seems likely to select mainly for EA-type motivations, which IMO also make people worse at open-ended theoretical research. Meanwhile c) "highly technically skilled" is reasonable, but I care much more about clarity of thinking than about literal technical skill.
If the organizers have good reasons to expect high-quality candidates, I expect I'd pitch in 5-10k.