This seems like a good idea overall, and I strongly recommend funding it to at least the 1-month Seminar level.
A few people have floated this kind of program as a way to widen the funnel of people who might want to work on AI safety research, with the hope of kickstarting the involvement of those who otherwise might not know how to get started, or who lack concepts that existing researchers (even marginal ones) would consider very basic.
A 1-month version would likely be best-in-class given the relative lack of comparable programs (itself a problem!), though I think a 1-year version might be overambitious and risk burning out or disengaging scholars unless executed extremely well and carefully.
All the same, I've worked with Mateusz for some time and been part of what turned into a very small category theory reading group with him, and I think he's very well-suited to this approach. AI safety, especially the kind that attempts to find a solution to alignment rather than a dozen ad-hoc patches to existing LLMs, suffers badly from a lack of serious research groups. This project looks to me like it would be at least half as promising per person as MATS, and maybe three-quarters as promising as PIBBSS. I've been part of both; they have similar mission statements, have been funded at higher levels, and both frequently claim that they want to see cousin orgs founded!
I would donate substantially to this if I had piles of tech or crypto money, but sadly I do not. I hope that other people who do have piles of tech or crypto money will hear me and donate in my place. If I were a grantmaker, I would almost certainly be directing grant funds to this endeavor.