@Austin I'd like to request that Jonas's comment be temporarily removed, because the substance of the email Jonas sent is not nearly as concerning as the negative impact of the public comment. Once we have resolved the issues raised in the private email, we can consider reposting the comment, either as is or in modified form.
Thank you for your input, Jonas. I'm interested to understand the nature of your significant reservations. One of the appealing aspects of Manifund is its decentralized structure, which encourages open dialogue. This helps to counteract the traditional system where funding often depends on personal networks and reciprocal favors.
Would you be willing to share more details privately, given your concerns about public disclosure?
4 months ago
Thank you for the thoughtful questions, Renan.
1- The research agenda is still taking shape; a key goal over the next 3-4 months is to further define the directions and priorities and to secure funding from the identified sources. That said, I envision a significant portion focusing on interpretability, particularly interpreting reward models learned via reinforcement learning. Additional areas will likely include safe verification techniques, which align with much of Stuart Russell's work as well as the areas of expertise of Phil and David.
2- Regarding team composition, we expect at least two existing research fellows to be involved, and we plan to hire several PhD students. Most members will have strong technical backgrounds and solid grounding in the AI alignment literature. We aim to assemble a diverse team with complementary strengths to pursue impactful research directions.
Please let me know if you have any other questions! I'm excited by the potential here and value your perspective.