I am consistently impressed with the work that comes out of EffiSciences and with how principled their thinking around these problems is. See e.g. Charbel-Raphael's Against Almost Every Theory of Change of Interpretability. See also their impact estimations for the AI safety unit, which I believe rival those of many significantly more expensive programs.
Co-director at Apart Research
https://kran.ai
@yashvardhan Thank you for the support.
We have the Apart Lab program, which helps hackathon teams finish their projects and publish them in top-tier academic venues within ML (and soon, contribute directly to governance).
People who go through the Apart funnel have received jobs at Apart, ARC Evals, and Oxford, but we unfortunately haven't had the capacity to track everyone's career path, so this is not comprehensive data.
Similar to the above point, our data here is not comprehensive, though we do have concrete stories of 1) perspective shifts on AI safety, 2) career opportunities, 3) hackathon projects consistently used as application material for other programs, 4) empowering people to kickstart their personal projects, 5) using the hackathons as an opportunity to run new research projects they would otherwise not have run, and much else. It's quite interesting to hear what participants' experiences are!
Update: We have supported further hackathons by partnering with existing institutions within AI safety. The Manifund funding partially supports those hackathons that have not been fully funded through partnerships.
This seems like a high-EV project, and having worked with FB at Apart Research, I have been impressed by his work ethic and commitment to real impact. One of my worries when a new lab is established is that it could get caught up producing low-impact research, but with him at the helm and the support of Krueger, I have little doubt that this lab will take concrete paths towards reducing existential risk from AI.
Additionally, the support of the Torr Vision Group provides credibility that other new labs would need to build up over a longer period, potentially speeding up the path to impact for the proposed project. I do not specialize in grant-making and offer this donation as a call to action for other grant-makers to support the project.
Thank you for the kind words, Renan. To comment on your concerns:
For anyone interested in evaluating the projects' quality, you can visit the projects page, the tag on LessWrong, and the published research on the Apart website. I do not wish to misrepresent the judges' opinions, but they have generally been pleasantly surprised at the quality of the work given the hackathons' short duration.
The academic publication pipeline is, in our view, underrated as a deadline-based, output-focused research incubation process within AI safety. I should write this up as a post at some point, but see below for a few of the research outputs such a process creates beyond the publication and peer review itself. We also expect to incentivize more forum posts.
The published research at the target venue, such as Barone et al. (2023) at ACL
The arXiv preprint (often used as the up-to-date version), such as Foote et al. (2023)
A website for your research project, such as apartresearch.com/research/inverse-scaling-code
A presentation of the research project for the hackathon participants, such as Jason's talk on Memory Editing Limitations
Any conference-specific content, such as a lightning talk, a poster, a poster-session recording, an invited presentation, etc.