
Run five international hackathons on AI safety research

Active Grant
$10,950 raised
$50,000 funding goal

Project summary

We're seeking six months of support for a project manager to coordinate and execute international technical hackathons in AI safety with Apart Research.

Despite the growing importance of alignment research, there remains a shortage of researchers in this field. As alignment research agendas become increasingly relevant and applicable to real-world scenarios, there is a pressing need for skilled researchers and consultants to ensure AI safety across various institutions, organizations, and companies.

The hackathons, our tech stack, the research, and the team's responsibilities have all grown alongside the field. Expanding the team will ensure the hackathons can grow sustainably.

Our Alignment Jam hackathons, initiated in November 2022, have drawn over 900 participants across 13 hackathons, with 168 research projects submitted at more than 69 events in over 30 locations. These events run both virtually and locally and showcase talent from across the world. At Apart Research, we work closely with the most talented teams, incubating them as researchers through an academic publication track. Our hackathons have led to research published in reputable academic venues.

Read more about the hackathons at alignmentjam.com.

What are this project's goals and how will you achieve them?

Host five research hackathons on technical alignment topics. The current plan is 1) evals, 2) AI safety research entrepreneurship, 3) safety benchmarking, 4) interpretability, and 5) AI safety standards. The order is not set in stone, but these topics are important, relevant to future research needs, and approachable.

How will this funding be used?

  • $10,000 for prizes for the hackathons

  • $35,000 for the project manager

  • $5,000 for marketing, software, compute, travel, fiscal sponsorship, and other misc. costs

Who is on your team and what's your track record on similar projects?

Esben Kran (CEO) and Fazl Barez (CTO) have published research from the hackathons (apartresearch.com), which were hosted with the core team and previous employees. Surveys show an 8.7/10 net promoter score and a self-reported 9.15% increase in the probability of pursuing an AI safety career after participation, and in testimonials, participants describe many ways the events changed their perspective on AI safety.

Esben founded Apart, developed AI Safety Ideas (aisi.ai), has spoken at EAGx conferences on entering AI safety, researched brain-computer interfacing, worked as a data science consultant for researchers, and was lead data scientist at a previous startup. He is on the board of EA Denmark.

Fazl is finishing his PhD at Edinburgh / Oxford. He is an FLI and Turing fellow and has previously worked at Amazon, Huawei Research, CSER, and Data Lab. He also works with AI Safety Hub and is an advisor to EA Edinburgh.

What are the most likely causes and outcomes if this project fails? (premortem)

Cause: We hire a program manager who is not able to take over the project. Outcome: The hackathons do not live up to their potential, and the team spends too much time managing the new hire. Mitigation: We use the A Method for hiring, run multiple evaluation rounds, and give test tasks to the final candidates. We also set aside multiple full days with Esben to transfer technical and conceptual ownership.

Cause: Info-hazardous research emerges. Outcome: AI safety research is used for illicit activities or contributes to capabilities. Mitigation: All hackathons are hosted on our own platform (see e.g. alignmentjam.com/jam/interpretability), and we mark and retract unsafe projects and communicate with the teams about the best way to move forward (two major cases so far).

Otherwise, the main causes of failure arise from a lack of team capacity. Given our track record, the interest we have seen, and the research that has emerged from the hackathons, the failure modes typical of new hackathons seem unlikely here.

What other funding are you or your project getting?

We are receiving a speculation grant from SFF that supports Apart's operations, which partly covers the hackathons. We are in conversations about general funding for Apart.

Esben Kran Christensen

about 1 year ago

Update: We have supported further hackathons by partnering with existing institutions within AI safety. The Manifund funding partially supports the hackathons that have not been fully funded through partnerships.

Thank you to @RenanAraujo, @AntonMakiievskyi and @yashvardhan for supporting the project.

donated $500
Yashvardhan Sharma

about 1 year ago

I'm mostly deferring to @RenanAraujo 's testimonial here, along with a brief look at the current work of this project. As a fellow community builder, I think hackathons are a really solid way to a) incentivize people to learn about alignment and b) let people already interested test their fit for it. I like the idea of publishing the best ideas from the hackathon as a way to formalize people's work. The process of taking a hackathon concept to a published paper provides great intellectual enrichment, while the tangible published work can also benefit participants' careers.

Not sure which of these already exist, but to further encourage participation and career development, some things I'd recommend are:

  • Offer mentorship and support so participants can further develop their projects after the hackathon ends; this helps ensure the best ideas don't just fizzle out.

  • Connect participants with relevant job opportunities in the AI safety space. Having hackathons lead to jobs, or at least to connections for collaboration and mentorship, would likely give top talent an incentive to participate.

  • Track the career progress of hackathon participants over time, collecting metrics on promotions, new jobs, and other advancements that result from hackathon participation.

  • Share success stories of participants who advanced their careers through hackathon projects.

Esben Kran Christensen

about 1 year ago

@yashvardhan Thank you for the support.

  1. We have the Apart Lab program, which helps hackathon teams finish their projects and publish them in top-tier academic venues within ML (and soon, contribute directly to governance),

  2. People who go through the Apart funnel have received jobs at Apart, ARC Evals, and Oxford but...

  3. ...we unfortunately haven't had the capacity to track everyone's career path, so the data is far from comprehensive.

  4. Similar to the above point, we do have concrete stories of 1) perspective shifts on AI safety, 2) career opportunities, 3) hackathon projects consistently used as application material for other programs, 4) people empowered to kickstart their personal projects, 5) participants using the events to run new research projects they would otherwise not have run, and much else. It's quite interesting to see what participants' experiences are!

donated $5,000
Renan Araujo

over 1 year ago

  • TLDR: I’m contributing to this project based on a) my experience running one of their hackathons in April 2023, which I thought was high quality, and b) my excitement to see this model scaled, as I think it has an outsized contribution to the talent search pipeline for AI safety. I’d be interested in seeing someone with a more technical background evaluate the quality of the outputs, but I don’t think that’s the core of their impact here.

  • My experience with Apart

    • I ran an AI governance hackathon as part of my work at Condor Camp, an organization that does AI safety talent search focused on Brazilian university students.

    • We were experimenting with models different from the camp that could potentially be more cost-effective. We considered doing a hackathon, but the overhead of coming up with good questions, finding speakers/mentors, and overall putting together the intellectual infrastructure would have been too much for us. Apart Research solved that bottleneck.

    • We ran a hackathon (technically, an ideathon, since it was on governance) with ~30 participants. This helped some Condor Camp alumni stay engaged with AI safety beyond the camp, and ultimately “reactivated” one of them who wasn’t working on AI governance. That participant led a team, received an honorary prize, and ended up using the hackathon project to get into a CHERI fellowship.

  • Why I’m excited about scaling this model

    • I think Apart’s approach solves a bottleneck for community builders/talent searchers, making a somewhat robust model (hackathon-type competitions) much more easily accessible.

    • It also reduces the cost of intellectual work for everyone: mentors and evaluators work on the best projects from all over the world, instead of having to repeat the work for every single iteration.

    • As a result, I expect them to help the community broaden its surface area considerably, finding talent more cost-effectively.

  • Challenges and concerns

    • While I’m excited about the talent search and engagement aspect, and the AI governance questions seemed relevant and on point, I’m less sure about the quality of the output itself.

    • Perhaps focusing too much on publishing the output is a suboptimal move, considering it might not be that high quality. On the other hand, they’re getting external feedback from conferences and peer-reviewed journals, so this might easily be an upside rather than a concern. Also, the expectation of seeing their work published is a great incentive for participants.

    • I’d be interested in takes from folks with more technical expertise about the quality of the outputs. So far, my impression is that they at the very least have pretty good evaluators like Neel Nanda (it’d be useful to have evaluators listed on their website, if possible).

    • I didn’t donate more than $5k because that’s 10% of my pot and I’m expecting to donate to other opportunities where I’ll have more counterfactual impact. But I wanted to chip in here to potentially incentivize others to donate to Apart.

Esben Kran Christensen

over 1 year ago

Thank you for the kind words, Renan. To comment on your concerns:

  • For anyone interested in evaluating the projects' quality, you can visit the projects page, browse the tag on LessWrong, and see the published research on the Apart website. I do not wish to misrepresent the judges' opinions, but they've generally been pleasantly surprised by the quality of the work given the hackathons' duration.

  • The academic publication pipeline is, in our view, underrated as a deadline-based, output-focused research incubation process within AI safety. I should write this up as a post at some point, but see e.g. a few of the research outputs that such a process creates beyond a publication and peer review. We also expect to incentivize more forum posts.

donated $5,000
Anton Makiievskyi

over 1 year ago

I heard great things about Esben, the previous hackathon seems to have gone great, and the cost to run more rounds is relatively low. I'm excited to see this application and hope it gets funded fast.

Erin Robertson

over 1 year ago

I hosted 3 Apart Research hackathons at the London EA Hub, and the events were highly successful! One team's work from a hackathon was cited on the OpenAI blog. We found that the hackathons were a perfect way of running technical safety content without needing to put in loads of effort on mentoring and guidance. The Apart set-up was great and made it really easy to cater to our attendees.