Grant · Not funded
$0 raised

Project summary

Horizon Events is a non-profit dedicated to advancing AI safety research and development through events and related efforts.

This proposal seeks funding for the project in 2025. It covers the AI Safety Unconference, the Guaranteed Safe AI Seminars, AI safety events in Montréal, the weekly AI Safety Events and Training newsletter, specific collaborations, and potentially more.

What are this project's goals? How will you achieve them?

Technical events

  • Guaranteed Safe AI Seminars 

    • 12 seminars by field experts with community Q&A (2024 speakers included Yoshua Bengio, Steve Omohundro, Rafael Kaufmann, Miyazono, and others)

  • AI Safety Unconference 2025 (successor to VAISU)

    • 3 days, online, with a custom event app enabling session creation and review, poster sessions, schedule voting, matchmaking, chat, awards, and collective-intelligence tooling. Created collaboratively with the community. Focused on x-risk/catastrophic risk, but open to all AI safety work.

    • Target: >500 relevant registered participants

  • Montréal AI Safety R&D Events: 2-hour events with a technical talk followed by mingling.

    • These events will complement the Mila and CAISI ecosystem, which has a growing AI safety community.

Knowledge distribution, community support

  • AI Safety Events & Training newsletter: weekly newsletter on upcoming events and training opportunities, sometimes with notes from past events. As of November 2024, it has >950 subscribers. We have been producing it in partnership with alignment.dev since Q1 2024.

  • Collaboration with partners (potentially AIGS, Apart Research, Mila/CAISI, etc.) and support for events-related projects in AI risk reduction, including those with low budgets

How will this funding be used?

The funding will employ the core team and provide tooling for pursuing the projects above.

On the lower end ($3K plus founder self-funding), we will maintain the established projects: AI Safety Unconference 2025, 12 Guaranteed Safe AI Seminars, and the AI Safety Events and Training newsletter. This can be achieved by the founder working 1.5 days/week, with the grant providing partial support.

On the higher end (>$35K), all of the above projects will be covered, with more team members enabled to work on them.

Who is on your team? What's your track record on similar projects?

The team is led by founder Orpheus Lummis. In 2024, it also included Diego Jiménez (AI strategist and events ops), Arjun Yadav (generalist and events), Linda Linsefors (advisor, events and AI safety), Pascal Huynh (event and interaction design), Nicolas Grenier (advisor, worlding), and Manuela García (graphic design).

This grant proposal focuses on enabling Orpheus Lummis to work on these projects, and may enable further team members (existing or potential) to be active.

Organizational track record

  • AI Safety Unconference at NeurIPS (2018, 2019, 2022): an event series that gathered many prominent AI safety researchers

  • Virtual AI Safety Unconference 2024 (VAISU): 400 registered participants and 24 talks published; in the post-event survey, 95% of respondents reported net positive value, and we estimated 30-100 new professional connections formed (see report)

  • Guaranteed Safe AI Seminars featuring prominent speakers such as Yoshua Bengio and Steve Omohundro, among others (~230 subscribers)

  • Growth of the AI Safety Events and Training newsletter to >950 subscribers and ~27K post views

  • AI safety reading group at Mila (2018) with David Krueger, among others

Some testimonials from previous events:

  • Stuart Armstrong at AISU NeurIPS 2018 – "A great way to meet the best people in the area and propel daring ideas forward."

  • Adam Gleave at AISU NeurIPS 2018 – "The event was a great place to meet others with shared research interests. I particularly enjoyed the small discussion groups that exposed me to new perspectives."

  • Haydn Belfield at AISU NeurIPS 2022 – "This was a fascinating event that was helpful for keeping up with the cutting edge of the field, and for launching collaborations."

  • Aaron Tucker at AISU NeurIPS 2022 – "The 2018 event was extremely helpful for meeting people working on AI Safety, and played a large role in my decision to go to graduate school to work on AI safety."

What are the most likely causes and outcomes if this project fails?

The project may fail because of:

  • Lack of funding

  • A single point of failure given the small team (e.g., burnout or accident risk)

  • Poor event participation/engagement

Potential outcomes if the project fails:

  • Disruption to established event series

  • Reconfiguration or loss of knowledge-sharing platform (950+ newsletter subscribers) 

  • Reduced AI safety event capacity in Montréal (counterfactual impact on Mila/CAISI ecosystem)

  • Gap in technical AI safety community building

What other funding are you or your project getting?

We have no funding secured for 2025.

In 2024, most work was self-funded. We also gratefully received funding from the LTFF for six months of GS AI seminars and for VAISU. Lastly, VAISU participants collectively donated hundreds of dollars to the project.

Notes

We would like to give all funders the option to be thanked publicly on the Acknowledgement page of our website.

We are happy to discuss. Reply on Manifund, or reach out via email at team@horizonevents.info.

Horizon Events is incorporated as a tax-exempt non-profit in Canada (#15845360). We also accept donations via credit card, wire transfer, etc.

Manuela García

Horizon Events has been an excellent catalyst for strengthening the community, knowledge transfer, and networking within AI Safety. I have personally benefited from meeting people with whom I currently work in this field, and I've applied to some opportunities because I learned about them through the newsletter.


Chris Leong

about 1 month ago

I'll vouch for the quality of the AI Safety Events & Training newsletter.

I guess the main point I'd like clarity on is their plan for increasing distribution of this newsletter.


Orpheus Lummis

about 1 month ago

@casebash

Thanks for your comment and question.

Our strategy to increase distribution of the newsletter:
- Increase the quality of the newsletter such that the organic growth rate increases
- Contact members of safety teams at major AI labs and academic centers (supernodes) to share it on their internal communication channels
- Share it on the AI Alignment Slack (#events), and potentially other popular events-related channels
- Nudge maintainers of onboarding resources for new AI safety researchers to include it as a recommended resource

This is tentative and open to feedback. We've grown to ~1K subscribers so far through organic growth, but the resource would likely be beneficial to at least 2-3x more people. We will attempt proactive growth in 2025.


Steve Omohundro

about 1 month ago

Horizon Events has put on several important AI Safety events that I've been involved with which brought people together and led to an exchange of ideas.


Rafael Kaufmann

about 1 month ago

Horizon serves a critical knowledge dissemination function for the AI safety community and I appreciate the high bar set by Orpheus and team.


Aditya Arpitha Prasad

about 1 month ago

I know Orpheus and am excited for these projects, since he already has a proven track record. I have also personally benefited from his projects, such as AI Safety Events – https://www.aisafety.com/events-and-training