I have been accepted into AI Conclave's full fellowship, a one-month summit, but require funding to cover room and board. AI Conclave is a month-long, on-site campus program for deeply understanding and shaping AI.
AI Conclave: https://aiconclave.io/
At the event, I plan to:
Discuss many ideas with other participants
Brainstorm potential risks, trajectories, and solutions
Develop a better understanding of many different aspects of the field
Potentially find collaborators for future projects
Strategy is a synthesis of ideas and finding the right people to work with to create useful projects and plans. I doubt I can do much to synthesize a strategy org without this level of in-person collaboration.
The fee covers room and board at the event, along with full access to all parts of the event and its programs.
Full Fellowship (full program, 4-week participation): 7,000 USD
Visiting Fellowship (2-week participation): 4,000 USD
Visiting Fellowship (1-week participation): 2,500 USD
What are the most likely causes and outcomes if this project fails? (premortem)
Not getting funding in time.
Currently none. The funding would have to be issued quickly in order to allow full preparation before the event begins.
Organizational: I have founded several tech startups, led several teams (including as a project manager), been a founding member of several other companies, and completed most of a Master of Business Administration.
Mechanism Design: I have experience in mechanism design and consensus engineering, including my work at MOAT (creating the first decentralized energy token for the BSV network), Algoracle (working on the white paper for the first oracle network on Algorand), designing a form of decentralized voting for companies, and helping incentivize philanthropy at Project Kelvin. I was also a Senior Cybersecurity Analyst, auditing blockchain contracts for security vulnerabilities.
AI Safety: I took the AI Safety Fundamentals courses (both Technical and Governance) in 2021. While staying at the Centre for Enabling EA Learning & Research (CEEALAR), I worked on building a simulation for finding cooperation between governments on AI safety. I received a grant from the Centre for Effective Altruism and Effective Ventures to further my self-study of alignment research. I attended SERI MATS in the fall, under John Wentworth's online program. I have also read extensively on the topic and contributed to various discussions and blog posts, one of which won a Superlinear prize.
Other: As an undergraduate, I TA'd and helped design the curriculum for the first university blockchain class, and I have mentored and offered consultation to newcomers to the field.
I also manage the AI Safety Strategy Discord: https://discord.gg/e8mAzRBA6y
The basics of my research: https://docs.google.com/document/d/1zaHcw7i7ZxP31M73mm6r6tOg2BtKtztND51KRTMLeGo/edit?usp=sharing
https://www.lesswrong.com/posts/2SCSpN7BRoGhhwsjg/using-consensus-mechanisms-as-an-approach-to-alignment (won a Superlinear prize)