Lawrence Wagner

@ltwagner28

Lawrence Wagner brings over ten years of experience in project management, cybersecurity, and entrepreneurship, including designing and running training programs. He previously served as a Research Manager with the ML Alignment & Theory Scholars (MATS) program, where he coordinated AI safety research projects and supported scholars and mentors through the research lifecycle. He has also conducted research at UC Berkeley focused on AI governance, risk management, and the intersection of technical systems with policy and cybersecurity considerations.

https://www.linkedin.com/in/lawrence-wagner/
$0 total balance
$0 charity balance
$0 cash balance

$0 in pending offers

Projects

Systematic Probing of Exploit Chains and Governance in Multi-Agent Tool

pending admin approval

Reducing Risk in AI Safety Through Expanding Capacity.

Comments

Reducing Risk in AI Safety Through Expanding Capacity.

Lawrence Wagner

6 days ago

Final report

Description of subprojects and results, including major changes from the original proposal

We’re getting close to launching the first BASE Fellowship cohort, and this process has been both intense and incredibly meaningful for us.

We received 1,127 applications from people around the world who want to work on AI safety. Reading through them has been a reminder of how much talent and motivation exist, and how many people are actively looking for a path into this field.

From this pool, we selected 345 finalists. We’re now working with 15 mentors who have stepped up to support the fellows, and we’re moving into the final stage of the selection process. On March 29, we’ll narrow the pool further and send coding and written assessments to those advancing. From there, we’ll finalize the cohort and begin the fellowship in early April 2026.

One thing that stood out to us is just how strong the applicant pool has been. The level of interest and quality exceeded what we expected, which has made the selection process more rigorous, but also reinforced how important it is to build pathways like this into AI safety.

The overall structure of the program remains the same as originally proposed. The main change has been the amount of time and care required to evaluate candidates well at this scale.

Spending breakdown

  • $2,025 — Tools and infrastructure for running the selection process (coding assessments, Airtable, Zoom, Gmail, etc.)

  • $9,000 — Executive Director cost of living support during the build and selection phase

Reducing Risk in AI Safety Through Expanding Capacity.
Lawrence Wagner

3 months ago

Progress update

What progress have you made since your last update?

We received over 1,100 applications during our fellowship application window (December 15 to January 9). On January 11, we started an online AI safety and alignment training track using the ARENA curriculum for our Slack community, with 40 participants. We are also seeing strong early interest in our Research Arm, with 103 people signed up for our January 12 info session.

What are your next steps?

We are reviewing applications now and will begin shortlisting candidates this week. In parallel, we are launching the Research Arm and onboarding the first set of projects and mentors.

Is there anything others could help you with?

Yes. We need funding, additional mentors, and advisors, especially people who can supervise projects in AI safety, AI security, or AI governance.

Reducing Risk in AI Safety Through Expanding Capacity.

Lawrence Wagner

3 months ago

Since launching our fellowship application on LinkedIn on Dec 16, the post has reached 100K+ views with 271 reposts, and we’ve received 150+ applications so far. We also now have 10 mentors confirmed for the Spring cohort.

Transactions

For | Date | Type | Amount
Manifund Bank | 20 days ago | withdraw | $25
Reducing Risk in AI Safety Through Expanding Capacity. | 28 days ago | project donation | +$25
Manifund Bank | about 2 months ago | withdraw | $1,000
Reducing Risk in AI Safety Through Expanding Capacity. | 2 months ago | project donation | +$1,000
Manifund Bank | 3 months ago | withdraw | $10,000
Reducing Risk in AI Safety Through Expanding Capacity. | 3 months ago | project donation | +$10,000