Manifund
Robert Kralisch

@Robert-Kralisch

Organizer at AI Safety Camp and independent alignment researcher

aisafety.camp
$0 total balance
$0 charity balance
$0 cash balance

$0 in pending offers

About Me

I'm a research coordinator and part of the team running AI Safety Camp. I'm interested in field building in the AI Safety space, including getting more competent organizers into the field and supporting outreach work to inform the public about AI risks.
My research interests focus both on Simulator Theory of LLMs, as per janus, and on how to design more inherently interpretable yet flexible neurosymbolic cognitive architectures, backed by strong Agent Foundations.
My background is in Cognitive Science, including computational mathematics, neural networks, and philosophy of mind, as well as a thorough engagement with the MIRI perspective on aligning superintelligent systems.

Comments

Safe AI Germany (SAIGE)
Robert Kralisch

9 days ago

Fully agree with Jonathan.
Writing as an organizer of AI Safety Camp, and having talked with Jessica, I consider this the most promising field-building project for AI Safety in Germany that I have seen to date.

I absolutely buy their assessment of the untapped STEM talent pool that Germany has to offer to technical AI Safety work.
Since German culture disincentivizes risk-taking in one's career, it is all the more important to have a strong, central organization capable of connecting national talent into a community and offering them a clear view of career prospects in the field.

SAIGE also strikes me as an excellent platform for outreach on AI Safety topics to the general public in Germany.

After looking through SAIGE's plans and their theory of change, talking to Jessica, and reading the other comments here, I strongly recommend this project for funding.