
Stampy’s AI Safety Info

Not funded · Grant
$0 raised

Project summary

aisafety.info is a website started by Rob Miles that hosts hundreds of answers to questions about existential risk from artificial intelligence. These answers contain introductions to basic AI safety arguments and concepts, summaries of technical content, and responses to objections.


The human-written content in the web interface covers the most common questions. To answer the “long tail” of uncommon questions, we’re also building an automated distiller. This chatbot searches a database of alignment literature and summarizes the results with citations to the sources. We’ve been making progress on improving the quality of the dataset and minimizing hallucinations.
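
As a rough illustration of this retrieval-and-citation pattern, here is a minimal sketch in Python. The embedding function, corpus, and prompt below are toy placeholders rather than the actual system; constraining the model to retrieved sources and requiring citations is one common way to reduce hallucinations.

```python
# Minimal sketch of retrieval-plus-summarization with citations.
# Everything here is illustrative: the embedding, corpus, and prompt
# are placeholders, not the project's actual code or data.
import hashlib

import numpy as np

def embed(text: str) -> np.ndarray:
    """Toy deterministic embedding; a real system would call an embedding model."""
    seed = int(hashlib.sha256(text.encode()).hexdigest()[:8], 16)
    v = np.random.default_rng(seed).standard_normal(64)
    return v / np.linalg.norm(v)

# Toy "database" of alignment-literature chunks with source metadata.
CORPUS = [
    {"source": "Source A", "text": "Outer alignment is the problem of specifying the right objective."},
    {"source": "Source B", "text": "Inner alignment is the problem of learned optimizers pursuing proxy goals."},
]
CORPUS_VECS = np.stack([embed(doc["text"]) for doc in CORPUS])

def retrieve(question: str, k: int = 2) -> list[dict]:
    """Return the k chunks most similar to the question by cosine similarity."""
    scores = CORPUS_VECS @ embed(question)  # unit vectors, so dot product = cosine
    return [CORPUS[i] for i in np.argsort(scores)[::-1][:k]]

def build_prompt(question: str, chunks: list[dict]) -> str:
    """Constrain the model to the retrieved chunks and require citations."""
    context = "\n".join(f"[{c['source']}] {c['text']}" for c in chunks)
    return (
        "Answer using only the sources below, citing them in brackets. "
        "If the sources are insufficient, say you don't know.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

question = "What is inner alignment?"
print(build_prompt(question, retrieve(question)))
```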

What are this project's goals and how will you achieve them?

We aim to provide a reliable and accessible source of information about existential risk from AI. This resource caters to all audiences, whether they are new to the topic, looking to explore in more depth, seeking answers to their objections, or hoping to get involved with research or other projects. We think improving people’s understanding of AI risk helps improve the odds of a good outcome in a way that is relatively robust to different assumptions about the strategic landscape.

To create and maintain this resource, we’re hoping to:

  • Keep expanding and improving our written content. A team of distillation fellows and volunteers has been editing the approximately three hundred answers on the site, and hundreds more are in progress. We intend to increase contact with experts to help ensure the site reflects humanity’s best current understanding of AI safety.

  • Refine our prototype chatbot and make it a major component of the user interface.

  • Redesign and improve the front end, using A/B testing to figure out how to present information in ways that make it easier to take in and share.

  • Develop an API that lets external websites embed our search function (a hypothetical call is sketched after this list). The Campaign for AI Safety and other groups have reached out to us about strategic partnerships; these projects, as well as the upcoming rebuilt version of aisafety.com, would use our content in their social media campaigns.
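
As a rough illustration of the embedding use case in the last item, here is a hypothetical sketch of how an external site’s backend might call such a search API. The host, path, parameters, and response shape are all assumptions for illustration; the actual API is still being developed and may differ.

```python
# Hypothetical sketch of an external site querying an embeddable search
# endpoint. The host, parameters, and response shape are assumptions,
# not a documented aisafety.info API.
import json
import urllib.parse
import urllib.request

def search_answers(query: str, limit: int = 3) -> list[dict]:
    """Fetch the top matching answers for a query from a search API."""
    params = urllib.parse.urlencode({"q": query, "limit": limit})
    url = f"https://stampy.example/search?{params}"  # placeholder host
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)  # assumed: JSON list of {"title": ..., "url": ...}

for hit in search_answers("why can't we just turn the AI off?"):
    print(f"{hit['title']}: {hit['url']}")
```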

How will this funding be used?

We will use it to fund further distillation fellowships, similar to the current and previous ones, to continue improving the content. With larger amounts of funding, we may also hire software developers to keep working on the code for the interface and chatbot, and a CEO to direct them.

Who is on your team and what's your track record on similar projects?

Rob Miles is a leading AI safety communicator and will be our quality control manager. Steven Kaas has been leading our team of distillation fellows and developing systems to ensure the content is high-quality and engaging. Our distillation fellows, a remote group of five to ten people with a wide range of backgrounds, have been working on content for the past several months. plex has over 10 years of community management experience and has been the glue holding the project together for the last two and a half years. Chris Canal has entrepreneurship, software management, and ML experience; he is ready to lead the dev team and iterate the website into a much better state using user research and analytics. We also have several excellent developers who have been contributing in their free time.


Our work has happened on Rob Miles’s Discord, on GitHub, and in various Google Docs. For an indication of what future work will look like, you can look at those places, at the site itself, or at our roundups of newly posted content.

What are the most likely causes and outcomes if this project fails? (premortem)

It’s possible that the project won’t get enough traction to justify the cost, e.g. if the quality never becomes high enough to warrant a full launch. (That said, we believe there has been visible, steady progress so far.) It could “fail” in a different sense if presenting views that are common in the AI safety community causes people to make bad decisions. This could happen because those views turn out to be wrong or badly conceptualized, or because they influence people in unintended ways, e.g. by convincing them that AI has great power without also convincing them that it’s hard to get alignment right.

What other funding are you or your project getting?

We have received around $46k from SHfHS and $54k from LTFF, both for running content writing fellowships. We have been offered a $75k speculation grant from Lightspeed Grants for an additional fellowship, and we made a larger application to them for the dev team, which has not been accepted. We have also recently applied to Open Philanthropy.

Rachel Weinberg

about 1 year ago

This looks pretty cool! How much engagement has aisafety.info gotten? I'd be curious about concrete analytics, like total unique visitors, average time spent, etc. And more qualitatively, what types of users does this attract and how does it help them? Like, is the theory of change similar to Rob Miles' channel, just an alternative interface/entry point, or is it useful to a different class of people or in a different way?

Relatedly, how has it been promoted, and how do you plan to promote it?

Steven Kaas

about 1 year ago

We've had 15k unique visitors in the past 30 days, but I think that number doesn't say much about the longer term. We've recently been focusing on reworking the front page and improving the content, without trying to draw attention to the site beyond some link posts like this one. But we're now starting to promote it more, with a "soft launch" announcement (not counted in the 15k) on LessWrong and the EA Forum, to be followed by a "hard launch" video aimed at Rob's large YouTube audience.

The site will also be visible through our collaborations with aisafety.com, the Campaign for AI Safety, and two others to come, as well as a probable mention in an upcoming Vox article.

We have an article on our goals and an article on our theory of change. I don't know enough about the strategy behind Rob's YouTube channel to compare it to aisafety.info, but it seems like the latter might have advantages in e.g. more easily creating content on a wide variety of topics, adding a lot of links to the most relevant resources, and catering to people who prefer text to video.

Kabir Kumar

about 1 year ago

I think this could be really useful and the folks at Stampy seem to be doing a lot of good work.