This is actually a really cool idea which might help people form estimates and convince more people to think about these risks. One worry I always have with projects like this is maintenance: how much continual updating would a project like this require?
Interactive p(doom) constructor
Longer description of your proposed project
Interactive consensus is exciting to me because I want a knowledge base of forecasts and threat models, but wikis don't give the end user enough freedom. I've idly thought about a more general version of this for a while (https://github.com/quinn-dougherty/epistemics-webdev-projects/issues/3), but now the Squiggle Hub GraphQL endpoint is mature enough to support a specifically p(doom)-flavored version with quite low effort. I will describe two versions: an MVP that technically provides the functionality, and a polished one that could in principle have a shot at going viral on Twitter (epistemic status: not on Twitter myself). Both versions depend heavily on Squiggle Hub, though not all users of this product need to be able to write Squiggle code.
- MVP: You select from a list of external sources (e.g. from https://squigglehub.org/models/berekuk/p-doom-roundup) and assign weights to each source. So if you think Hendrycks is twice as trustworthy as Christiano and you don't trust anyone else at all, you'd compute 2/3 * 0.8 + 1/3 * 0.5 to get a p(doom) of 0.7. Then, when you and your friends post your p(doom) estimates on Squiggle Hub, you can import those as well and put them in the mix. Prefilled imports are tagged by year when possible, and the app can either enforce that you don't mix estimates from different years or mix them in a principled way (e.g. with Laplace's rule of succession before the sum); there is also an "eventually" option that omits the year. You can also supply a free-form p(doom) inline, without writing it on Squiggle Hub and then importing it, but this won't work as well with composition and averaging.
- Polished: Everything looks and feels nicer. Support for saving and sharing models. Support for inferring a CDF of doom over time for any user who supplies aggregates for multiple years.
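The MVP's weighted mix can be sketched in a few lines. This is a minimal illustration, not the app's actual code: the function name is hypothetical, and the numbers are the Hendrycks/Christiano example from the MVP description above.

```python
def aggregate_pdoom(estimates: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of p(doom) point estimates; weights are normalized."""
    total = sum(weights.values())
    return sum(weights[name] * estimates[name] for name in weights) / total

# Hendrycks weighted twice as heavily as Christiano, everyone else at zero:
p = aggregate_pdoom(
    {"Hendrycks": 0.8, "Christiano": 0.5},
    {"Hendrycks": 2.0, "Christiano": 1.0},
)
print(round(p, 3))  # → 0.7, i.e. 2/3 * 0.8 + 1/3 * 0.5
```

Because weights are normalized inside the function, users can assign raw trust scores ("twice as trustworthy") without having to make them sum to one.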
Aggregating consensus while remaining agnostic about whether a quantity is a point estimate or a distribution opens up a richer epistemic environment. Another upside of each estimate being an export of a Squiggle file is that we get arbitrary model intricacy for free.
Describe why you think you're qualified to work on this
I served on the Squiggle team and have worked on several other software projects and companies. For a reference, you can reach out to Ozzie Gooen (ozzie@quantifieduncertainty.org), founder of Guesstimate and Squiggle.
Other ways I can learn about you
https://lesswrong.com/users/quinn-dougherty (I was the "stupid t-shirt" kidney guy at Manifest)
How much money do you need?
$5-12k. Where these numbers come from: half time (0.5 FTE) for one month at my day-job rate is about $7k. I'm treating $5k as two weeks FTE, or four weeks at 0.5 FTE, because I tend to give EAs a discount.
Links to any supporting documents or information
I made this template when I was at QURI (https://github.com/quantified-uncertainty/next-app-with-squiggle), and it'll be my starting point.
Estimate your probability of succeeding if you get the amount of money you asked for
$5k: 95% sure MVP gets done and 50% sure polished gets done
$12k: probability the polished version gets done goes up to 90% (takes twice as long overall)
$16k: I'll use the extra slack to spend more time on epistemic public goods and webdev projects.
Quinn Dougherty
9 months ago
In terms of how to operationalize/decompose p(doom), I'd like to base it on the AI views widget by Tetraspace and Rob Bensinger: https://ai-views-snapshots.tetratopia.foundation/