@NeelNanda thanks for weighing in! Manifund doesn't have a UK entity set up, unfortunately. One thing that might be possible would be to figure out a donation swap where eg you commit to donating $10k via Givewell UK and some US-based person who was planning on giving to Givewell instead donates $10k to this project, and you both take tax deductions for your respective countries.
I work on Manifold & Manifund! https://manifold.markets/Austin
@Brent It's not clear to me what successful examples are... which have been impactful for you? I think foreign language and MCATs are two domains where SRS has proven its worth, but outside of those memorization-heavy domains, the flashcard approach hasn't become popular. It's also damning that most successful people don't rely on SRS, afaict.
I think there's definitely something interesting about the core observation of SRS - "learning happens via repeated exposures to the subject, and we can program that to our benefit." But it also seems to me that "flashcards" are a dead end, UX-wise, given all the research that has gone into them for relatively little adoption. I think there's a lot of space for innovating on other interaction models -- eg in what ways are social feeds like Twitter a spaced repetition system? Or Gmail?
One other random note - for a while, I've wanted an SRS/Anki thing that helps me stay on top of my various personal contacts (friends, acquaintances, etc). "Making friends" is a domain which lines up neatly with exponential backoff, I think -- it's easiest to make a friend by spending a lot of time with them in the beginning, and then staying in touch less and less frequently over time.
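To make the exponential-backoff idea concrete, here's a toy sketch of what such a contact scheduler might look like (the base interval, multiplier, and horizon are all made-up illustrative parameters, not anything an existing SRS tool uses):

```python
# Toy sketch: suggest check-in dates for a new friend, spaced further
# apart each time (classic exponential backoff). All parameters are
# illustrative assumptions, not recommendations.
from datetime import date, timedelta

def contact_schedule(first_meeting: date, base_days: int = 3,
                     multiplier: float = 2.0, horizon_days: int = 365):
    """Yield suggested check-in dates within the horizon."""
    gap = base_days
    next_contact = first_meeting
    while (next_contact - first_meeting).days <= horizon_days:
        yield next_contact
        next_contact = next_contact + timedelta(days=round(gap))
        gap *= multiplier

dates = list(contact_schedule(date(2023, 1, 1)))
# Gaps grow 3, 6, 12, 24, ... days: lots of contact early, tapering off.
```

A real tool would presumably reset the gap when you actually meet up, the same way Anki resets an interval when you fail a card.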
This grant falls outside of our more established pathways, but I'm excited to approve it anyways, as a small bet on a people-first funding approach (where I think the regranting mechanism shines).
I'm a bit baseline skeptical of SRS/Anki, having seen tools-for-thought people push for it fairly unsuccessfully -- eg I was very excited for Quantum Country but it doesn't seem to have gotten wider adoption, nor personally helped me very much. However, I would be excited to be wrong here, and it's possible that LLMs change the game enough for there to be a good angle of attack!
Approving this project as in line with our mission of advancing technical AI safety.
Thanks to Vincent for getting this project past its initial funding bar!
Approving this project, as Lawrence's work falls squarely within Manifund's cause of advancing technical AI safety!
Also investing a small amount as a show of support for Conflux, though I'd definitely love to see more details eventually :P
To clarify, the Manifold Community Fund payout criteria will be for impact realized between checkpoints, so exclusive of past "impact". The first payout will assess impact from Nov 15-Dec 15 -- so eg previous views of MMP would be excluded, but if an old MMP episode went viral on Dec 1st, then that would count for impact.
Funding this to the minimum ask, mostly because 1) the ask is small, 2) I highly trust two of the people involved (Joel and Vivian), and 3) I want to encourage the existence of Qally's, as I could imagine Manifund itself being a client looking to buy retrospective analyses.
I'm actually not sure how big of an issue Long Covid is -- my uninformed take is "not a big problem". But this mostly stems from my emotional reaction against covid safetyism, and isn't very grounded in factual analysis, so I'm excited to see what the research shows!
Hi Chris! Thanks for posting this funding application. I generally am a fan of the concept of retroactive funding for impactful work (more so than almost anyone I know). However, TAIS isn't my area of specialty, and from where I'm standing it's hard for me to tell whether this specific essay might be worth eg $100 or $1000 or $10000. The strongest signals I see are the 1) relatively high karma counts and 2) engagement by @josephbloom on the article.
I'm putting down $100 of my budget towards this for now, and would be open to more if someone provides medium-to-strong evidence for why I should do so.
I'm fairly sure that Scott would be happy to allow you to hold on to your current shares, with the caveat that if you don't accept this current offer, he may not make any other assessment or offer in the future.
Hi Adrian! Thanks for submitting this proposal. I'm not actually sure why people are downvoting you -- I do think this kind of project idea is pretty cool, and I would love to see & fund examples of "good rationalist ideas actually making it into production".
That said, in startups, the mantra is "ideas are cheap, execution is everything". To that end, I'd be unsure as a funder if you'd be able to spin up a business around this. A few things:
It seems like you haven't built a lumenator before? I'd suggest trying this just as a proof point of "yes I actually can make & enjoy making hardware"
Validate demand for lumenators! Just because a bunch of EA people have said nice things about them doesn't mean that they would actually buy them; or that the audience extends beyond EA. Before committing to this, see if you can eg pre-sell 10 lumenators to people willing to put down $100 today for a $200 discount on delivery.
The "Tesla Roadster" strategy could make sense here -- even if your goal is to get them <$500 for mass market, to start with you might sell bespoke custom lumenators at $2k to the rich rationalist folks first.
Stop worrying about legal issues; 99.9% of the time, this project fails because you can't build lumenators cheaply enough or you fail to find demand.
If you haven't run any kind of side project before, I might start with software -- much cheaper to try and release things, so you learn about the other sides of entrepreneurship (marketing, selling, customer support, biz processes) much faster
Find a cofounder? I'm less sure about this one, but it's standard YC advice, and in my experience projects done in teams have a way of going much farther than projects done solo.
If you actually succeed on 1 & 2, that would be a major update for me towards wanting to invest in your company -- I'd probably invest $10k, at least. Some resources for you:
Approving this! Nuno called this out as one of the projects he was most excited to fund in his regrantor app, and I'm happy to see him commit to this.
I've updated Jonas's comment above. Evan is also retracting his support for this grant, so we will be unwinding his $50k donation and restoring this project to be in the pending state.
(for context: Jonas posted his reservations independent of my grant approval, and within the same minute)
In light of Jonas's post and the fact that this grant doesn't seem to be especially urgent, I'm going to officially put a pause on processing this grant for now as we decide how to proceed. I hope to have a resolution to this before the end of next week.
Some thoughts here:
We would like to have a good mechanism for surfacing concerns with grants, and want to avoid eg adverse selection or the unilateralist's curse where possible
At the same time, we want to make sure our regrantors are empowered to make funding decisions that may seem unpopular or even negative to others, and don't want to overly slow down grant processing time.
We also want to balance our commitment to transparency with allowing people to surface concerns in a way that feels safe, and also in a way that doesn't punish the applicant for applying or somebody who has reservations for sharing those.
We'll be musing on these tradeoffs and hopefully have clearer thoughts on these soon.
Approving this project! It's nice to see a handful of small donations coming in from the EA public, as well as Evan's endorsement; thanks for all your contributions~
Hi Nigel, appreciate you submitting your proposal to Manifund! I think wildfire detection is somewhat outside the scope of projects that our regrantors are interested in, and as such you're unlikely to hit your minimum funding bar here. (A precise statement of our interests is tricky, but the Future Fund's Areas of Interest is a good starting point). Best of luck with your fundraise!
Approving this project as it fits our criteria of "charitable and few downsides". I think publishing a forecast on the effects of an AI treaty could be very helpful. I am more skeptical of "running an open letter urging governments to coordinate to make an AI safety treaty" -- I'd highly encourage working with other players in the AI governance space, as otherwise I expect the impact of an open letter to be ~nil. (Maybe that was already the plan, in which case, great!)
@JordanSchneider Hi Jordan! Good to know about GiveDirectly's ads -- I think that might be a good form factor for Manifund too, as we're currently looking to fundraise. Would love to see the pitch deck (email email@example.com)!
I'm also interested in contributing $5k-$10k of my own regrantor budget; my tentative proposal is that we could send half of our total funding as an unrestricted grant, and the other half as a purchase of advertising time.
Hi Damaris, my best guess is that your application isn't a good fit for Manifund; it's very unclear to me how big data analytics skills are useful for eg AI Safety, or why this skills gap is important to address. Best of luck!
Hi Eden! My best guess is that your project is not a great fit for the Manifund platform; it's very unclear why we should provide charitable funding for your team to acquire a patent (and the requirement for an NDA doesn't help). If you're interested in making your application stronger, I would suggest that you drop your focus on acquiring a patent, move directly to creating your prototype, and come back when you have a prototype to demo. (That isn't to say I can promise the prototype would receive funding, but in any case it would be much more compelling -- see Neuronpedia for an example grant that shipped a prototype before applying.)
Approving this grant! The Residency looks like an interesting project; this grant falls within our charitable mission of supporting overlooked opportunities, while not having any notable downside risks.
Hi Lucy! Approving this grant as it fits within our charitable mission and doesn't seem likely to cause any negative effects.
It does look like you have a lot more room for funding; I'm not sure if any of our AI-safety focused regrantors have yet taken the time to evaluate your grant, but if you have a specific regrantor in mind, let me know and I will try to flag them!
Approving this! Best of luck with your research~
Hi Ben, appreciate the application and I'm personally interested in the XST approach here. I have a deep question about whether "founder in residence" as a strategy works at all. I have met a few such "FIR" individuals (usually attached to VC firms), but I'm not aware of any breakout startups in tech that have been incubated this way; they always seem to have been founder-initiated. Some more evidence is that the YC batch where founders applied without ideas seemed to go badly. From Sam Altman:
YC once tried an experiment of funding seemingly good founders with no ideas. I think every company in this no-idea track failed. It turns out that good founders have lots of ideas about everything, so if you want to be a founder and can’t get an idea for a company, you should probably work on getting good at idea generation first.
Of course it's plausible that longtermist startups thrive on different models of incubation than tech ones. Charity Entrepreneurship seems to do fine by finding the individuals first and then giving them ideas to work with?
Also, do you have examples of individuals you'd be excited to bring on for the FIR role? (Ideally people who would actually accept if you made them the offer today, but failing that, examples of good candidates would be helpful!)
Hi Keith! As a heads up, I don't think your project looks like a good fit for any of the regrantors on our platform (we are primarily interested in AI safety or other longtermist causes), so I think it's fairly unlikely you'll receive funding at this time. Cheers~
Hi Jordan, thanks for posting this application. I'm impressed with the traction ChinaTalk has garnered to date, and think better US-China media could be quite valuable. It seems like Joel has much more context on this proposal and I'm happy to defer to his assessments.
I wanted to chime in with a slightly weird proposal: instead of a grant, could we structure this as a sponsorship or purchase of some kind? Eg:
We could purchase ad slots, either to promote relevant EA ideas & opportunities, or to fundraise for Manifund itself
We could buy a large fixed lot of Substack subscriptions to gift to others
There's some precedent for this kind of funder-grantee interaction -- I believe CEA funded TypeIII Audio by buying up a certain amount of audio content generated for the EA Forum and LessWrong.
Hi Alex! You seem like a smart and motivated individual, and I appreciate you taking the time to apply on Manifund. Despite this, I'm not super excited by this specific proposal; here are some key skepticisms to funding this out of my personal regrantor budget:
I'm suspicious of funding more "research into the right thing to do". I would be more excited to directly fund "doing the right thing" -- in this case, directly convincing university admins to fund AI or bio safety efforts.
As a cause area, I view IIDM a bit like crypto (bear with me): many promising ideas, but execution to date has been quite lackluster. Which is also to say, execution seems to be the bottleneck and I'm more excited to see people actually steering institutions well rather than coming up with more ideas on how to do so. As they say in startup-land, "execution is everything".
My guess is that as a university student, your world has mostly consisted of university institutions, leading you to overvalue their impact at large (compared to other orgs like corporations/startups, governments, and nonprofits). I would be much more excited to see proposals from you to do things outside the university orbit.
I would also guess that getting university admins on board will be quite difficult?
Thanks again for your application!
@MSaksena Thanks for the explanation! I understand that nonprofit funders have their hands tied in a variety of ways and appreciate you outlining why it's in Manifund's comparative advantage to fund this as an independent grant.
Someday down the line, I'd love to chat with the Convergent Research team or related funders (like Schmidt Ventures?) about solving the problem of how to "flexibly commit money to adjacent projects". In the meantime, best of luck with your research and thank you for your service!
Approving this! Excited for Manifund's role here in accelerating the concrete research towards mitigating global catastrophic biorisks.
Hi Miti! In general I'm excited for biosecurity work on these topics and excited that Gavriel likes this grant, and expect to approve this. I just wanted to check in on a (maybe dumb) question: given that Convergent Research seems to be both well-funded and also the primary beneficiaries of Miti's work, why aren't they able to fund this themselves?
From CR's website, they don't have a vast pool of funding themselves, and instead seek to incubate FROs that then get follow-on funding. This seems reasonable; I'd be happy to work out other financial arrangements that make sense here such as via a loan or equity.
For example, Ales estimates this work to raise the chance of unlocking funding by 10%+. In that case, assuming a conservative $10m raise for the FRO, that would make Miti's project worth $1m; and assuming a funder's credit portion of 10% for this, that would indicate a $100k value of the grant made. So eg would Ales/CR/the resulting FRO be willing to commit $100k back to Gavriel's regrantor budget, conditional on the FRO successfully raising money?
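For concreteness, that back-of-envelope calculation, where every figure is an assumption for illustration rather than a commitment:

```python
# Back-of-envelope value of the grant, using the assumptions stated above.
p_unlock = 0.10           # Ales's estimated increase in chance of unlocking funding
fro_raise = 10_000_000    # conservative assumed FRO raise, in USD
funders_credit = 0.10     # assumed funder's credit portion

expected_value = p_unlock * fro_raise           # $1,000,000
grant_value = funders_credit * expected_value   # $100,000

print(f"Expected value unlocked: ${expected_value:,.0f}")
print(f"Implied grant value:     ${grant_value:,.0f}")
```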
I apologize if this seems like a money-grubbing ask; I'm coming into this a bit from a "fairness between funders" perspective and a bit of "sanity checking that the work really is as valuable to CR as purported". Manifund just doesn't have that much money at the moment, so being able to extend our capital is important; and also, I'm excited about using good financial mechanisms to make charitable fundraising much much better (ask me about impact certs sometime!).
Approving as this project is within our scope and doesn't seem likely to cause harm. I appreciate Kabir's energy and will be curious to see what the retrospective on the event shows!
I'm not familiar with Alexander or his work, but the votes of confidence from Anton, Quinn, and Greg are heartening.
Approving as the project seems within scope for Manifund (on longtermist research) and not likely to cause harm.
Hi Johnny, thanks for submitting your project! I've decided to fund this project with $2500 of my own regrantor budget to start, as a retroactive grant. The reasons I am excited for this project:
Foremost, Neuronpedia is just a really well-developed website; web apps are one of the areas I'm most confident in my evaluation. Neuronpedia is polished, with delightful animations and a pretty good UX for expressing a complicated idea.
I like that Johnny went ahead and built a fully functional demo before asking about funding. My $2500 is intended to be a retroactive grant, though note this is still much less than the market cost of 3-4 weeks of software engineering at the quality of Neuronpedia, which I'd ballpark at $10k-$20k.
Johnny looks to be a fantastic technologist with a long track record of shipping useful apps; I'd love it if Johnny specifically and others like him worked on software projects with the goal of helping AI go well.
The idea itself is intriguing. I don't have a strong sense of whether the game is fun enough to go viral on its own (my very rough guess is that some onboarding simplifications and virality improvements would be needed), and an even weaker sense of whether this will ultimately be useful for technical AI safety. (I'd love if one of our TAIS regrantors would like to chime in on this front!)
Hi Vincent! Thanks for submitting this; I'm excited about the concept of loans in the EA grantmaking space, and appreciate that your finances are published transparently.
I expect to have a list of follow-up questions soon; in the meantime, you might enjoy speaking with the folks at Give Industries, who employ a similar profit-for-good model!
Process for awarding this grant
As Manifund is a relatively new funder, I’d been thinking through examples of impactful work that we’d like to highlight, and VaccinateCA came to mind. I initially reached out and made the offer to Patrick, upon hearing that he had donated $100k of his own money into the nonprofit. Patrick nominated Karl to receive this grant instead, and introduced us; Karl and I met for a video call in early July.
What’s special about this grant to Karl is that it’s retroactive — a payment for work already done. Typically, funders make grants prospectively to encourage new work in the future. I’m excited about paying out this retroactive grant for a few reasons:
I want to highlight VaccinateCA as an example of an extremely effective project, and tell others that Manifund is interested in funding projects like it. Elements of VaccinateCA that endear me to it, especially in contrast to typical EA projects:
They moved very, very quickly
They operated an object level intervention, instead of doing research or education
They used technology that could scale up to serve millions
But were also happy to manually call up pharmacies, driven by what worked well
Karl was counterfactually responsible for founding VaccinateCA, and dedicated hundreds of hours of his time and energy to the effort, yet received little to no compensation.
I’d like to make retroactive grants more of a norm among charitable funders. It’s much easier to judge “what was successful” compared to “what will succeed”, especially for public goods; a robust ecosystem of retroactive grants could allow for impact certs to thrive.
I offered $10k as it felt large enough to meaningfully recognize the impact of VaccinateCA, while not taking up too much of my regrantor budget. I do think the total impact of this was much larger; possibly valued in the hundreds of millions of dollars to the US government, if you accept the statistical value of a life at $1-10m. (It’s unclear to me how large retroactive grants ought to be to incentivize good work, and I’d welcome further discussion on this point.) I've set the project to make room for up to $20k of total funding for this, in case others would like to donate as well.
Other tidbits from my conversation with Karl
Q: Are you familiar with the EA movement? If so, what are your thoughts?
A: Yeah, I’ve heard a lot about it. To use the lingo, I’ve been “Lesswrong-adjacent for a while”. Taken to extremes, EA can get you to do crazy things — as all philosophies do. But I really like the approach; mosquito nets make sense to me.
I’d observe that a lot of money is out there, looking for productive uses. Probably the constraining factor is productive uses. Maybe you [Manifund] are solving this on a meta level by encouraging productive uses of capital? Austin: we hope so!
Q: What is Karl up to now?
A: I left my last role at Rippling a few months ago, and am now working on my own startup.
It’s still pretty early, and I’m not yet settled on an idea, but I’m thinking of things related to my work on global payrolls at Rippling. I expect more business will be done cross-border, and using instant payments. Today, putting in a wire is very stressful, and this will be true of more and more things.
My idea is to reduce payment errors: money disappearing when payments go to a false account, or an account that is some other random person’s. This will hopefully reduce payments friction, making international business less scary. The goal is to decrease costs, make it easier to hire people, and cut down on fraud.
Thanks to Lily J and Rachel W for feedback on this writeup.
Hi Vikram, thanks for applying for a grant! The projects you're working on (especially LimbX) look super cool. I'm offering $500 for now to get this proposal past its minimum funding bar; some notes as we consider whether to fund it more:
This kind of deep tech is a bit outside of our standard funding hypothesis (which tends to be more longtermist/EA), and also outside my personal area of expertise (software)
I would be excited about Manifund supporting young, talented individuals (similar to Emergent Ventures); but it's possible this represents a dilution in our focus? My grant to Sophia was similar in size/thesis, but in that case I was personally familiar with Sophia.
I'm also just curious: how did you find out about Manifund?
Thanks for posting this application! I've heard almost universal praise for Apollo, with multiple regrantors expressing strong enthusiasm. I think it's overdetermined that we'll end up funding this, and it's a bit of a question of "how much?"
I'm going to play devil's advocate for a bit here, listing reasons I could imagine our regrantors deciding not to fund this to the full ask:
I expect Apollo to have received a lot of funding already and to soon receive further funding from other sources, given widespread enthusiasm and competent fundraising operations. In particular, I would expect Lightspeed/SFF to fund them as well. (@apollo, I'd love to know if you could publicly list at least the total amount raised to date, and any donors who'd be open to being listed; we're big believers in financial transparency at Manifold/Manifund)
The comparative advantage of Manifund regranting (among the wider EA funding ecosystem) might lie in smaller dollar grants, to individuals and newly funded orgs. Perhaps regrantors should aim to be the "first check in" or "pre-seed funding" for many projects?
I don't know if Apollo can productively spend all that money; it can be hard to find good people to hire, harder yet to manage them all well? (Though this is a heuristic from tech startup land, I'm less sure if it's true for research labs).
Funding this because:
I've previously had the opportunity of cohosting an EA hackathon with Sophia following EAG Bay Area; she was conscientious and organized, and I'd happily cohost something again
I'm personally excited about supporting more concrete software development within the EA sphere, on the margin (compared to eg research papers)
The ask is quite low ($500), and the project promises to be both fast (lasting a week) and soon (by Jul 27); I really like the ethos of moving quickly on a small budget.
I don't have specific insights into Solar4Africa, but I'm curious to see the results!
Hi Haven, thanks for submitting your application! I like that you have an extensive track record in the advocacy and policy space and am excited about you bringing that towards making AI go well.
I tentatively think that funding your salary to set up this org would be fairly similar to funding attempts to influence legislation (though I would be happy to hear if anyone thinks this isn't the case, based on what the IRS code states about 501c3s). That doesn't make it a non-starter for us to fund, but we would scrutinize this grant a lot more, especially as we'd have a ~$250k cap across all legislative activities given our ~$2m budget (see https://ballotpedia.org/501(c)(3))
Where do you see this new org sitting in the space of existing AI Gov orgs? Why do you prefer starting a new org over joining an existing one, or working independently without establishing an org at all?
Have you spoken with Holly Elmore? Given the overlap in your proposals, a conversation (or collaboration?) could be quite fruitful.
Hi Jeffrey! I do think EA suffers from a lack of inspiring art and good artists, and appreciate that you are trying to fix this. Do you happen to have any photos or links to the pieces that you intend to put on display?
Hi Bruce! I'm a fan of software projects and modeling, and appreciate the modest funding ask. I'm not going to be funding this at this time, but hope you continue to make progress and would love to see what your demo/video looks like when it's ready!
One note on your application, it does use a lot of jargon which makes it harder to understand what you're going to do, reminding me of this passage from Scott:
Another person’s application sounded like a Dilbert gag about meaningless corporate babble. “We will leverage synergies to revolutionize the paradigm of communication for justice” - paragraphs and paragraphs of this without the slightest explanation of what they would actually do. Everyone involved had PhDs, and they’d gotten millions of dollars from a government agency, so maybe I’m the one who’s wrong here, but I read it to some friends deadpan, it made them laugh hysterically, and sometimes they still quote it back at me - “are you sure we shouldn’t be leveraging synergies to revolutionize our paradigm first?” - and I laugh hysterically.
I think concrete examples (or the demo/video you mentioned) would help!
Hey Allison, thanks for submitting this! Upvoting because this looks like a thoughtful proposal and I'm interested in hearing about how the August workshop goes.
I would guess that a $75k minimum funding goal is higher than our regrantors would go for, given that most of our large-dollar regrantors are primarily focused on AI Safety, but I'm curious to hear what our bio or policy regrantors have to say about this kind of project!
Putting down $20k of my regrantor budget for now (though as mentioned, we'll likely structure this as a SAFE investment instead of a grant, once we've finished getting commitments from regrantors)
Thanks for submitting this, Aaron! We really like this kind of concrete object-level proposal, which is ambitious yet starts off affordable, and you have quite the track record on a variety of projects. A few questions:
As this is a project for Lantern Bioworks, would you be open to receiving this as an investment (eg a SAFE) instead of grant funding?
If funded, what do you think your chances of success are, and where are you most likely to fail? (I've set up a Manifold Market asking this question)
Could you link to your Lightspeed application as well?
Conflict of interest note, Aaron was an angel investor in Manifold Market's seed round.
Wanted to call out that Holly has launched a GoFundMe to fund her work independently; it's this kind of entrepreneurial spirit that gives me confidence she'll do well as a movement organizer!
Check it out here: https://www.gofundme.com/f/pause-artificial-general-intelligence
I'm excited by this application! I've spoken once with Holly before (I reached out when she signed up for Manifold, about a year ago) and thoroughly enjoy her writing. You can see that her track record within EA is stellar.
My hesitations in immediately funding this out of my own regrantor budget:
Is moratorium good or bad? I don't have a strong inside view and am mostly excited by Holly's own track record. I notice not many other funders/core EAs excited for moratorium so far (but this argument might prove too much)
Should Holly pursue this independently, or as part of some other org? I assume she's already considered/discussed this with orgs who might employ her for this work such as FLI or CAIS?
I would be even more excited if Holly found a strong cofounder; though this is my bias from tech startups (where founding teams are strongly preferred over individual founders), and I don't know if this heuristic works as well for starting movements.
Hi Kabir! Unfortunately, I'm pretty skeptical that https://ai-plans.com/ is going to be much used and would not fund this out of my regrantor budget.
This kind of meta/coordination site is very hard to pull off, as it suffers from network effect problems (cf the cold start problem). Without established connections or a track record of successful projects, even if the idea is good (which I'm not judging), the project itself won't hit critical mass. I might change my mind if you demonstrated substantial interest (hundreds of users, or a few very passionate users)
I appreciate that you've coded up your own website (I think?). Kabir, at this stage I would focus not on any specific EA project but rather just on becoming a better software developer; apply for internships/jobs.
If you really want to do something "EA/AI Safety-ish" (though I don't think this would be a good rationale), consider just writing criticisms for individual plans and posting them on the EA Forum.
Thanks for the writeup, Adam! I like that the grant rationale is understandable even for myself (with little background in the field of alignment), and that you've pulled out comparison points for this salary ask.
I generally would advocate for independently conducted research to receive lower compensation than at alignment organizations, as I usually expect people to be significantly more productive in an organization where they can receive mentorship (and many of these organizations are at least partially funding constrained).
I share the instinct that "working as an independent researcher is worse than in an org/team", but hadn't connected that to "and thus funders should set higher salaries for at orgs", so thanks for mentioning.
Tangent: I hope one side effect of our public grant process is that "how much salary should I ask for in my application" becomes easier for grantees. (I would love to establish something like Levels.fyi for alignment work.)
Haha yeah, I was working on my writeup:
I generally think it's good that David's work exists to keep EA/longtermist causes honest, even though I have many disagreements with it
I especially like that David is generally thoughtful and responsive to feedback eg on EA Forum and article comments.
In the grand scheme of things, $2k seemed like a very small cost to cover 2 years' worth of future blogging.
On reflection, I might have been too hasty in granting the largest amount, perhaps due to mentally benchmarking against the larger grants I've been looking at. At this point I might downsize it to $1k if there were a convenient way to do so (and we decided to change the grant) -- but that's probably not worth it given the small sums, except as a potential data point for the future.
Thanks for the writeup, Rachel W -- I think paying researchers in academia so they are compensated more closely to industry averages is good. (It would have been helpful to have a topline comparison, eg "Berkeley PhDs make $50k/year, whereas comparable tech interns make $120k/year and fulltimers make $200k/year".)
I really appreciate Rachel Freedman's willingness to share her income and expenses. Talking about salary and medical costs is always a bit taboo; it's brave of her to publish these so that other AI safety researchers can learn what the field pays.
We'd love to have other regrantors (or other donors!) help fill the remainder of Rachel Freedman's request; there's currently still a $21k shortfall from her total ask.
Rachel W originally found this opportunity through the Nonlinear Network; kudos to the Nonlinear folks!
Main points in favor of this grant
This grant is primarily a bet on Gabriel, based on his previous track record and his communication demonstrated in a 20min call (notes)
Started Stanford AI Alignment; previous recipient of OpenPhil fieldbuilding grant
His proposal received multiple upvotes from screeners on the Nonlinear Network
I also appreciated the display of projects on his personal website; I vibe with students who hack on lots of personal side projects, and the specific projects seem reasonably impressive at a glance
I don't feel particularly well qualified to judge the specifics of the proposed experiment myself, and am trusting that he and his colleagues will do a good job reporting the results
Process for deciding grant amount
Gabe requested $5000 for this project, but as he's planning to apply to several other sources of funding (and other Nonlinear Network grantmakers have not yet reached out), filling half of that with my regrantor budget seemed reasonable.
Conflicts of interest
I saw from your EA Forum post (https://forum.effectivealtruism.org/posts/hChXEPPkDpiufCE4E/i-made-a-news-site-based-on-prediction-markets) that you were looking for grants to work on this. As it happens, we're working on a regranting program through Manifund, and I might be interested in providing some funding for your work!
A few questions I had:
- How much time do you plan on investing in Base Rate Times over the next few months?
- What has traffic looked like (eg daily pageviews over the last month or so)?
- How do you get qualitative feedback from people who view your site?
Also happy to find time to chat: https://calendly.com/austinchen/manifold
@DamienLaird: Thanks for the update! I'm sorry to hear that you won't be continuing to write, as I've enjoyed your blogging these last few months. As I've conveyed via email, I appreciate the refund offer but think you should keep the investment, as you've already dedicated significant time towards what I consider to be good work.
Best wishes with your next projects!
Hey! I think it's cool that you've already built and shipped this once -- I'd love to see more prediction sites flourishing! I appreciate that you provided an image of the site too; it looks pretty polished, and the image really helps us understand how the site would function.
Given that the site is already mostly built, it seems like your hardest challenge will be finding users who are excited to participate -- especially if you're targeting the Bulgarian audience, as forecasting is already something of a niche, so Bulgarian forecasting would seem to be a niche within a niche. To that end, I'd definitely recommend conducting user interviews with people who you think might be a good fit (I found the books "The Mom Test" and "Talking to Humans" to really help me get comfortable with user interviews).
A couple questions:
What kind of feedback did your first set of users have on the site?
What do you plan on doing differently this time around to try and get more usage?
Hi Devansh, I very much think the problem of retroactive impact evaluation is quite difficult and am excited to see people try and tackle the area! It's nice to see that you've already lined up three nonprofits (from your local area?) to assess.
Have you already spoken with these nonprofits about assessing their impact? If so, what have their responses been like?
Have you identified the evaluators who will be doing the work of impact assessment? If so, what are their backgrounds like?
Hi Jesus! A Google Sheets add-on for Manifold is definitely not something we'd ever considered before; thanks for suggesting it! I think a lot of professionals spend their time in Google Sheets, and making it easier to access forecasts or use forecasting results in their formulas seems potentially very useful.
Some questions I had:
(As Ernest asked) how specifically would it work? Do you have a mockup or something that would demonstrate its functionality?
Is there a simpler version of this you could make that would be useful (eg a template Google Sheet with formulas that read from Manifold's API, instead of an add-on)?
Who do you think would be using this add-on, besides yourself? Have you spoken with them about their use cases?
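For what it's worth, the "simpler version" might not even need a full add-on: a small Apps Script custom function can pull a market's probability straight into a cell. This is just a sketch, assuming Manifold's public `GET /v0/slug/{slug}` endpoint and its `probability` field on binary markets; the function names here are made up for illustration.

```typescript
// Apps Script provides UrlFetchApp inside the Sheets runtime.
declare const UrlFetchApp: any;

// Pure helper, kept separate so the parsing can be tested outside Sheets:
// extract the 0..1 probability from a Manifold market JSON payload.
function parseProbability(body: string): number {
  const market = JSON.parse(body);
  return market.probability;
}

// Hypothetical custom function: call from a cell as =MANIFOLD_PROB("some-market-slug").
function MANIFOLD_PROB(slug: string): number {
  const url = "https://api.manifold.markets/v0/slug/" + encodeURIComponent(slug);
  const resp = UrlFetchApp.fetch(url);
  return parseProbability(resp.getContentText());
}
```

Pasting something like this into a sheet's script editor would let anyone build dashboards off live forecasts without installing anything.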
Hi Ryan, I really love the innovative way you've chosen to use Manifund (as a bidding mechanism between three different projects to allocate ad spend!) And naturally, we're super interested in guidelines to help inform future impact market rounds.
A couple of questions for you:
How did you settle on these three areas (college students, earthquakes, and hurricane forecasts)?
For a project with $500 to spend on ads, how many people would you expect to reach?
Hi Samuel, it's cool to see your commitment to making forecasting fun -- a big part of what I think has made Manifold succeed is an emphasis on ease of use and levity~
A couple questions:
What does your ideal participant look like? Can you point to a few examples of people who are already excited to participate in this?
What kind of impact are you hoping to have, as a result of running these fun events?
Hey Joshua! I've always believed that the comments on Manifold were super helpful for improving forecasters' accuracy -- it seemed so obvious as to not even need testing in an RCT, haha. It's cool to see the amount of rigor you're committing to this idea, though!
Some questions for you:
Based on the different possible outcomes of your experiment, what different recommendations would your project generate for prediction platforms? Eg if you find that comments actually reduced forecasting accuracy somehow, would the conclusion be that Manifold should turn off comments?
What specific forecasting platform would you use (is it one that you'd build/have already built?)
How many participants do you expect to attract with the $10k prize pool? How would you recruit these participants?
Hey Valentin! Always happy to see new proposals for ways to incorporate Manifold where different users spend their time. I'm not a user of Telegram myself, but I know a lot of folks worldwide are!
How many users (either total or monthly) have your popular Telegram bots received? How much usage have they seen?
What kind of Telegram channels or group chats do you expect to make use of the bot? What kind of questions would they ask?
Hey David, thanks for this proposal -- I loved the in-depth explainer, and the fact that the experiment setup lets us learn about the results of long-term predictions on a very short timeframe.
Am I correct in understanding that you're already running this exact experiment, just with non-superforecasters instead of superforecasters? If so, what was the reasoning for starting with them over superforecasters in the first place?
How easily do you expect to be able to recruit 30 superforecasters to participate? If you end up running this experiment with fewer (either due to funding or recruiting constraints), how valid would the results be?
Hey William, I'm always excited to see cool uses of the Manifold API -- and Kelly bet sizing is an idea we've kicked around before. Awesome to see that it's a project you already have in progress! As you might know, Manifold is open source (we just added a limit order depth chart courtesy of Roman) and we're open to new contributions; though probably to start, a standalone app is a better way of testing out the user interface. And feel free to hop in our #dev-and-api channel on Discord with questions~
Some questions for you:
What tech stack are you building this in?
One concern I've always had with Kelly is that it doesn't seem to incorporate degree of certainty, making it seem hard to use in real contexts -- e.g. if two equally liquid markets are both at 20% and I think they should both be 50%, Kelly recommends the same course of action even if one is "Will this coin come up heads" and the other is "Will the president be republican in 2025". Does this seem true/like an issue to you?
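To make that concern concrete, here's the textbook Kelly formula for a binary bet, sketched in TypeScript (a simplification that ignores Manifold's AMM liquidity and price impact): the recommended stake depends only on the market price and your point estimate, with no term for how confident you are in that estimate.

```typescript
// Kelly fraction for betting YES on a binary market.
// p: current market price (cost of a YES share paying out 1)
// q: your believed probability of YES (assumed q > p)
// A YES share bought at p offers odds b = (1 - p) / p, and Kelly gives
// f* = (b*q - (1 - q)) / b, which simplifies to (q - p) / (1 - p).
function kellyFraction(q: number, p: number): number {
  return (q - p) / (1 - p);
}

// Both hypothetical markets get the identical recommendation -- 37.5% of
// bankroll -- even though a coin-flip estimate is far more robust than a
// guess about a 2025 election.
const coinFlip = kellyFraction(0.5, 0.2);
const election = kellyFraction(0.5, 0.2);
```

One standard patch is "fractional Kelly" -- shrinking your estimate toward the market price (or just betting, say, half the full Kelly stake) as a crude stand-in for uncertainty about q -- but that's a tuning knob, not something the base formula gives you.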
Hi Damien, it's cool that you've already been putting significant time into writing up and publishing these posts; I've just subscribed to your Substack! You should consider cross-posting your articles to the EA Forum for increased visibility ;)
A couple questions that might help investors thinking about investing:
What kind of feedback have you gotten on your blog posts so far?
Where do you see your blog adding value, compared to other sources of info on GCRs?
Hi Hugo, I really appreciate that you're trying to bring forecasting to a wider audience via translations (I used to scanlate manga from Japanese to English, haha). A couple questions for you:
Can you give a few examples of forecasting content that you'd intend on translating into Portuguese, and an estimate of how many such pieces you would translate using your funding?
How would you plan on driving traffic or interest to your new website?
Hi Sheikh! This seems like a neat project - it's awesome to hear that Nuno is involved here too. A couple questions that might help investors evaluating this:
What are the deliverables if experimentation goes well -- eg published paper? Blog post? Interactive website?
Roughly how much time do you and Nuno expect to put into this before deciding whether to scale up?
For the record, capturing a discussion on Discord: This proposal was submitted late to the ACX Minigrants round, and normally would not be included in the round.
That said, in light of 1) the topicality of the proposal, 2) Ezra's past track record, and 3) desire to be impartial in supporting competitors to Manifold, I'm leaning towards allowing this proposal to receive angel and retro funding.
Let me know if there are any objections!