Active Grant
$501 raised
$20,000 funding goal

Project summary

Create or adapt an existing platform to

  1. pool individual users' charity evaluations,

  2. provide aggregated information on these evaluations to would-be donors, and

  3. encourage would-be donors to give financial support to evaluations they've found helpful.

What are this project's goals? How will you achieve them?

The EA Community Choice competition has been a great experience, engaging many of us with numerous microprojects we'd otherwise never have heard of. It's inspired me to think of a new approach to charity evaluation, which to my knowledge the EA movement hasn't explored: pooled, public, and publicly funded individual evaluations.

The idea will be some combination of the following:

  • Create a website, or repurpose an existing service such as GiveWiki, Patreon, Manifund, Google Worksheets, and/or an instance of Forum Magnum (the EA Forum/LessWrong codebase), to collect and display individual evaluations of any charities users choose to evaluate.

  • Structure as much of the evaluation process as possible in standardised formats - for example: rating out of 10, ranking, estimated room for more funding, text fields for track record, $ per output, estimated value of outputs, hours spent on the evaluation, conditional yes/no recommendation given interest in some particular field, disclosures of pre-existing relationships with evaluatees (that is, the charities being evaluated), other metainformation about the evaluator, whether it's an evaluation of an existing charity or of a potential area in which a charity could be created, and many more. (See the schema sketch after this list.)

  • Develop an API that will allow users to run queries on multiple individual evaluations in the database (to get e.g. mean rating, mean rating weighted by number of evaluations, number of ratings above x, etc. - see the aggregation sketch after this list). This API could be part of GiveWiki, of Manifund, or a standalone service. If standalone, it would have a very basic frontend that tabulates the data (maybe using a service like Streamlit to present it).

  • Provide a simple pipeline for would-be funders to give some amount of their money to express appreciation for helpful evaluations.

  • Tentatively: provide a simple pipeline for charities to pay evaluators to assess them (if so, this payment would also need to be a publicly disclosed field)
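To make the standardised-formats bullet concrete, here's a minimal sketch of what a single evaluation record might look like, assuming a Python implementation. Every field name here is illustrative rather than a settled design; the real schema would be worked out with the community:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Evaluation:
    """One individual's evaluation of one charity. All fields hypothetical."""
    evaluator: str                                 # metainformation about the evaluator
    charity: str                                   # the evaluatee
    rating: Optional[int] = None                   # rating out of 10
    room_for_funding_usd: Optional[float] = None   # estimated room for more funding
    track_record: str = ""                         # free-text field
    cost_per_output_usd: Optional[float] = None    # $ per output
    output_value_estimate: str = ""                # estimated value of outputs
    hours_spent: Optional[float] = None            # hours spent on the evaluation
    recommended_if: Optional[str] = None           # e.g. "yes, if you prioritise animal welfare"
    relationship_disclosure: str = ""              # pre-existing relationships with the evaluatee
    is_existing_charity: bool = True               # False = a potential area for a new charity
    paid_by_evaluatee_usd: float = 0.0             # publicly disclosed payment, if any
```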
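And a sketch of the kind of queries the API bullet describes, building on the hypothetical Evaluation record above. The choice of weighting function is an assumption for illustration (hours spent is just one plausible weight):

```python
from statistics import mean
from typing import Callable, Iterable, Optional

def mean_rating(evals: Iterable[Evaluation]) -> Optional[float]:
    """Unweighted mean of all numeric ratings for one charity."""
    ratings = [e.rating for e in evals if e.rating is not None]
    return mean(ratings) if ratings else None

def weighted_mean_rating(evals: Iterable[Evaluation],
                         weight: Callable[[Evaluation], float]) -> Optional[float]:
    """Mean rating under an arbitrary per-evaluation weight,
    e.g. weight=lambda e: e.hours_spent or 0."""
    pairs = [(e.rating, weight(e)) for e in evals if e.rating is not None]
    total = sum(w for _, w in pairs)
    return sum(r * w for r, w in pairs) / total if total else None

def count_ratings_above(evals: Iterable[Evaluation], x: float) -> int:
    """Number of ratings strictly above the threshold x."""
    return sum(1 for e in evals if e.rating is not None and e.rating > x)
```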

This would address various problems with the current evaluation landscape:

  • Bandwidth limitations preventing large funders from effectively supporting small projects

  • Specialisation in the evaluatee's focus area often being high value, with too few professional grantmakers to provide sufficient specialisation

  • Physical proximity to the evaluatee often being high value (being able to go into their office, for example)

  • Lack of good feedback mechanisms or accountability for existing evaluators

  • 'Democratic' concerns around most current evaluation being ultimately funded by a couple of very wealthy individuals

  • Lack of any way to pool 'microevaluations': different individuals may feel they have an important insight on a charity, or could generate some particular insight with a small amount of work, but don't want to commit to a full evaluation of the charity's other aspects

Ideally the project would be self-sustaining, through a further option to donate to its maintenance/development if it needed bespoke software, or through the community's efforts if it only needed (for example) a shared spreadsheet. Conceivably it could become a social benefit corporation, if many charities are willing to pay to be evaluated, and if that seems like a good direction to go in.

How will this funding be used?

The project has a number of stages, each of which could serve as a checkpoint for deciding whether it's worth proceeding. The funding would be used to support me some part of the way through the following steps:

  1. Gauge level of support for this type of project. How much does the community feel like this would address a need that isn't being sufficiently met? I'm hoping the Manifund response itself will give some indication of this.

  2. Choose an existing platform, or resolve to create one. If need be, build an ultrasimple prototype - for example, a seeded database with a limited API.

  3. Reach out to various people who've already posted evaluations on the forum, or work that could be adapted into an evaluation, for permission to include their work in the evaluations database.

  4. Present the idea to a further subset of the EA community for input, feature requests, etc.

  5. Ideally, find some non-negligible seed funding to make available to contributors, to incentivise use of the platform.

  6. Present an MVP (minimum viable product) to the whole EA community.

  7. If it gains traction, iterate and improve.

I imagine steps 1-5 would take approximately 4-6 months, which is what I'm seeking funding to cover - though if I got substantially encouraging signals from early funders and possible early adopters, I'd be willing to stretch that budget out longer, to bridge the gap until funding for more substantial development came through.

So the level of funding will determine how far through these steps I get. If I start a step, I'll commit to at least finishing that step even if funding runs out.

Who is on your team? What's your track record on similar projects?

My career has taken me to many places, all of which have some relevance to this project:

I worked as a full stack web developer for various companies including Founders Pledge. My personal site is here.

More recently, I've been working on an LTFF project to allow evaluations of extinction risk, global catastrophic risk and 'short-termist' work within the same framework. Explanatory sequence. Simple interactive calculator. Code for full calculator.

I was also a Trustee of CEEALAR, aka the EA Hotel, for 5 years (and remain an advisor), where I assessed hundreds of potential beneficiaries of the project.

I've also been engaged with independent community building in the EA community for many years - I cofounded the forum version of Felicifia, a proto-EA utilitarianism forum, and more recently started the EA Gather Town.

What are the most likely causes and outcomes if this project fails?

The most likely causes are:

  • Not enough engagement from evaluators, especially those who've written evaluation-like work already. If fewer than half of them show any interest, that would be an orange flag.

  • Not enough theoretical buy-in from would-be donors - people are content with the current evaluation landscape, or don't think this would improve it.

  • Not enough practical buy-in from would-be donors: either too few people have both the money and the motivation to use the platform, or awareness of it is not high enough.

  • In particular, I'm aware that it's hard to get sustained awareness for projects within the EA community. I run a semi-regular series of intra-EA advertising posts on the forum because of this difficulty. I think this is substantially the highest risk of failure, and, uncomfortably, also the hardest to get reliable early feedback on.

If I sunk serious effort into the project before finding it to be unworkable, I would write a post-mortem for the forum, explaining what assumptions were wrong, what mistakes I'd made in the process, and any other generalisable lessons I'd learned.

What other funding are you or your project getting?

None. I have only just consolidated this idea, though I've been thinking for a long time about problems with incentives and feedback mechanisms within the EA world.

I might apply for funding from the Infrastructure Fund, but this would be competing with them - and is implicitly critical of their work.


ampdot

3 months ago

Are you familiar with GiveWiki? Do you know how you will achieve something better than them?


Sasha Cooper

3 months ago

@ampdot I must admit I wasn't! Looking through the project, it does feel closer to what I'm imagining than any of the other alternatives I mentioned in the OP - and it looks like a great initiative - but there are some important ways in which it's not doing what I have in mind, or at least not yet:

  • It crowdsources information about donors rather than about evaluations. So you get a score, but there isn't by default any explanatory text behind it (it has 'reasons' and 'endorsements' at the bottom of the evaluatee's page, but they're not required, and none of the top three organisations have any info there).

  • There's no API that I can see (meaning the site can't easily be used algorithmically, or for third-party data analysis). Relatedly, there's not much publicly given metadata - I don't think you can see who the people behind the support score are, for example, or with what weight they've contributed to it. An API allows an effectively unlimited amount of metadata presentation, whereas a humans-only UI needs to restrict itself to whatever seem to be the most important few fields of data, both to reduce clutter and because of the need to anticipate and program them in ahead of time.

  • There doesn't seem to be any mechanism or pathway towards rewarding people for their evaluations (though they say on their blog they've 'suspended evaluations', so maybe some functionality like this is waiting to be reintroduced).

  • What I have in mind is an aggregation of data which is already being generated. For example, with the authors' permission, I (or another user) could take Joel's evaluation of Giving What We Can or Evan's evaluation of the Long Term Future Fund and copy the salient details across, ensuring some minimal amount of content and so avoiding the chicken-and-egg problem of two-sided markets. (I'm unclear to what extent this is a problem for GiveWiki, since I don't know what the input to their support score is.)

  • As far as I can see, their data only updates if someone makes their donation via the platform. If so, this means that if e.g. a big donor makes a single considered donation elsewhere and only later remembers GiveWiki, they've missed the chance to contribute to the site's ranking.

  • I can't find the GiveWiki codebase, so I'm guessing it's either closed-source or using prebuilt Wiki software. They presumably had good reason for this, but I would prefer to make anything like this fully open source, so that people can easily see how the values given on each project are generated.

So, as GiveWiki stands at the moment, I think the projects would be complementary, and I would still like to go ahead with mine. But as with Manifund (see my reply to Tony Gao below), I could imagine it being easier to extend the existing GiveWiki functionality to include the features I have in mind than to code something from scratch.

I've reached out to Dawn Drescher (the main person behind GiveWiki) for her thoughts, both on the comparison between {my project and where GiveWiki is now} and between {my project and where she wants to take GiveWiki}. If she replies privately I'll share whatever I'm able to along with my own updates here.


Sasha Cooper

3 months ago

@ampdot Update: I've had a text conversation with Dawn. We're planning to have a call in person, but Dawn's not available until the end of next week. The upshot so far:

  • Our ideas do have a lot of overlap, though I think I was largely right in my sense of the differences, and it does seem like in GiveWiki's current form they'd be complementary.

    • Their current structure as I understand it has donations as implicit 'recommendations', where evaluators 'reward' donors who pick causes they've retrospectively evaluated highly by increasing the recommendation weight of their donations.

  • They are not currently putting a lot of development into their project, since they were finding not enough money was going via the Wiki to merit it. So this downgrades my expectation of how much interest I would expect to find in my project. I do think my project is more resilient to this problem, though - partly because of what I suggested previously about existing sources of information reducing the two-sided-market problem, and partly because their algorithm seems to downweight older donations (I'm not sure about this, but it seems like their max support scores used to be much higher).

  • I was wrong about it being closed source. Their code is here.

  • They're potentially open to using some of their residual funding to pay me to build the features into their app, which I'm potentially open to doing. We'll talk about this in the call. At the moment, my updated instinct, if the project reaches the coding phase, is to build my idea into an independent API which might be primarily but not exclusively used by GiveWiki. I'll update the proposal to reflect this.

donated $50

Tony Gao

3 months ago

such as Patreon, Manifund, Google Worksheets, and/or an instance of Forum Magnum

None of them seem like they are capable of doing what you are describing. Can you elaborate on how any of them would be good for this use case?


Sasha Cooper

3 months ago

@TonyGao I'm not super familiar with the full functionality of Patreon, Manifund and Forum Magnum, so it's more 'these do something relevant, so I can't rule them out' than that I expect any of them to be sufficient. Each has an element of what I'm describing:

  • Google Worksheets offers an easily shareable and/or copyable database with easy access for any user to use relatively simple functions to process the data. I suspect I would use it by default in the process of gathering existing evaluations, and share the worksheet I used with the community.

  • Patreon makes it relatively easy for individuals to receive payments from supporters, and I can imagine the most practical way of getting money to evaluators being 'have your Patreon link be a field in your evaluation or user profile'. An alternative would be integrated payments with something like Stripe, but I don't know whether a generic payment service could offer a low-friction way to send money to your choice of multiple individuals. Even assuming it could, I can still imagine a Patreon link being the primary option in an MVP.

  • Manifund regranting seems like it could be set up with the emphasis shifted to match what I'm describing. E.g. a regranter could set up their profile to just be a lot of evaluations, or a series of links to evaluations, and people could donate money to them earmarked for a specific cause. I don't think there's much of a way to pool data on this site, but it's possible if the Manifund team got behind this idea they could add much of what I'm describing as features to their site, which could be a lot less hassle than building everything from scratch.

  • Forum Magnum is, for better or worse, among the most feature-heavy forum software I know, with a lot of functionality that could be adapted. E.g. its 'feed' could be restricted to posts tagged 'evaluation'; its pinned posts could be reserved for a single post linking to (or, if possible, embedding) the aggregated data; and its tagging system in general could be a good human-friendly way of sorting into categories we care about. Plus its karma system could be a secondary signal of the value of someone's work that might be easier to keep track of than how much money they'd been offered. And the ability to discuss published evaluations also seems very valuable. Though fwiw, I feel like Forum Magnum contributes fewer of the core ideas than any of the other services above, so I would guess it would be the least likely to find its way into, or onto the path to, an MVP.

Conceivably the final product could be a wrapper around some subset of these services without needing to add extra functions. An API seems like the obvious exception, unless a) Manifund would be willing to implement one, or b) market research shows that it's not a function a significant proportion of users are interested in.


Marine Lercier

3 months ago

That sounds like a good and welcome project! I agree with the problems identified.


Sasha Cooper

3 months ago

@Marine-Lercier Thanks for the confidence vote :)