
Experiments to test EA / longtermist framings and branding

ActiveGrant
$26,760 raised
$115,000 funding goal

Project summary

There has been much debate about whether people engaged in EA and longtermism should frame their efforts and outreach in terms of ‘effective altruism’, ‘longtermism’, ‘existential risk’, ‘existential security’, ‘global priorities research’, or by only mentioning specific risks, such as AI Safety and pandemic prevention. (1)(2)(3)(4)(5)(6)(7)(8)

However, these debates have largely been based on a priori speculation about which of these approaches would be most persuasive. We propose to run experiments directly testing public responses to these different framings. This will allow us to assess how responses to the different approaches vary along various dimensions (e.g. popularity, persuasiveness, interest, dislike).

What are this project's goals and how will you achieve them?

We propose to conduct public attitudes surveys and message testing experiments, which would address these questions. We would then make reports of the results publicly available (e.g. on the EA Forum) so that the community can update based on these findings.

Our goal is for these results to inform the decisions of EA/longtermist decision-makers across the community (ranging from those at core movement-building orgs to funders, movement builders and others). We see these results as potentially influencing decisions both large (should we stop promoting “effective altruism” and refocus our efforts on alternative brands or individual causes?) and small (should I frame my new program or my individual outreach more in terms of "longtermism" or just "risks from AI"?).

These studies would also assess how effects might differ across groups (e.g. students, or people of different genders, ages, and races). Such analyses may therefore also help the EA and longtermist communities become more representative and more diverse, by avoiding messages which are off-putting to particular groups.

The proposed project would include studies such as:

  1. Experiments to understand how responses to the ‘effective altruism’ brand compare to responses to alternative brandings (e.g. ‘high impact giving’, ‘global priorities’)

  2. Experiments to compare responses to ‘longtermism’, ‘existential risk’, ‘existential security’ or specific catastrophic risks.

  3. Experiments to assess the impact of effective altruism representing a broader or narrower array of cause areas (e.g. how is outreach affected by EA representing primarily AI risk vs a broader array of causes).

  4. Gathering qualitative data from respondents about their impressions of, and associations with, these terms, and assessing whether there are any systematic misunderstandings or surprising impressions.

How will this funding be used?

The funding will be used for a combination of survey costs (e.g. compensating participants in the studies and platform fees) and staff costs (to design, run, analyze and report on the surveys).

The exact costs of each survey depend on the sample size needed to have adequate statistical power for a given design (influenced by factors including but not limited to: how many messages we are testing, whether we are weighting the sample to be representative of a given population, how many messages each participant receives), and how long the survey instrument is.
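As a rough illustration of how sample size drives these costs (the numbers below are hypothetical and not drawn from our actual designs), the standard two-proportion power calculation shows how quickly the required sample grows as the difference between messages shrinks:

```python
from math import ceil
from statistics import NormalDist


def n_per_group(p1: float, p2: float,
                alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate respondents per arm needed to detect a difference
    between two message response rates (two-sided two-proportion z-test,
    normal approximation)."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)  # critical value for the significance level
    z_b = z.inv_cdf(power)          # quantile corresponding to the target power
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_a + z_b) ** 2 * var / (p1 - p2) ** 2)


# A 5-point difference between messages (50% vs 55% favorable) requires
# roughly nine times as many respondents per message as a 15-point one:
print(n_per_group(0.50, 0.55))  # prints 1562
print(n_per_group(0.50, 0.65))  # prints 167
```

This is why the number of messages tested and the size of the effects we want to be able to detect dominate the per-study budget: each additional message arm multiplies the required sample.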

The exact number of studies and designs we can employ will depend on the amount of funds raised. With the maximum requested funds raised, we will run multiple studies using different designs to increase robustness. This will also allow us to run larger studies that can identify small differences between messages or sub-populations with greater precision. With the minimum requested funding raised, we will run and publicly report on one study (including necessary pilot studies) as a proof of concept, to help the community assess the utility of this approach.

Who is on your team and what's your track record on similar projects?

Rethink Priorities’ Surveys and Data Analysis Team comprises Senior Behavioral Scientists Willem Sleegers (LinkedIn, Scholar, Website), who is also a Research Affiliate at Tilburg University, and Jamie Elsey (LinkedIn, Scholar, Website), and is managed by Principal Research Director David Moss (LinkedIn, Scholar). Each worked on multiple academic projects prior to joining Rethink Priorities.

Rethink Priorities’ Surveys and Data Analysis Team has an extensive track record of conducting high quality projects targeted at the interests of actors in the EA and longtermist space. Since hiring Jamie and Willem less than 2 years ago, we have completed over 40 substantial projects, plus over 50 smaller consultations, including multiple commissions for Open Philanthropy, the Centre for Effective Altruism, 80,000 Hours, Forethought Foundation, Longview and others.

As most (>80%) of these projects are private commissions, we cannot share many of them. However, they have included:

What are the most likely causes and outcomes if this project fails? (premortem)

Given our strong track record as a team and organization, we place low probability on the risk of operational difficulties or a low-quality product.

One possible ‘failure’ mode is if the results are accurate and robust but practically uninteresting. For example, all the messages/framings we test might perform equally well, with no significant differences between them. However, this would only be a qualified ‘failure’: knowing that these differences in framing make no difference to receptiveness to our causes would itself be useful, might inform decisions, and could redirect EA attention away from unfruitful speculation about whether one approach is better than another.

Another practical failure mode is that the results are useful and actionable, but decision-makers do not update on them. Our core plan involves publishing the results publicly on the EA Forum, where relevant audiences are likely to encounter them. However, we will also reach out to some of the most relevant decision-makers directly regarding the results. With larger amounts of funding, we will be able to dedicate more time to describing and illustrating the results of the studies and engaging in more outreach to ensure decision-makers are aware of them and know how to make use of them. We also believe that the project being funded by a public funding mechanism like Manifund could help advertise the project and increase visibility.

One potential failure mode is if decision-makers update on the results more than is warranted. The first line of our efforts to guard against this is to be very clear in our write-up about what conclusions are or are not warranted by the findings, and to provide clear quantifications of the magnitude of, and uncertainty around, the effects we find, as well as working to make these quantifications accessible to decision-makers. In addition, with more funding, we can run multiple studies to replicate results with different designs and examine different audiences in more detail to ensure that results are robust.

What other funding are you or your project getting?

Our department has received no funding for this project or similar projects. We have also received no funding to provide general support for our department.

We have received many individual commissions from different decision-makers for various projects. However, these are private, and so the results typically cannot be published.

David Moss

3 months ago

Progress update

Update (23/08/2024)

We have run a number of initial studies. We aim to release results from these publicly within the next month and the results have already been shared privately with stakeholders.

I set up a number of markets on Manifold to predict results from these studies.

David Moss

6 months ago

Progress update

Update

We are pleased to announce that we have received a grant from Open Philanthropy to fill the rest of our funding gap (as defined by this fundraiser) for this project. This means that we'll be able to complete the project at the full level of depth and comprehensiveness described on this page.

We have already designed a large number of candidate studies and aim to report results over the summer.

donated $16,650
Marcus Abramovitch

11 months ago

Main points in favor of this grant

  1. We just need to know this or have some idea of it (continuous work should be done here almost certainly). Hard to believe nobody has done this yet.

  2. Came strongly recommended from Peter Wildeford who I very much like. He preferred this to a different project I was also evaluating.

  3. Lots of spillover effects that will be valuable

  4. People involved seem qualified and have good track records doing this work

Donor's main reservations

  1. I don't know these people. It comes somewhat second hand due to a few recommendations from people I trust.

  2. I worry these results will be used too conclusively and not updated on much. As in, people will have too much confidence in the results.

  3. Nothing could come of this but this isn't a big concern to me.

Process for deciding amount

I threw the rest of my balance into this. I didn't know exactly how to balance my Manifund budget over the few months. I didn't know how much more I would be getting if at all. This was a grant I was deciding on with a few other candidates. Once I decided this was the highest EV, I threw my entire remaining balance. Don't look into the weird number.

Conflicts of interest

Nothing to disclose.

David Moss

11 months ago

Hi @MarcusAbramovitch, many thanks for your support and for the thoughtful comment!

(Meta note to Manifund: I only received notifications for comments but not for donations; I'm not sure if that's intentional?)

donated $5,000
Peter Wildeford

11 months ago

I'm in charge of the team running this project and I want to put my personal money on the line to show my commitment to making it succeed.

David Moss

11 months ago

@peterwildeford Thanks for your show of support!

donated $5,000
Austin Chen

12 months ago

I would be very interested in reading the results of this survey, to better understand how to position EA and longtermism! I appreciate especially that there is an established team with a good track record planning to work on this, and that they would publish their findings openly.

I'm funding half of the required ask atm, since I feel that other regrantors or funders in the EA space would be interested in participating too. (Also, my thanks to @Adrian and @PlasmaBallin for flagging that this proposal has been underrated!)

David Moss

12 months ago

@Austin Many thanks, greatly appreciated!

donated $100
Adrian Kelly

12 months ago

This seems underrated to me, I'm very surprised that this hasn't been funded already. Why do you think that is? Or has it been done privately and people think that it's bad optics and seen as inauthentic to be focus grouping your message?

David Moss

12 months ago

@Adrian Thanks! We have run a number of private message testing experiments with a narrower focus (e.g. testing specific ads which an org is considering), but none which are broader/more fundamental like these proposed studies.

These broader studies seem to present something of a coordination problem, where they are of partial relevance to lots of different orgs, but (unlike the narrower studies) not solely relevant to any particular org. The end result is that no particular actor funds them.

donated $10
Plasma Ballin'

12 months ago

Seems like a good project. I think EA has a big optics problem, enough that if I were doing some big EA project, I probably wouldn't label it as "effective altruism" just to avoid turning people off who automatically react negatively to that term. I'm interested to see what brandings people see in a better light.

David Moss

12 months ago

@PlasmaBallin Many thanks, we'd be interested to see the results too!