Joel Becker

@joel_bkr

regrantor

Quantitatively evaluating AI capabilities and risks.

https://joel-becker.com/

$200 total balance
$0 charity balance
$200 cash balance

$0 in pending offers

About Me

My background is in economics and genomics research, running large events, kicking off nascent biosecurity/civilizational resilience organizations, and, more recently, supporting technical AI safety projects.

I plan on using my regranting role to optimize for "good AI/bio funding ecosystem" and not "perceived ROI of regrants I make personally." I think that this means trying to:

  • Be really cooperative behind the scenes. (E.g. sharing information and strategies with other regrantors, proactively helping Manifund founders with strategy.)

  • Post questions about/evaluations of grants publicly.

  • Work quickly.

  • Pursue grants that might otherwise fall through the gaps. (E.g. because they're too small, or politically challenging for other funders, or from somewhat unknown grantees, or from grantees who are unaware that they should ask for funding.)

  • Not get too excited about grants where (1) evaluation would benefit strongly from a project-first investment thesis (e.g. supporting AI safety agenda X vs. Y) or (2) the ideas are obvious enough that (to the extent that the ideas are good) I strongly expect others to fund them (e.g. career transition grants to IMO medalists).

  • Occasionally make small regrants as credible signals of interest rather than endorsements. (To improve speed, information, and funder-chicken dynamics.)

  • Encourage criticism of my thought processes and decisions from the Manifund community.

To the extent that I have an edge as a regrantor, I think it comes from having an unusually large professional network. This, plus not having serious expertise in any particular area, makes me excited to invest in "people not projects."


Comments

Joel Becker

about 1 year ago

I've made a $1.5k offer to this project; @RenanAraujo will do the same. (@Austin could you possibly raise the maximum funding limit, just in case others are interested in going beyond the $3k that we agreed with Alexa?)

Main points in favor of this grant

Renan and I put out a call to an invite-only scholarship program, the "Aurora Scholarship," to 9 individuals recommended by a source we trust. We were aiming to support people who are nationals of or have lived in China with a $2,400-$4,800 scholarship for a research project on a topic related to technical AI safety or AI governance. The project should last for approximately 4-8 weeks (i.e. we aim to offer $600/week at 20h/week).

Our hope is that scholars might use the experience and signaling value of these projects to counterfactually advance to the next stage in their chosen career pipeline (e.g., PhD acceptance, think-tank placement), and that this program will strengthen the Chinese AI safety community. The program is loosely inspired by this CAIS program (though note that we're not affiliated with CAIS, and this does not mean CAIS endorses our program), especially in the sense that each scholar is required to find their own supervisor.

Alexa was one of our excellent applicants.

Matt Sheehan is a well-respected China analyst who obviously believes in this project. I'm looking forward to seeing how Alexa gets on, and am excited for her to benefit from mentor feedback and association.

Donor's main reservations

I feel ill-equipped to judge the object-level benefits (and therefore downsides) of AI governance projects related to China, which worries me a little. But it seems unlikely that a small project driven by a promising undergraduate (and Concordia affiliate) and mentored by a respected, mainstream, US-based China analyst could result in serious downside.

Targeting the Aurora Scholarship invitees in the way we have has greater possible downside. We think this is fairly low, and have taken steps to lower this further (e.g. by not including junior researchers currently located in China).

Process for deciding amount

In her application, Alexa suggested to us that this project would take "median 15h/week" for "4-6 weeks." To allow for possible expenses and planning fallacy, I rounded up each of these numbers in turn and took the maximum: max($30 x 20 x 5, $30 x 15 x 6) = $3,000.
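
For concreteness, a minimal sketch of that arithmetic (variable names are illustrative; reading 5 weeks as the midpoint of Alexa's 4-6 week range is my interpretation):

```python
# Sketch of the amount calculation above; names are illustrative.
RATE = 30  # $/hour

# Round up each of Alexa's numbers in turn and take the larger total:
hours_rounded_up = RATE * 20 * 5  # 15 h/week rounded up to 20, over 5 weeks
weeks_rounded_up = RATE * 15 * 6  # 4-6 weeks rounded up to 6, at 15 h/week

amount = max(hours_rounded_up, weeks_rounded_up)
print(amount)  # 3000
```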

Conflicts of interest

None.

Joel Becker

about 1 year ago

I've made a $1.75k offer to this project. @RenanAraujo and I asked Zhonghao to put this project up, having agreed to fund him for our "Aurora Scholarship" program. (This was supposed to be $4.8k shared between us, but @Austin pipped us to the first $1.2k! I would advocate raising the maximum grant amount if possible @Austin; Zhonghao noted that he would be working on this project for more hours than we had pre-committed to fund, and @NeelNanda's comment strengthens the case for potentially funding the remaining hours.)

Main points in favor of this grant

Renan and I put out a call to an invite-only scholarship program, the "Aurora Scholarship," to 9 individuals recommended by a source we trust. We were aiming to support people who are nationals of or have lived in China with a $2,400-$4,800 scholarship for a research project on a topic related to technical AI safety or AI governance. The project should last for approximately 4-8 weeks (i.e. we aim to offer $600/week at 20h/week).

Our hope is that scholars might use the experience and signaling value of these projects to counterfactually advance to the next stage in their chosen career pipeline (e.g., PhD acceptance, think-tank placement), and that this program will strengthen the Chinese AI safety community. The program is loosely inspired by this CAIS program (though note that we're not affiliated with CAIS, and this does not mean CAIS endorses our program), especially in the sense that each scholar is required to find their own supervisor.

Zhonghao was one of our excellent applicants.

Donor's main reservations

I think the project itself has very low downside.

Targeting the Aurora Scholarship invitees in the way we have has greater possible downside. We think this is fairly low, and have taken steps to lower this further (e.g. by not including junior researchers currently located in China).

Process for deciding amount

As above.

Conflicts of interest

None.

Joel Becker

about 1 year ago

@GavrielK Heads up that I think this has happened now.

Joel Becker

about 1 year ago

I've offered ChinaTalk an extra $5k. I have become more bearish since writing the below, but I still directionally stand by it.

Jordan's recent conversations with Jason Matheny and Dylan Patel are marvelous. He continues to dive deeper into AI, and he continues to have difficulty funding his important niche for understandable but frustrating ~political reasons. I am really, really excited to continue to support ChinaTalk.

(Written hastily; happy to expand.)

Joel Becker

about 1 year ago

I've made a $5k offer to this project. I might end up offering more later. Below written hastily.

Main points in favor of this grant

  1. I agree that emergency response teams are promising in the abstract, broadly for reasons given in the linked post.

  2. Nuno's plan seems reasonable. I think it has Gavin's backing.

  3. As Nuno has written elsewhere, AI feels like "a domain in which we are likely to not have the correct hypothesis in our prior set of hypotheses." I feel more excited about "a bet on variance" in this setting.

  4. ALERT needs someone to take responsibility for it, and Nuno is standing in front of us willing to take responsibility.

Donor's main reservations

  1. I have thought ~0 about who might be the right person to lead this project and don't feel pulled towards taking this more seriously. Meanwhile, my rough and probably-somewhat-misremembered understanding is that Open Philanthropy have been hesitant to throw their weight behind ALERT-style projects because they want the person running it to clear a very high bar. Something about this pattern-matches nicely to my take that the OP cluster is too distrusting of people outside the cluster, which makes me feel more comfortable ignoring the importance of careful selection of the person leading ALERT. But neither my understanding nor my take is well thought-through. So the chance I'm mistaken in thinking that I shouldn't take this concern seriously seems decently high. (And perhaps if I did put thought to the concern, I'd discover that I think Nuno is not the right person to lead this effort. Right now my position is that I haven't even thought about what that would mean, let alone evaluated Nuno in particular.)

  2. "Improve alarm raising so that it catches more emergencies in time, and throws up fewer false positives." This step feels potentially hard to me. At the very least I think it should read "and/or throws up fewer false positives."

  3. Something I really, really like about Dendritic is that it plans to put monitoring, alarm-raising, and emergency response under one roof (but for bio only). I like this because it follows a model that I gather has already been successful elsewhere (Security Operations Centers). It makes sense to me that this model would be less likely to come up against problems like "we've raised the alarm but no-one is responding." I am seriously concerned that ALERT might run into problems like this.

Process for deciding amount

I would prefer to offer a larger amount, but my Manifund budget is pretty constrained. Any amount I offer to this project will be based on some vague notion of 'fair share' between the remaining projects I want to fund before Dec 31.

Conflicts of interest

First, Nuno is my friend. Second, one of my projects (Dendritic) is taking up a part of the ALERT mantle in a different way (monitoring + emergency response for bio in particular). There's no immediate conflict with Dendritic, but it doesn't seem implausible to me that there could be some Nuno-Dendritic collaboration in future given the overlap.

Joel Becker

about 1 year ago

I'm not going to look into this right now, because (my skim of) your project pattern-matches to things that I think other regrantors would fund in my absence. Please feel free to get in touch if you haven't received an offer from someone else in 2-4 weeks' time.

Joel Becker

about 1 year ago

Thank you for taking the time to review, @Austin!

I'd be excited to talk about Manifund becoming a client. Do you have a project in mind?

Joel Becker

about 1 year ago

Thank you for the comment @GavrielK!

On one of your thoughts:

"I've personally received conflicting opinions from people in the field about how much policymakers care about/pay attention to in-depth analyses that they didn't actually commission. I like that this analysis format will result in a solid headline number that can be communicated simply."

I think the skepticism from people in the field is warranted. And I agree it's attractive that this project will result in a headline number -- this is definitely what Vivian and Richard have in mind. But I think there's a related risk to this project: because the (at least MVP) analysis will be less in-depth, the headline number will be easier to undermine.

On your questions:

"I'm sort of surprised that this needed to be pursued through independent funding [...] What's the missing expertise that Qally's brings to the table, and why isn't CHS backing this piece of the project?"

I think the missing expertise is: having the time to do the work! My understanding is that the project hasn't moved for a few months for this reason. That said, since posting this project on Manifund, I'm less confident that Vivian would not contribute (further than they already have) to the nitty gritty of the long COVID estimates; I guess this is mostly upside from your perspective.

As for why CHS isn't backing this project, I think there's a mix of (1) limited JHCHS staff capacity, and (2) part of the original motivation of this project being Vivian receiving mentorship from Richard. (I am happy to explore the possibility of funding from JHCHS at some point.)

"The think-tank-friendliness of it reduces the failure risk from the team not having time to write up results--possibly could rope in the dedicated policy advocates at IfP or Rethink for advocacy-style writing?"

Could you possibly expand on what roping in dedicated policy advocates would involve? Asking them to produce a document that reads like a policy report, given our less friendly write-up? And perhaps asking them to share this document with contacts in government and media?

"Are you also considering long-term costs of other viruses? Just curious--I know this is much more speculative, but my impression is that COVID just had the most attention on it for a while and now there's increasing scientific interest in possible long-term effects of flu. That would also emphasize the importance of endemic disease. But could be way too complicated, just wondering if it was discussed!"

Aron and I are not, at least not right now. But Vivian and Richard have so far considered: common cold, influenza, and TB.

"You reference researching other contributions to disease burden as a possible extension of this project--what contributions would those be, if the bulk of the project has already been completed?"

The other components that Vivian has marked as needing to be revisited are: mainline estimates of direct medical costs of common cold and of influenza, lost productivity from influenza, and learning loss due to common cold and influenza. They already have low and high estimates for each of these, but I think they haven't thought about which of these estimates is more appropriate (which in turn would affect their averaging for the mainline estimate).

Joel Becker

about 1 year ago

Thanks for the questions @NunoSempere.

"Do you already have some decision-makers who could use these estimates to make different decisions?"

Waiting on some emails that will help me answer this; will get back to you soon.

"How valuable do you think that this project is without the long covid estimate?"

I just put this question to Vivian, who says: "I think without the long covid estimate it probably just never gets published."

"Who is actually doing this work? Vivian and Richard, or Joel and Aron?"

Joel and Aron. (If "this work" refers to building the long COVID estimate.)

"Why are you doing stuff $3.6k a time, rather than having set up some larger project with existing biosecurity grantmakers?"

Very possible that I've made a silly decision somewhere, but here's how it looks from my side:

  1. Earlier this year I applied to LTFF and Lightspeed for ~general funding for Qally's. These applications were rejected.

    1. I think rejection was the right call -- even if you think higher-effort quantitative analyses are worthwhile, it's preferable for them to come with demand signals rather than as subsidized discretionary offerings from me.

  2. I didn't apply to ~any other projects or sources of funding because of my US work visa stuff.

    1. Mostly this is 'getting a US work visa attached to your own new organization requires a lot of time.'

    2. To a lesser extent this is 'the expected benefit of finding new projects is lower than usual, because I would only proceed with these projects if I was approved for the visa.'

  3. My visa petition was approved ~4 weeks ago; I have started to seriously look for projects in that time. One project that came up was this one. It currently doesn't have a source of funding, so I put up a Manifund proposal.

  4. All that said, I would, of course, like to take on work at more than $3.6k a time. A few things on this:

    1. I hope that this application will attract more than the minimum!

    2. It's plausible that this will be a larger project. There are a bunch of other things to work on for this project, which I currently have cached as 'things Vivian or Richard will do' or 'things that aren't necessary for MVP.'

    3. I almost feel like I don't know other existing biosecurity grantmakers...! Linch has moved away from this sort of thing. Open Philanthropy staff are in the business of larger amounts of money. My uninvestigated understanding from this proposal is that Convergent Research (where Vivian has recently worked) aren't able to fund adjacent projects. So I posted on Manifund and notified @GavrielK. (I'm apparently allowed to fund myself, but don't want to.)

    4. There is another project that I might take on, which I expect would be ~full-time for many months.

Joel Becker

about 1 year ago

I think I won't make an offer to Brian for now. (But I think I would be keen to offer Brian emergency funding if that became relevant between Phase I and Phase II.)

My thinking on this project prior to it getting partial funding from another funder was:

  1. Brian is very smart (or at least very highly recommended by ex-colleagues), and he comes across quite strongly (here and in some private messages) as high integrity/very open.

  2. All of my attempts to find flaws in the object-level case for this grant fell flat. Still, I feel like my ability to evaluate this kind of thing on the object-level is seriously lacking; I wouldn't be so surprised if a disinterested expert reviewer could give me knockdown reasons against.

  3. I would really like some disinterested expert fairy to come along and give me a thumbs-up or thumbs-down on the object-level case. I was expecting to receive weak evidence of this type ~now (just after Panoplia received funding from another source). This probably would've been pivotal for me.

  4. There were good reasons to expect Brian to have difficulty fundraising from sources I might have expected to be amenable to this sort of project (see comments below).

  5. I'm taking up much too much of Brian's time relative to the size of my possible offer.

Now, Panoplia are set to receive $150k out of $300k total from a more expert funder. This makes me think:

  1. Panoplia has a disinterested expert endorsement! Fantastic. And congratulations!

  2. The previous bull case for the grant from my perspective -- that (Gavriel and) I have special insight on Brian being great, and that Panoplia would have unjustified difficulty getting money from other sources conditional on being worthwhile -- has kind of gone away.

  3. I'm ~unconcerned about changing marginal returns. This looks like it could go either way. But I am confused at why the funder has not provided the full $300k (given that they are in the habit of making grants significantly larger than $150k). I would be keen for Panoplia to be fully funded conditional on disinterested expert endorsement, but this is not under my control (with $17.8k balance remaining).

Overall: I now feel comfortable recommending Panoplia for significant funding; unfortunately significant funding is not under my control; one thing that is in my control is bridge funding whilst waiting for the next ~$150k, so I am happy to reconsider if this circumstance (me still having >=$10k balance and Panoplia needing bridge funding) comes about.

Joel Becker

over 1 year ago

Had a think and spoke with a philosopher I respect. Right now I think I do not want to fund this project, for the following reasons:

  1. The summer fellowship indeed seems helpful for later academic placements (as you say). I'm less convinced that it will lead to you doing more exciting research, which feels closer to the thing I care about.

  2. I suspect that you might get funding from non-Manifund sources, e.g. your department. (Do ANU expect junior scholars to pay thousands out of Ph.D. stipends without extra support from departments? If yes... wow!)

  3. In part from my own experience being a junior researcher in GPR-land, I start pessimistic about the chances that junior GPR researchers will end up focusing on questions that I think are important (separate from whether or not they will be successful). This makes me relatively more interested in funding people with pre-existing research track records.

  4. Weak view, largely stolen from others, that definitely isn't the pivotal consideration: I am skeptical that research on imprecise credences will be impactful. This is because I expect that the conclusions will make options appear more permissible than previously thought, which makes the research less likely to be action-relevant.

I'm sure I've got some facts or interpretations wrong above -- happy to go back and forth!

Joel Becker

over 1 year ago

Thank you very much for the helpful detail Felipe! I'll have a think about this.

Joel Becker

over 1 year ago

That's helpful -- thank you David!

Joel Becker

over 1 year ago

Main points in favor of this grant

In my regrantor bio, I wrote that my “edge as a regrantor [...] comes from having an unusually large professional network.” This grant is primarily a bet on that network — when I asked people I trust about projects they are unusually excited about, the Good Ancestors Project received a very strong recommendation.

The case in favor seems clean:

  1. Greg Sadler is a very senior public servant who deeply understands the Australian government. This makes him more likely than most to have considerable influence.

  2. He has already had some reasonably significant wins, above pre-funding expectations.

  3. Without additional funding, he might soon return to his previous position. 

    1. This makes it more likely that the impact of this grant is counterfactual. 

    2. I take Greg's revealed preference (wanting to continue with GAP if possible) to indicate that he believes returning to his previous position would be the less impactful option.

  4. Not funding senior professionals who make bold moves into more impactful work (and exceed expectations in that work) when the funding winds change would set awful incentives.

  5. Greg has tentatively agreed that “[i]f GAP stops operating before 30 June 2024 (or some similar date) it will return this grant.”

    1. I leave the exact details to be worked out with Austin. 

    2. This was somewhat important to me. Manifund dollars will likely be necessary but insufficient for GAP to continue; I wanted to make sure that the funding wouldn’t be wasted if GAP wasn’t able to raise remaining funds.

  6. I do not sense that there is much negative selection going on here.

Donor's main reservations

My main reservation is presumably the same reservation that other grantmakers have had — the Australian government is not an unusually important actor with regard to making advanced AI safe or ending pandemics. To me, this seems like a pretty severe reservation; I worry that I am not triaging sufficiently hard.

The incentives point above weighed on me quite heavily when thinking about this grant. At one point (not now) I thought it would be the pivotal consideration. But I distrust my reasoning here for two reasons: 

  1. My regrant is a very blunt instrument against this problem, and 

  2. It is more reasonable for implicit contracts to be broken in extreme circumstances unforeseen by both parties, and the FTX disaster (with its effects on the remaining funding environment) is one such circumstance.

Lastly, I’m a little concerned that making this regrant will prevent me from making offers that I am even more excited about in the near future.

Process for deciding amount

Less of a process, more of a cloud of reasons:

  1. $10k is the smallest amount that felt respectful of Greg’s time.

  2. $10k is ~40% of my remaining budget; I have other projects that I would like to fund.

  3. Greg was happy to receive $10k even though it is significantly below his total ask (which I would not be able to cover using my regranting budget).

Conflicts of interest


None.

Joel Becker

over 1 year ago

Why should I think your research is impactful in expectation? I couldn't find any information on your previous research in this proposal or on your website.

If your ask is implicitly "I am seeking broad research training, not support on a particular research project" then I'd instead want you to expand on why this is likely to be helpful. (Are there experts in your preferred sub-fields at MIT? If there are not, should that make me concerned about you staying at MIT? What do you expect will be the short-term outcomes of a trip like this? Etc.)

Joel Becker

over 1 year ago

I read your description, watched your video, and have a background in economics (including a predoctoral fellowship with Daniel Benjamin, one of the experts on welfare metrics beyond GDP, though I didn't work on this myself) -- and I still feel I have zero understanding of what your project is or what it does.

My guess is that this is a red flag about the project. The other possibility I would consider is that the project is valuable but not presented in a way that makes it straightforward for me to understand. If this latter possibility is the case, then I encourage you to rewrite the application for clarity about what BaseX is.

Joel Becker

over 1 year ago

Main points in favor of this grant

I have been trying to nudge Rob in this direction since earlier this year.

Earlier this year I was involved in a startling conversation. Rob Long had been speaking about the chances that we will see conscious AIs in the coming years. (And I had started to grok this possibility.) Now, he was talking about research collaborations he might aim for in future. Rob had empirical projects in mind that could only be done with access to frontier models. Should he bug a colleague-of-colleague to work at [top AI lab X]? Should he ask [collaborator Y at top AI lab Z] about the possibilities at his employer? Rob’s conclusion was: not right now. Rob already had his plate full with other work, the request might be annoying and, besides, Rob had already had a similar request to similar people declined-ish a couple of months ago.

This situation struck me as preposterous. Here is one of the world’s top experts on AI consciousness, claiming a nerve-wracking chance of AI consciousness in the not-too-distant future, with fairly strong professional links at top AI labs and ~shovel-ready ideas for empirical projects, preparing a not-terribly-costly ask (give me a work desk, ~0 engineering time, and model access to work on these research questions)... and he is unsure whether he should make the ask?!

It seemed to me that the right question to ask was more like “should I try to start a research group as soon as possible?”. (Of course there are many reasons why starting a research group might be a bad idea. But, even if that were the case, Rob should at the very least be asking to work at places that would enable him to work on his empirical projects.)

I want Rob to move the needle on empirical AI consciousness projects harder and faster. In the short-term (see below), this means doing less ‘public-facing work and thinking about his professional opportunities,’ and more ‘thinking through a theory of change for AI consciousness research, spending more time on empirical research with existing collaborators (e.g. Ethan Perez), and pushing for ways he can continue this research in the near future.’

Donor's main reservations

First, I don’t think Rob needs funding in some sense. But I’m not super concerned about this. People should be paid for important work and, besides, I’m partly trying to set up a good incentive environment for future grantees.

Second, I think that I can only counterfactually change a fraction of Rob’s professional activities. Currently, his work falls under the following buckets:

  1. Co-authoring a paper with Ethan Perez,

  2. Co-authoring a paper with Jeff Sebo,

  3. Responding to media and podcast requests about his recent paper, and other writing related to that paper, and

  4. Job search stuff, applying for funding.

Bucket (1) is the sort of work that I want Rob to be doing more of: activities that directly move the needle on empirical work in his area of expertise. 

I instinctively feel less excited about bucket (2), because this paper will not involve empirical AI consciousness research. But I don’t want to impose on Rob’s pre-existing commitment to this project. Also, the issues of the paper have some overlap with writing a strategy doc. (Though this overlap should not be overstated, as academic work is optimized for different things than a strategy document). 

Bucket (3) I think Rob should be doing less of. The public-facing work mentioned above does not obviously move the needle on empirical work — and to the extent it does (e.g. indirectly via field-building or career capital), I would feel better if Rob undertook this work after having reflected more on his theory of change for AI consciousness research, rather than as a natural consequence of the release of his recent paper. And, unlike for bucket (2), giving up on some bucket (3) commitments feels low-downside — Rob is not going to be a less interesting podcast guest in 1 year's time!

Bucket (4) feels like a waste of time that I want Rob to avoid.

My understanding is that buckets (3) and (4) add up to a bit less than half of Rob’s time at the moment.

Third, empirical projects in AI consciousness feels like a tricky area where I am extremely out-of-my-depth. I am strongly relying on Rob being a ‘reasonable expert who won’t make dumb decisions that make me regret this grant.’ That said, I feel very good about relying on Rob in this way.

Process for deciding amount

Time buckets (3) and (4) add up to 20 hrs/wk * 5 weeks = 100 hours. Rounding up to 120 hours (for possible underestimation, and for professional expenses), at $60/hour, I will provide $7,200 in funding. I'm leaving up to $12,000 as a funding goal, in case anyone wants to fund the remainder of Rob's time during the 5 weeks.
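
As a minimal sketch of that arithmetic (all figures from the paragraph above; names are illustrative):

```python
# Sketch of the funding calculation above; names are illustrative.
HOURLY_RATE = 60     # $/hour
HOURS_PER_WEEK = 20  # buckets (3) and (4): a bit less than half of Rob's time
WEEKS = 5

base_hours = HOURS_PER_WEEK * WEEKS  # 100 hours
padded_hours = 120                   # rounded up for underestimation and expenses

offer = HOURLY_RATE * padded_hours   # $7,200
funding_goal = 12_000                # room for others to fund Rob's remaining time
print(offer, funding_goal)           # 7200 12000
```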

Conflicts of interest


I was housemates with Rob for a couple of months in early 2023, which is how I found out about this grant.

Joel Becker

over 1 year ago

Missed a helpful @Brian-Wang tag. (@Rachel, edit/delete tool would be great if easy!)

Joel Becker

over 1 year ago

Thank you so much for your helpful, clear, and thorough answers Brian!

Just a note to say that I'm still thinking about this. I'm interested in reaching out to references -- by default this would be people I know who are familiar with your work, but I'd be happy to speak with anyone you proactively suggest. (Though I should say that I am unsure how references would factor into my evaluation, since my main confusion at this point is ~"how should I feel about my ability to evaluate early science projects in general?".)

Joel Becker

over 1 year ago

Hi Jonas! Unfortunately, I don't feel like I'm well-positioned to evaluate your project relative to other regrantors (or funders). My comparative advantage is in having a wide professional network of people who know their stuff, not context on AI research labs.

Joel Becker

over 1 year ago

Context to onlookers: I first got to know Reed via a very strong recommendation from a senior EA bio figure; I think that selecting him for a previous highly-selective program I ran was one of the top 5 of the ~200 selection decisions I made for that program.

I'd be interested to hear @RenanAraujo's perspective on this grant, as the regrantor with the most relevant professional experience.

Joel Becker

over 1 year ago

Oh, and I really appreciate you laying out possible outcomes segmented by percentiles. (I might steal this for my own applications as a future grantee!) I would've slightly preferred you to talk about >95th rather than >75th percentile, but no big deal.

Joel Becker

over 1 year ago

I'm feeling a little skeptical of your theory of change, especially:

  1. "Create a larger, more diverse pool of emerging forecasters and superforecasters, which increases the quality of individual & aggregated forecasts."

Two reasons for this:

  1. It doesn't seem like having a larger pool of forecasters is an important bottleneck for use-cases I am aware of. "Regulatory approval" and "acceptance inside prestige institutions" feel like better candidates.

  2. I would guess that university forecasting clubs are a less effective means of creating top forecasters than "jobs listing forecasting skill as a desired qualification" or "excellent public examples of forecasting to emulate" (e.g. Misha's AI bio report). Not sure about cost-effectiveness, though.

I've spent extremely little time reflecting on this, so apologies if the above is confused or otherwise sloppy. Interested in your thoughts!

Joel Becker

over 1 year ago

@Daniel No no, thank you for engaging so kindly with negative feedback. I'm very grateful that we are able to have this back-and-forth at all.

I think we've hit the bedrock of our disagreement, so will leave things here for now.

By the way, I like the design of the website! It's pretty and intuitive!

Joel Becker

over 1 year ago

@Brian-Wang Thank you once more!

Here's some more reasoning-out-loud. I'm wondering whether I should infer from

"[Gates] would require a higher level of preliminary evidence to fund something like this"

and

"[other funders] see themselves as more development-focused, rather than early-stage research-focused, which they’d rather leave to the NIH"

that alternative funders have good reason to require more evidence or to be more development-focused. For example, they might know that NIH reviewers have more relevant expertise in early-stage research, which means that NIH can consistently select better projects than would these alternative funders. If something like this is true, then, since I am in a directionally similar epistemic position (unable to evaluate early-stage research proposals directly), perhaps I should also be skeptical.

From this position, the (false binary) options are:

  • Fund Panoplia. Although in some sense my default expectation should be pretty pessimistic (below Gates funding bar), the world benefits from me having some expected edge (gained from exceptionally strong Alvea references about Panoplia staff, which Gates et al. might not find highly credible but I do), from not having the project be delayed by ~8 months, and from having the project exist with certainty.

  • Don't fund Panoplia. The world benefits from whichever other project Manifund-plus-related-funders' dollars go to. Furthermore, the project still occurs with some probability (thanks to e.g. NIH R21 grant), albeit with a significant delay.

Does that sound like a reasonable description of the situation? (One obvious way it could be wrong is if I'm wrong about the reasons why alternative funders might leave projects like this for NIH, so I'm especially interested in learning about that.) If so, I'll discuss with other regrantors in this frame.

Joel Becker

over 1 year ago

I've made ChinaTalk an offer of $10k. This is not necessarily the last offer I will make to ChinaTalk.

In fact, I am currently very excited about ChinaTalk absorbing at least $50k from Manifund (plausibly more). I will advocate for this outcome to other regrantors. At the same time, I am very excited to hear any good reasons against funding ChinaTalk further.

I am agnostic about how this funding should be structured (see Austin's previous comment) -- I will leave Austin and Jordan to figure out what makes most sense.

Below, I explain my thinking around this grant -- my early confusions, the call that gave me some clarity on these confusions, and my current perspective.

Early confusions

When I began considering this grant, I was cautiously excited but very confused. Reasons for excitement included:

  1. I am a long-time ChinaTalk listener. I thought it was fantastic -- detailed conversations with a wide range of experts, on important topics, that are difficult to find elsewhere, with a charismatic host.

  2. China-US relations, China-tech issues, and Chinese views on and actions in AI are all very important.

    1. I can expand, but for now will treat it as obvious.

    2. One point that I haven't seen elsewhere: some paths to AI disaster run through military competition incentivizing state actors to hand over critical or obviously-potentially-dangerous systems to AIs. Having more transparency about what China might do here seems great.

  3. At least according to Jordan, "in the entire United States today, there are no more than 700 China researchers." This level of neglectedness seems almost unbelievable.

  4. In the area I can somewhat independently evaluate, AI, Jordan's understanding and views have always seemed reasonable to me.

Reasons for confusion include:

  1. Uncertainty about ChinaTalk's paths to impact. (See my earlier comment.)

    1. What are they?

    2. What is Jordan optimizing for? Policy influence, better information environment, developing expertise, mentoring China experts...?

    3. What should Jordan optimize for? In other words, how should I think about the relative impact of these paths?

    4. How might Jordan optimize more strongly for particular paths to impact?

  2. Jordan's proposal is not written in a way that I find helpful.

    1. In particular, he seems to be trying to find out which discrete project will most easily get funded. Among other things, this makes it difficult for me to understand his beliefs about what is most important/impactful.

    2. Also, it's too unfocused.

  3. Why aren't others jumping to support ChinaTalk? How is it possible that the best English-language media I can find about China has been run on $35k/year?

    1. Perhaps philanthropy doesn't support ChinaTalk for reasons that would not affect my own excitement.

      1. Grantmakers lack China expertise, leading to a vicious cycle whereby they do not fund work on China that would help them gain expertise.

      2. Grantmakers are unexcited about small media companies in general.

      3. Grantmakers will not consider grants this small.

      4. Grantmakers are too nervous to make grants in this area for political reasons.

      5. Jordan doesn't have a PhD.

      6. Despite the podcast suggesting that Jordan has a wide network, Jordan has very limited networks in philanthropy. (His proposal not being well-optimized for Manifund regrantors is some evidence for this.)

    2. Perhaps philanthropy doesn't want to support ChinaTalk for reasons that from my perspective would be persuasive if true.

      1. I am wrong that Jordan is a smart expert working on important topics with reasonable views. (I am a novice; more serious people recognize that his expertise isn't there, or his views aren't reasonable, or something else.)

      2. I am wrong to infer that others get value from the podcast from the fact that I enjoy the podcast.

    3. Perhaps ChinaTalk takes very little work.

  4. If ChinaTalk has garnered little philanthropic support, and Jordan is an expert in an important, neglected area who "meets regularly with senior US officials to discuss and advise on US-China relations and technology policy," then why isn't Jordan in an important government job?

Notice that I am largely ignoring Jordan's particular proposals regarding what he might do with the money. It doesn't feel like these would be pivotal for my decision to fund.

Call with Jordan

I spoke with Jordan about his proposal. Our chat made me feel significantly more confident in my optimism. I found out, for instance, that:

  1. I shouldn't worry so much about distinctions between ChinaTalk's paths to impact.

    1. Jordan is credibly dedicated to helping prevent really bad outcomes like World War III. Within that, he optimizes for activities that give him energy. This seems extremely reasonable to me.

    2. The paths are synergistic with one another: meeting with senior officials helps with connections, which helps with the information environment, which helps with encouraging a new generation of experts, etc.

  2. The reason why Jordan's proposal is not written in a way that I would find easier to read is that he's not very engaged with the EA memesphere. On balance a good thing with respect to this grant, from my perspective!

    1. Whilst poking him on this, I discovered an especially shocking fact. Luke Muehlhauser was quoted as being very enthusiastic about ChinaTalk and Jordan personally, but too busy to consider grants of this size. If I were looking for Manifund funding for ChinaTalk, such a quote would be in the first few lines of my proposal. I took the absence of this quote in his application as credible evidence in favor of "despite other talents, Jordan is bad at writing proposals for the Manifund audience" and against "Jordan is mildly manipulating me with his powers of persuasion."

  3. There are good answers to all of my concerns about ChinaTalk having received so little funding support.

    1. Philanthropists are scared to touch China, in part because of lack of expertise and in part for political reasons.

    2. Advertisers can be nervous for similar reasons.

    3. EAs in particular are scared to touch anything vaguely shaped like an AI race. But there is reason to think this might be silly in the China case. More importantly, Jordan is a force pushing for a more transparent information environment, not for racing.

    4. Jordan only recently started his funding push. Previously he was employed or looking for jobs. And it wasn't clear whether a small media business like ChinaTalk should be a philanthropy-backed entity; Jordan was hoping to support this work through subscriptions only.

    5. The private list of senior US officials is indeed impressive.

    6. Fancy and influential people in the China and tech policy space are big fans.

I left the call feeling like the grant was too good to be true. So I asked two people I really trust for their views:

  1. A China expert with traditional/legible credentials, who had (years ago) expressed mild skepticism to me about Jordan's technical expertise. They were excited about this grant.

  2. A senior researcher who was the most helpful advisor for a previous competitive selection process I ran, and whose grantmaking perspective I rate highly. They were extremely excited about this grant.

Current perspective

All in all, here's my current understanding of the situation:

  1. Jordan is working in a very important and very neglected area. ChinaTalk is directly helpful for this area in straightforward ways. Less confidently, ChinaTalk might be the best bet of any media organization in this area.

  2. Jordan is increasingly interested in AI issues. His early coverage of this has been good.

  3. Jordan is a serious China-US-tech expert; reasons to be suspicious about this have largely fallen away.

  4. The reason that Jordan needs funding (buying out his time, paying for high-value operations) is straightforward.

  5. Other grantmakers are not funding Jordan for reasons that make internal sense but that don't matter with respect to my decision. (If anything, these reasons make me more excited.)

If all of the above is true, then ChinaTalk is to my eye perhaps the best grant opportunity on Manifund to date.

Finally, note that many of my confusions and reasons for excitement might also apply to other China X AI projects. I plan to seriously explore this further, and have some early leads from Jeffrey Ding as well as from Jordan. By default, I will take low excitement about other projects in this space as a reason to be even more excited about ChinaTalk (although I am cautiously optimistic that I will be able to find exciting projects).

Joel Becker

over 1 year ago

(EDIT: I see that broad-acting antivirals are out of scope for AOI. Maybe the question instead should be: why does BARDA not want to invest in this area?)

Joel Becker

over 1 year ago

@Brian-Wang thank you again Brian :)

On your latest response:

  • What are the base rates for your preferred drug candidate working? Why (and to what extent) might you want to depart from base rates in this particular case? If the preferred drug candidate fails, is plan B to try the next-best candidate? If so, what are the chances of that succeeding? If not, what is the plan, how resource-intensive might that be, etc.?

  • Am I correct to think that you would be excited about Manifund dollars coming in the form of an equity investment (in associated for-profit) rather than a grant to Panoplia?

  • Is there any way to get some of the benefits of being associated with an educational institution? (Less NIH reviewer hesitancy, cheaper skilled labor, etc.) Perhaps with academic advisors, joint projects, formal part-time associations, etc.

  • Why do you need to wait to approach institutional funders like Gates? In fact, why are you applying to EA funders rather than other funders more experienced with technical R&D work (and nonetheless excited about a project as exciting as broad-spectrum antivirals!)?

Some other questions that are coming up for me:

Joel Becker

over 1 year ago

@Brian-Wang Thank you for the reply!

The separation you've outlined between non-profit and for-profit activities is helpful. Still, I feel uncertain about the time and cost scale of activities prior to clinical development.

Do you have any pessimistic/central/optimistic guesses for the time and/or money it might take to go from your current stage to a stage at which the broad-spectrum antivirals would not need to rely on philanthropic dollars? In other words, should funders be (optimistically, not centrally) hoping that after 3-6 months you could have results positive enough for government/for-profit dollars to pour in, or is this funding more likely to cover only the first <10% of pre-clinical development activities, or something else?

Apologies for my naivety re: development timelines!

Joel Becker

over 1 year ago

@Daniel I agree that retail investors are unlikely to be sophisticated enough to look at e.g. the prices of futures contracts. But why should I expect them to be sophisticated enough to seek out + understand + trust a play-money prediction market?

And, my apologies, liquidity was a bit of a misnomer. I meant to invoke related issues like "number of market participants" and volume.

I guess it feels to me like:

  1. I'm not sure there's a serious problem here.

  2. Even if there is, I'm not sure prediction markets solve it in practice. (I've concentrated on this above, but the other concerns feel important too.)

  3. Even if they do, I'm not sure this would be highly socially valuable.

Joel Becker

over 1 year ago

I start positive about this proposal, because I have the strong impression that Brian was widely considered to be the brains behind Alvea. (The verbatim quotes would sound considerably more fawning than this.)

My main question right now is: why are you a non-profit? (And, if you would still be applying were you a for-profit: why are you applying to philanthropic funds?) I would have naively guessed that "broad-spectrum antivirals led by people with fantastic early track record" might make for a compelling pitch to the for-profit world.

Joel Becker

over 1 year ago

@Daniel but

  1. The "current information available" includes the factors you point to.

  2. I can get future price forecasts from the prices of futures contracts, right? (Cost of carry aside -- prediction markets would have their own issues, in particular liquidity.)

Joel Becker

over 1 year ago

For what it's worth, I agree with @Austin's comment.

Joel Becker

over 1 year ago

I'm pretty confused about this. Aren't asset prices (that retail investors are likely to explore) already the real-time forecast of the value of assets?

Joel Becker

over 1 year ago

@JTorrescelis thank you for your reply!

After reading, my core worry remains: constraints on Jaime and Juan's time make me nervous about the community-building benefits of RCG, and the catastrophic risk reduction projects are not compelling enough to make up for this.

I could imagine being excited about the community-building benefits even without Jaime and Juan putting lots of time into mentoring. This would probably look like hearing about collaborations beyond Epoch and ALLFED, and/or signs that other relatively senior staff were going to devote significant time to mentoring.

I am not sure what evidence would convince me that the catastrophic risk reduction benefits are competitive with other proposals. One necessary thing would be increasing the concreteness of your 95th percentile outcomes sketch. But, even then, I'm not sure that "significantly improving biological surveillance in Guatemala" would be a compelling enough result. (This is part of what leads to me emphasize community-building benefits.)

So, though I continue to be open to hearing more evidence and/or criticism of my view, I think I will not fund this project right now.

To be clear, I think this is really sad. I continue to be excited about:

  • Funding projects that other grantmakers do not have the time to get enough context on -- Spanish-speaking catastrophic risk work where I know some key organizers seems like a great example of this.

  • Spain/CDMX/LatAm as a non-US/UK hub for high-impact projects.

  • Any project that receives a large personal donation from Nuno Sempere.

Joel Becker

over 1 year ago

@Holly_Elmore Thanks for picking this up; I totally agree that you're not reflecting some "LessWrong perspective." My previous wording was clumsy. My thinking might be too. I'll have another try at pointing at this cluster of concerns.

  • I'm having a conversation with a senior US policymaker or journalist (maybe not focused on AI) 1 year from now. I say to them: "Holly did X work that attempted to encourage Y person/organization to do Z." Then they say: "who is Y?" Unfortunately, actor Y is not very influential.

    • An organizer with more experience in politics/media or without the shared presentation/ideas might have picked up on this and better targeted their efforts.

    • Perhaps Holly has targeted Y for bad taste reasons (Y is more sexy, or better-known to LW types) or for good efficiency reasons (Holly has second-hand connections at Y, perhaps borrowed from shared-memesphere-connections, which made it easier to engage with Y than an actor with similar influence and lesser connections).

  • A senior journalist engages with Holly 3 months from now. Associating the idea of a moratorium, or even Holly in particular, as coming from a particular memesphere, the journalist takes her less seriously, or seeks to contrast her more strongly with other voices, or comes to see her view as representative of the memesphere in ways that are unhelpful for others' lobbying, or [other bad outcome].

    • After writing, I'm not sure this concern makes sense. The reasoning seems pretty tortured, and the outcomes are not that bad anyway.

    • The better version of this might look more like my Conjecture comment in the next paragraph: people who Holly engages with might persistently take AIS concerns less seriously if pushing a message that they think is crazy.

Regarding Conjecture, I have the sense (I think second-hand from public Matt Clifford communications and a bunch of conversations with [redacted prominent AI safety researcher]) that policymakers get turned off very quickly when they hear the message that everyone's going to die. My big concern is that this turning off persists over time.

I agree that "the question of whether to regulate/slow/stop AI transcends many of the technical details" on the substance. I guess I have a sense that, at least pre-Hinton leaving, AI safety worries felt to many people like they came from amateurs who had misunderstood the nature of the technology, which is part of what turned them off from these concerns. Maybe I think this has an unhelpful interaction with arguing for a window-shifting policy.

After writing out the above, I think I see that my concerns -- even if they do make sense for conversations with policymakers and media elites -- might not port over to political organizing. I could probably get clearer on this if I had a clearer sense of the political organizing activities you might pursue. Could you describe these in more detail?

Joel Becker

over 1 year ago

@GavrielK (Note to others: I have a major COI with Miti.)

I think your write-up is very clear. But I am somewhat surprised that you have ~nothing negative to say, which makes me think that it could be more useful if you said more (true, salient, and) negative things.

Even if you are totally positive about the project, could you say something about how other bio opportunities you've looked at feel below this bar? I am imagining things like "cost too much relative to Manifund pool," "seem better-suited to larger funders with more context on their project," and other stuff that I think you imply in the write-up but don't say outright.

Joel Becker

over 1 year ago

I've made Marcel an offer of $2.5k. This is not necessarily the last offer I make to this project; I'm giving a smaller amount in order to get money out the door more quickly and to provide a credible signal to other regrantors.

This grant seems fairly straightforward to evaluate. On the one hand,

  • The early project looks great. I and others are already deriving value from it.

  • The way in which this project would contribute to "raising the sanity waterline" is clear (at least, on its own terms; see my distrust of this kind of thing in general below).

  • Trusted members of my network are excited about the project.

  • Marcel's answer to my question about scaled-down budget seems sensible. It feels like this project can absorb only partial funding fairly well.

On the other hand,

  • I'm somewhat distrustful of forecasting/IIDM projects by default, because the connection between the 95th percentile version of these projects and improved outcomes has often been unclear to me. (I like Linch's post as a counterweight.) I feel this for Marcel's project too.

  • I am more excited about some other projects, so definitely don't want to go all-in on this.

Overall, I'm happy to give partial funding to this project, on the substance and as a signal to others.

Joel Becker

over 1 year ago

@JordanSchneider thanks Jordan.

Those all sound like potentially cool activities. One big question I have is: what are the ChinaTalk theories of change, and should I expect some of these to be significantly more impactful than others? (I am asking for information, not asking you to change. I don't want to ruin the ChinaTalk magic by suggesting a change in focus!)

Here are some goals you might have in mind:

  • Influence policymakers to take specific actions (perhaps by influencing their staffers, people influencing their staffers, etc.).

  • Promote a better information environment (with more cooperative US-China vibes, or with safety-promoting mutual understanding, or something else).

  • Develop your own expertise so that you can [do something in future].

  • Mentor a new generation of China/China+tech specialists.

(I'll note that three of your five suggestions above sound to me like "mentor a new generation", not "enrich public discourse." I feel a little confused about this -- the impression I get from listening to the show is that you're especially concerned about the former, but your application is focused on the latter.)

With the possible goals in mind, here are some more focused questions that it would be great to get your take on:

  • Can you say more about your "150 calls with students and early career professionals"? How do you think about the impact of these calls?

  • What other mentorship activities have you done to date? What is the track record of these activities?

  • How do you think your past or future impact via mentorship activities compares with your impact via discourse-enriching activities?

  • What would it look like for ChinaTalk to optimize more strongly for one of the goals above? (Or other goals as you see them.)

  • Can you tell me about (near-miss, actual, or future) concrete policy wins, career changes, and other-good-things that ChinaTalk might be counterfactually responsible for? (Extra-low pressure for this question. I understand that you are doing something broad and shouldn't waste time tracking downstream outcomes in detail.)

joel_bkr avatar

Joel Becker

over 1 year ago

I made Holly an offer of $2.5k yesterday. This won't necessarily be the last offer I make to this project; I'm giving a smaller amount in order to get money out the door more quickly and to provide a credible signal to other regrantors.

Here's my reasoning about this grant as of the offer:

1) Overall, I feel excited about this proposal on the meta level.

  • As per my profile, I want to make offers to "grants that might otherwise fall through the gaps. (E.g. [...] politically challenging for other funders [...].)" My impression is that this proposal is a great example of that.

  • Also as per my profile, I want to "[o]ccasionally make small regrants as credible signals of interest rather than endorsements. (To improve [...] funder-chicken dynamics.)" I think that this application has some funder-chicken going on -- regrantors seem to be positive, but it's a slightly nervy proposal to give to. (Maybe that's for good reason -- there's more downside potential here -- but I don't feel like that's what's driving concerns, since I (and others) might expect any significant downside to come after the period in which Holly tests her fit.)

  • I often see Holly (online) say interesting and reasonable things that go against her crowd. (E.g. XYZ.) I'm interested in what she'll have to say about AI.

  • I have the weak (positive) impression that Harvard EA became noticeably less well-organized after Holly left, and an even weaker (positive) impression that this was due to Holly being a great organizer (rather than the pandemic, or Holly's failure to enable a strong successor, or something else).

  • Unfortunately, unlike for most other grants I will make, I don't think I'll be able to get helpful information from my network about the attractiveness of the grant.

  • I am uncomfortable with how in-group this grant feels. Holly and I have only briefly overlapped professionally; there's definitely no COI. But "fund an OEB PhD to advocate for an AI moratorium" feels like something I'd be more dismissive of if I didn't know that Holly was so deep into our shared memesphere. In some ways this situation feels appropriate on the substance -- sharing ideas and professional connections with Holly gives me positive context. In other ways it feels like my judgement might be compromised. (See my last object-level point below.)

2) I feel confused about how I should value this proposal on the object level.

  • I feel somewhat uncomfortable with the role that Conjecture plays in AI discourse. The main thing I'm concerned about is something like "important decision-makers get turned off after hearing unpersuasive and possibly-also-misleading messaging." Some of this is substance, some of this is style. Advocating for a moratorium sounds like shared substance. This makes me nervous about Holly's advocacy.

  • On the other hand, the bull case for Conjecture's advocacy is something like "Overton window-shifting." I am skeptical of this case, but people I trust are more excited about it. I think Holly is better-placed to take advantage of this case in some ways (e.g. more relatable and careful speaker) and worse-placed in other ways (e.g. doesn't have the ML credibility that Connor has through Eleuther).

  • Related to my final meta-level concern: I have the sense that the LessWrong memesphere is becoming less and less relevant to how AI goes, and so am decreasingly excited about proposals that seem to speak to or on behalf of that memesphere.

Overall:

  • I've thought about this grant for too long without making much progress.

  • Seems like the most productive thing to do is to provide early support to Holly and a credible signal to other regrantors.

  • Holly and/or other grantmakers might be more likely to provide me with information that helps me think about whether to give more.

joel_bkr avatar

Joel Becker

over 1 year ago

Hello Jorge and the rest of the Riesgos Catastróficos Globales team!

Here are some early impressions that I have about your proposal:

  • I am pretty excited about the senior staff. I have worked a little with Juan, and found him to be smart, clear-thinking, resistant to group-think, and focused. My interactions with Jaime are more personal/less professional, but I am impressed with Epoch.

  • The projects all seem interesting and reasonable, with fairly clear theories of change. It's not difficult for me to imagine these projects leading to valuable policy changes in the targeted countries (nor is it difficult to imagine these changes spreading elsewhere).

  • That said, when I ask myself "to what degree does the 95th percentile version of these projects improve catastrophic risk outcomes?", I notice that I don't feel like the projects tackle what I understand to be the most important bottlenecks in e.g. AI and bio.

  • But perhaps I shouldn't be thinking of the projects as providing the primary path to impact. Two alternative paths to impact might go through mentoring junior staff and Spanish-speaking movement building. I feel early optimism about the first of these -- Jaime has a track record here (Juan might too, I'm just less familiar). I feel more confused about how to think about the value of these kinds of projects for movement building, and how to think about the value of successful movement building.

With the above in mind, some questions for you:

  • Which paths to impact do you see as most important? What does the 95th percentile version look like?

  • How do you currently see RCG tying in with mentorship and/or movement-building goals? (Are Jaime and Juan devoting time to mentorship, what will opportunities look like for getting involved, what's the profile of person who engages with your outputs and might be interested in contributing themselves, etc.)

  • How might RCG look if it were optimizing for mentorship and/or movement-building?

  • Any other reactions to my impressions, places you think I'm mistaken, etc.?

joel_bkr avatar

Joel Becker

over 1 year ago

Hi Johnny! Many congratulations on being approved for a grant from EV. Could I ask how that might change your ask for funding here?

joel_bkr avatar

Joel Becker

over 1 year ago

Hi Jordan -- long time listener here! Thank you for posting this.

I'm wondering what scaled-down budgets for this would look like. I am guessing: with anything below $115k you would not hire the fellow, and funding between $0 and $115k would be used to buy back some proportion of your consulting time?

joel_bkr avatar

Joel Becker

over 1 year ago

Hello Marcel! I've been enjoying The Base Rate Times' early coverage -- good work!

Could I ask if you have a sense of what a scaled-down budget would look like, given the large discrepancy between what you require to proceed and your funding goal? What would the breakdown look like at $10k, $35k, or $100k?

Thank you in advance!

Transactions

For | Date | Type | Amount
Estimating annual burden of airborne disease (last mile to MVP) | 5 months ago | project donation | +200
The Base Rate Times | 12 months ago | project donation | 2500
Support for Deep Coverage of China and AI | about 1 year ago | project donation | 900
Explainer and analysis of CNCERT/CC (国家互联网应急中心) | about 1 year ago | project donation | 1500
<8aa331b7-3602-4001-9bc6-2b71b1c8ddd1> | about 1 year ago | profile donation | 1500
<8aa331b7-3602-4001-9bc6-2b71b1c8ddd1> | about 1 year ago | profile donation | +1500
Support for Deep Coverage of China and AI | about 1 year ago | project donation | 650
Make ALERT happen | about 1 year ago | project donation | 950
Mapping neuroscience and mechanistic interpretability | about 1 year ago | project donation | 1750
Support for Deep Coverage of China and AI | about 1 year ago | project donation | 1000
Make ALERT happen | about 1 year ago | project donation | 2050
Manifund Bank | about 1 year ago | withdraw | 8000
Make ALERT happen | about 1 year ago | project donation | 5000
Good Ancestors Policy expenses | about 1 year ago | project donation | 10000
Estimating annual burden of airborne disease (last mile to MVP) | about 1 year ago | project donation | +3600
Estimating annual burden of airborne disease (last mile to MVP) | about 1 year ago | project donation | +1000
Estimating annual burden of airborne disease (last mile to MVP) | about 1 year ago | project donation | +3400
Support for Deep Coverage of China and AI | about 1 year ago | project donation | 5000
<8c5d3152-ffd8-4d0e-b447-95a31f51f9d3> | about 1 year ago | profile donation | +1000
Empirical research into AI consciousness and moral patienthood | about 1 year ago | project donation | 7200
Support for Deep Coverage of China and AI | over 1 year ago | project donation | 10000
Holly Elmore organizing people for a frontier AI moratorium | over 1 year ago | project donation | 2500
Manifund Bank | over 1 year ago | deposit | +50000