Nuño Sempere

@NunoSempere

regrantor

Researcher & forecaster

https://nunosempere.com

$8,001 total balance
$5,301 charity balance
$2,700 cash balance

$0 in pending offers

About Me

Cofounder and Head of Foresight at [Sentinel](https://sentinel-team.org/).


Below are my notes from a previous regranting program:

I don't yet know what I will do with this money. Some threads that I am considering:

  • Grants whose appeal other funding sources can't understand.

  • Thiel-style funding: Grants to formidable people outside the EA community for doing things that they are intrinsically motivated to do and which might have a large positive effect on the world.

  • Targeted grants in the forecasting sphere, particularly around more experimentation.

  • Giving a large chunk of it to Riesgos Catastróficos Globales (https://riesgoscatastroficosglobales.com/) in particular.

  • Bets of the form "I am quite skeptical that you would do [some difficult thing], but if you do, happy for you to take my money, and otherwise I will take yours".

  • Bounties: like the above, but less adversarial, because you do get the amount if you succeed, but don't lose anything if you don't.

  • Non-AI longtermism.

  • Grants to the Spanish- and German-speaking communities.

I am also considering doing things a bit differently from what the current EA ecosystem does, just for the information value. For example:

  • Giving feedback at depth on applications that people pitch me on

    • The rationale is that this feedback could improve people's later career paths. I think that other funding orgs don't do this because they get overwhelmed with applications. But I'm not overwhelmed at the moment!

  • Putting bounties on people referring applications

  • Using Manifold prediction markets on the success of grants as a factor for evaluation

    • Requires grantees willing to be more transparent, though

That said, I'm generally seeking to do "the optimal thing", so if I get some opportunity that I think is excellent I'll take it, even if it doesn't fall into the above buckets.

Also, I guess that $50k is not that large an amount, so I'm either going to have to be fairly strategic, or get more money :)

As for myself, I'm maybe best known for starting Samotsvety Forecasting (samotsvety.org/), an excellent forecasting team; for being a prolific EA Forum poster (https://forum.effectivealtruism.org/users/nunosempere?sortedBy=top), though I've since migrated to nunosempere.com/blog; or for my work at the Quantified Uncertainty Research Institute on topics of estimation and evaluation.

Projects

Outgoing donations

Comments


Nuño Sempere

4 months ago

This continues to run; you can keep track at https://blog.sentinel-team.org. We also have a weekly private project-update mailing list, to which I'm happy to add people with whom I have some rapport.

One important development has been getting a cofounder, Rai Sur (rai.dev) to complement my skillset; he's been working out great.


Nuño Sempere

4 months ago

I like the varied and different tools. I'm a bit worried about the minimum funding bar not being hit.


Nuño Sempere

4 months ago

Congrats on getting to alpha


Nuño Sempere

7 months ago

Progress update

What progress have you made since your last update?

In short, the project has overall been going well. The idea was to have two components, the first of which was a foresight team that could raise an alarm if something happens. This foresight team is going great; I have three very obsessive, very competent forecasters, in addition to myself, and some tooling to aid them.

The emergency response team has also been going well, though less so. It exists, it has some competent people, and we had a trial run with the Iranian attacks on Israel.

But in general I just feel much better about our ability to, say, detect a Chinese invasion of Taiwan two weeks, or even a few days, before it happens than I feel about our ability to do anything about it.

What are your next steps?

Some steps on the horizon:

  1. Improve emergency response team.

  2. Integrate more info sources.

  3. Put out analytical pieces sharing lessons learnt.

  4. Reach out to potential collaborators and similar projects.

  5. Consider finding more funding.

Is there anything others could help you with?

  1. Introductions to potential emergency response team members. For details, see https://sentinel-team.org/emergency_response_team/

  2. Mentorship seems like it would be super useful to me. Are you one or two levels above me in life; have you set up something cool and want to share pointers? I'd be grateful.

  3. I'm currently conflicted about funding. I'd appreciate help either with acquiring more, or with deciding that it's a distraction.

    • Funding is not the bottleneck in the, say, $5k to $10k range, but funding in the $100k to $5M range would allow me to make this project more awesome.

    • I'm thinking that I prefer a smaller project that is sustainable ~forever over a larger project that lives or dies by [large funder]'s word. But is this a good way to think about it? And even if it is, should I instead attempt to build something that shines twice as bright but lasts half as long?

    • I'm procrastinating on applying to the SFF grant round. Partly this is because I find the application baroque. Help, or just coworking on it, would be appreciated.

    • On the other hand, this project is sustainable at the current spend, so looking for more funding feels like a distraction.

    • I'm currently ~not really paying myself. I'm probably fine with this until the end of this year, though. Is this a good move?

Thoughts on Manifund

I am very grateful to Manifund.

  • Writing the project proposal and getting early funding was important for coordinating between people interested in supporting the project.

  • Getting early funding from peers, from people whose respect I cherished, was important for me psychologically. It made me more excited. It was a hard-to-fake signal of promisingness.

  • Early funding has been useful to not have money be a bottleneck.


Nuño Sempere

about 1 year ago

I continue to be excited about RCG and its role in the EA Spain/LatAm communities.

I was waiting until the end of the year to see if I found more promising options. I was considering APART (https://manifund.org/projects/help-apart-expand-global-ai-safety-research), but I don't think I'll have time to evaluate it in more depth; still, I've tentatively reserved some of my funds for it.


Nuño Sempere

about 1 year ago

I guess that another way of expressing the above might be that this seems potentially good, but given the large amount of funding it is asking for, it feels like someone should evaluate this in-depth, rather than casually?


Nuño Sempere

about 1 year ago

This looks shiny to me. I am considering funding it for a small amount.

Pros:

  • Successes and accomplishments seem valuable

  • Proxies look good

  • Writeup seems thoughtful

Cons:

  • I don't understand why other people haven't funded this yet

  • Maybe this application is exaggerating stuff?

  • Maybe the organization adds another step in the chain to impact, and it would be more efficient to fund individual people instead?

  • Maybe the biggest one: how do I know the success is counterfactual? Say that someone participated in a hackathon/fellowship/etc, and then later got a research position in some Oxford lab. How do I know that the person wouldn't have gotten something similarly impressive in the absence of your organization?


Nuño Sempere

about 1 year ago

I thought bundling was a neat idea. Contra other comments, I don't think this would only be valuable if you also solved discounting. And discounting could maybe be achieved by the platform offering to match your returns (on resolved or exited markets).


Nuño Sempere

about 1 year ago

Main points in favor of this grant

  • Project has some value, may have influenced organizations like Charity Entrepreneurship

  • Project has some learning value as well

Donor's main reservations

  • None.

Process for deciding amount

  • Negotiation with project lead based on expected number of hours.

Conflicts of interest

  • None


Nuño Sempere

about 1 year ago

I think this project is my bar for funding. If I don't find other projects I'm as excited by, I'm planning to donate my remaining balance to it.


Nuño Sempere

about 1 year ago

I feel that the project could be valuable, and I hope Marcel will be a bit more ambitious and have a bit more runway and leeway. I feel there could be room for more funding, but I'd want some specific commitments in exchange.


Nuño Sempere

about 1 year ago

@vandemonian Are you currently constrained by funding? Do you have the capacity to put in more effort if you get more funding?


Nuño Sempere

about 1 year ago

Funding this. I like the lumenator part, but I particularly like the more ambitious life trajectory point.

On your application, you mention:

> returning the money left if I decided that this was not a good idea anymore

Please consider not doing this; rather, please either pivot to a better opportunity or keep it until a good opportunity arises.


Nuño Sempere

about 1 year ago

Overall I don't really understand the biosecurity ecosystem or how this would fit in, so I'm thinking I'm probably a bad funder here. Still, some questions:

  • Do you already have some decision-makers who could use these estimates to make different decisions?

  • How valuable do you think that this project is without the long covid estimate?

  • Who is actually doing this work? Vivian and Richard, or Joel and Aron?

  • Why are you doing this $3.6k at a time, rather than setting up some larger project with existing biosecurity grantmakers?


Nuño Sempere

over 1 year ago

Could you say a bit more about why this beats your counterfactual?


Nuño Sempere

over 1 year ago

I have too many conflicts of interest to fund this myself, but here are some thoughts:

I like thinking of Nathan's work in terms of the running theme of helping communities arrive at better beliefs, collectively. And figuring out how to make that happen.

On the value of that line of work:

- I have a pretty strong aversion to doing that work myself. I think that it's difficult to do and requires a bunch of finesse and patience that I lack.

- I buy that it's potentially very valuable. Otherwise, you end up with a Cassandra situation, where those who have the best models can't communicate them to others. Or you get top-down decisions, where a small group arrives at an opinion and transmits it from on high. Or you get various more complex problems, where different people in a community have different perspectives on a topic and they don't get integrated well.

- I think a bottleneck in my previous job, at the Quantified Uncertainty Research Institute, was not taking this social dimension into account and putting too much emphasis on technical aspects.

One thing Nathan didn't mention is that estimaker, viewpoints, and his podcast can feed into each other: e.g., he has interviewed a bunch of people and gotten them to make quantified models about AI using estimaker (Katja Grace: https://www.youtube.com/watch?v=Zum2QTaByeo&list=PLAA8NhPG-VO_PnBm3EkxGYObLIMs4r2wZ&index=8, Rohit Krishnan: https://www.youtube.com/watch?v=cqCYMgEnP7E&list=PLAA8NhPG-VO_PnBm3EkxGYObLIMs4r2wZ&index=10, Garett Jones: https://www.youtube.com/watch?v=FSM94rmJUAU&list=PLAA8NhPG-VO_PnBm3EkxGYObLIMs4r2wZ&index=4, Aditya Prasad: https://www.youtube.com/watch?v=rwTb7VgSZKU&list=PLAA8NhPG-VO_PnBm3EkxGYObLIMs4r2wZ&index=6). This plausibly seems like a better way forward than the MIRI conversations (https://www.lesswrong.com/s/n945eovrA3oDueqtq).

Generally, you could imagine an interesting loop: viewpoint elicitation surfaces disagreements => representatives of each faction make quantified models => some process explains the quantified models to a public => you do an adversarial collaboration on the quantified models, parametrizing unresolvable disagreements so that members of the public can input their values but otherwise reuse the model.

I see reason to be excited about epistemic social technology like that, and about having someone like Nathan figure things out in this space.


Nuño Sempere

over 1 year ago

I think that RCG's object-level work is somewhat valuable, and also that they could greatly contribute to making the Spanish and Latin-American EA community stronger. I think one could make an argument that this doesn't exceed some funding bar, but ultimately that argument doesn't go through.

Transactions

| For | Date | Type | Amount |
|---|---|---|---|
| Fund Sentinel for Q1-2025 | 4 days ago | project donation | +1000 |
| Fund Sentinel for Q1-2025 | 12 days ago | project donation | +25 |
| Fund Sentinel for Q1-2025 | 15 days ago | project donation | +200 |
| Fund Sentinel for Q1-2025 | 25 days ago | project donation | +500 |
| Fund Sentinel for Q1-2025 | 25 days ago | project donation | +400 |
| Fund Sentinel for Q1-2025 | 27 days ago | project donation | +100 |
| Fund Sentinel for Q1-2025 | 27 days ago | project donation | +25 |
| Fund Sentinel for Q1-2025 | 30 days ago | project donation | +100 |
| Fund Sentinel for Q1-2025 | about 1 month ago | project donation | +250 |
| Fund Sentinel for Q1-2025 | about 1 month ago | project donation | +100 |
| Manifund Bank | about 1 month ago | withdraw | -24910 |
| Fund Sentinel for Q1-2025 | about 1 month ago | project donation | +2000 |
| Fund Sentinel for Q1-2025 | about 1 month ago | project donation | +800 |
| Fund Sentinel for Q1-2025 | about 2 months ago | project donation | +50 |
| Fund Sentinel for Q1-2025 | about 2 months ago | project donation | +100 |
| Fund Sentinel for Q1-2025 | 2 months ago | project donation | +100 |
| Fund Sentinel for Q1-2025 | 2 months ago | project donation | +350 |
| `<89b29643-1793-4dec-8713-59b3c13edb86>` | 2 months ago | profile donation | +100 |
| Fund Sentinel for Q1-2025 | 2 months ago | project donation | +15 |
| Fund Sentinel for Q1-2025 | 2 months ago | project donation | +5000 |
| Fund Sentinel for Q1-2025 | 2 months ago | project donation | +29 |
| Fund Sentinel for Q1-2025 | 2 months ago | project donation | +500 |
| Fund Sentinel for Q1-2025 | 2 months ago | project donation | +10000 |
| Fund Sentinel for Q1-2025 | 2 months ago | project donation | +200 |
| Future-Proofing Forecasting: Easy Open-Source Solution | 2 months ago | project donation | -50 |
| Fund Sentinel for Q1-2025 | 3 months ago | project donation | +100 |
| Fund Sentinel for Q1-2025 | 3 months ago | project donation | -50 |
| Fund Sentinel for Q1-2025 | 3 months ago | project donation | +50 |
| Fund Sentinel for Q1-2025 | 3 months ago | project donation | +1000 |
| Fund Sentinel for Q1-2025 | 3 months ago | project donation | +100 |
| Fund Sentinel for Q1-2025 | 3 months ago | project donation | +200 |
| Fund Sentinel for Q1-2025 | 3 months ago | project donation | +500 |
| Fund Sentinel for Q1-2025 | 3 months ago | project donation | +1000 |
| Fund Sentinel for Q1-2025 | 3 months ago | project donation | +500 |
| Fund Sentinel for Q1-2025 | 3 months ago | project donation | +200 |
| Fund Sentinel for Q1-2025 | 3 months ago | project donation | +500 |
| Fund Sentinel for Q1-2025 | 3 months ago | project donation | +100 |
| Fund Sentinel for Q1-2025 | 3 months ago | project donation | +1000 |
| Play money prediction markets | 4 months ago | project donation | -50 |
| CEEALAR | 4 months ago | project donation | -350 |
| Make ALERT happen | 4 months ago | project donation | +216 |
| Make ALERT happen | 5 months ago | project donation | +100 |
| Make ALERT happen | 5 months ago | project donation | +50 |
| Make ALERT happen | 5 months ago | project donation | +100 |
| Manifund Bank | 5 months ago | deposit | +700 |
| Manifund Bank | 8 months ago | deposit | +5001 |
| Manifund Bank | 9 months ago | return bank funds | -10000 |
| The Base Rate Times | 12 months ago | project donation | -1500 |
| Support Riesgos Catastroficos Globales | about 1 year ago | project donation | -12500 |
| Manifund Bank | about 1 year ago | withdraw | -18000 |
| Make ALERT happen | about 1 year ago | project donation | +950 |
| Make ALERT happen | about 1 year ago | project donation | +2050 |
| Update Big List of Cause Candidates | about 1 year ago | project donation | -1000 |
| Make ALERT happen | about 1 year ago | project donation | +5000 |
| Make ALERT happen | about 1 year ago | project donation | +5000 |
| Make ALERT happen | about 1 year ago | project donation | +5000 |
| A Lumenator Company, or: A More Ambitious Life Trajectory | about 1 year ago | project donation | -5000 |
| Support Riesgos Catastroficos Globales | over 1 year ago | project donation | -20000 |
| Manifund Bank | over 1 year ago | deposit | +50000 |