Marcus Abramovitch

@MarcusAbramovitch

regrantor

Effective altruist, earning to give by running a crypto fund. Very concerned with animal welfare and longtermism and their intersection. Ex-poker player and chemistry PhD student.

https://www.linkedin.com/in/marcus-abramovitch-b01955120/

-$989.83 total balance
-$989.83 charity balance
$0 cash balance

$0 in pending offers

About Me

Currently I'm a Managing Partner at AltX, an EA-aligned crypto trading hedge fund. Previously, I've been a professional poker player and chemistry PhD student. I'm also very good at various strategy games and am a top trader on Manifold in my spare time. I'm quite involved in the EA community and also advise some projects in the crypto space. I also run the Highly Speculative EA Capital Accumulation FB group and EA/LW Investing Discord.

Compared to most EAs, I'm fairly concerned about animal welfare, particularly how we treat animals in the long term and mitigating any value lock-in that causes other sentient life to suffer unnecessarily and not flourish as much as it could. This means I am concerned about a potential rise in insect farming (if insects are sentient) or an expansion in humanity's use of animals leading to their suffering. I'm similarly concerned about humanity not living up to its potential, and while I do think that AI currently poses the most existential risk, I am more concerned than most EAs about biological risks, particularly engineered pandemics and bioweapons, as well as very bad nuclear scenarios.

Some things I consider important when it comes to grants I would make:

-I think EA needs to get a lot more ambitious and entrepreneurial when it comes to solving the problems it cares about. I also think too many EAs inflate the possibility of downside risks, leading to EA being far too risk-averse in its grantmaking on many issues.

-I think problems are better solved by working directly on the core problem than by armchair philosophizing about it for a long time, since you get new information as you work directly on the problem.

-I think very smart people, so long as they are value-aligned and have proper incentives, are your best bet for solving problems, rather than just "a good starting idea". I think I am a very good judge of people.

-I care a lot about incentive alignment and am wary of misaligned incentives.

This means I am far more interested in funding direct work to detect deception in AI systems than someone who wants to skill up to do AI safety research, or a meta-org to teach independent alignment researchers about epistemics or promote AI governance. I would rather fund a group building refuges on islands than someone who wants to do research on their laptop for a year. It also means that if someone in my network is a smart, capable, value-aligned person, I am willing to give them a grant even though I might not have sufficient knowledge to adequately assess their progress or the research behind their idea.

I'm happy to accept a restriction on what you want the money to go to if you decide to regrant to me (e.g., restricting the grant to a cause area or otherwise).

Outgoing donations

Investments

Manifold: Live! - bought $821 @ $2.49K valuation
Trading assistant bot (Remind Me) - bought $109 @ $250 valuation
Invest in the Conflux Manifold Media Empire(??) - bought $110 @ $250 valuation

Comments

Marcus Abramovitch

10 months ago

Main points in favor of this grant

  1. We just need to know this or have some idea of it (continuous work should be done here almost certainly). Hard to believe nobody has done this yet.

  2. Came strongly recommended by Peter Wildeford, whom I very much like. He preferred this to a different project I was also evaluating.

  3. Lots of spillover effects that will be valuable.

  4. People involved seem qualified and have good track records doing this work

Donor's main reservations

  1. I don't know these people; it comes somewhat secondhand, from a few recommendations by people I trust.

  2. I worry these results will be treated as too conclusive and not updated much. That is, people will have too much confidence in the results.

  3. Nothing might come of this, but that isn't a big concern to me.

Process for deciding amount

I threw the rest of my balance into this. I didn't know exactly how to spread my Manifund budget over the coming months, or how much more I would be getting, if any. This was a grant I was deciding on alongside a few other candidates. Once I decided this was the highest EV, I put in my entire remaining balance. Don't read into the odd number.

Conflicts of interest

Nothing to disclose.

Marcus Abramovitch

11 months ago

If I am not mistaken, Jesse + team have now received an additional $500k in funding from SFF to expand. This takes it off the table for further Manifund grants from me for now, but I will check in with Jesse/Stan sometime in the new year to see where they are at.

Marcus Abramovitch

11 months ago

Want to update that Joseph has been crushing it. He's made good research progress, updates Dylan and me every month, and receives good feedback. Super happy with this grant.

I have also let Joseph know that if he is in need of more funding, he should contact me and I will make sure it happens.

10/10

Marcus Abramovitch

11 months ago

I reached out to Lisa for a progress update on this:

-There really is a minimum level of funding needed to make this project viable that hasn't been reached yet, so they haven't started. The money is sitting in Lisa's account.

-She will talk with Davidad soon and decide whether they are going to make a push for more funding, pivot to something else that could be lower cost, or return the funds (expected by end of March).

I like the honesty here and hope we can get Lisa funded to do a large project (whether this one or another in an area she is interested in). I'm still very "bullish" on Lisa. I think she's someone who can and will just make things happen that should happen, without needing too much permission. I think she also has a rather unique and needed blend of technical understanding and the ability to do all the little organizing things needed to get something off the ground.

Marcus Abramovitch

11 months ago

I intend to look into this over the weekend. Will comment/post in Discord with what I come away with.

Marcus Abramovitch

12 months ago

About time I wrote up my reasoning for this grant:

Main points in favor of this grant


Kunvar has set out an ambitious research agenda for himself which would lead to some quite exciting papers. This donation will also buy a lot of mech interp per dollar spent if successful (roughly 10x more per dollar than most other work I come across, given the minimal compute and low overhead).

The main reason for funding this, though, is that I know Kunvar is very smart from many interactions I have had with him. I doubt this is as apparent to other people, so I am fairly uniquely positioned to make this grant.

Donor's main reservations


Independent alignment work without any mentorship doesn't have a fantastic track record. That said, I am expecting this to launch Kunvar into something like MATS or another program to get him some mentorship on this and he has mentioned collaborating on these projects with others.

Process for deciding amount


This was the amount needed to cover the compute and the minimum salary (which was super low). If more were needed, I could make that happen but this just gets the project off the ground.

Conflicts of interest

None.

Marcus Abramovitch

12 months ago

I put in a limit order to sell a 15% stake at an $8000 valuation.

Marcus Abramovitch

12 months ago

@Adrian Manifold is, in my opinion, one of the worst grants LTFF has made. It's a private company that already had VCs/Investors. Why does it need charitable dollars?

Marcus Abramovitch

12 months ago

I really liked this when I saw it and I flagged it as something I needed to look back at. Now that I have looked into it more in depth and see Neel's endorsement (which has worked out fantastically for my grant to Joseph Bloom), as well as Evan's endorsement, I finished off the required funding. I also met Lawrence briefly and was impressed by him but this is minor.

I also think his reasoning for doing this work is great.

Marcus Abramovitch

12 months ago

Want to separate out project #3? I'd happily be the investor

Marcus Abramovitch

12 months ago

Hi Elle, want to message me? I can also consider alternative grants. Seems to me like a 6 month loan would be a much better form of funding for this.

Either way, can you send me a reference we might have in common or your connection to the EA community/AI policy?

Marcus Abramovitch

12 months ago

This is a great idea at a great valuation. Bought remaining equity.

I am very excited to fund "high school juniors". There is no reason they can't do cool valuable projects already.

Marcus Abramovitch

12 months ago

This is very interesting. Want to set up a call and talk about it? I'll take notes (and maybe publish those with your permission).

Marcus Abramovitch

over 1 year ago

Main points in favor of this grant

Forecasting conditional policy effects and forecasting important questions are among the best uses of forecasters, in my opinion.

It's good to see forecasters' work being used for something actionable.

I want to encourage forecasters to do more of this kind of work.

I'm not aware of anyone else trying to quantify the impacts of different AI policies, which seems important.

Tolga seems quite bright and is a top superforecaster.

The grant is just very cheap compared to what it could accomplish.

I am hopeful that, if this is successful, Tolga will do a lot more of this kind of work when he graduates: making forecasts useful for AI policy.

Donor's main reservations

I think there is an order of magnitude too much funding going towards forecasting, and I am worried about contributing to this. However, this isn't the type of forecasting grant I am against (those are mainly blind funding of forecasting platforms).

I'm skeptical of how much forecasting work can really be done in areas like this, versus just guessing and putting numbers on things to make them seem more credible when those numbers can be fairly meaningless.

Process for deciding amount

The amount Tolga wanted for this was very reasonable. Everyone is working at below-market-rate salaries or for free. I didn't want there to be a tradeoff between compensating previous forecasting work and ensuring there was some funding for the next forecasting steps.

Conflicts of interest

None.


Marcus Abramovitch

over 1 year ago

Main points in favor of this grant

  1. I think there is merit in developing a breadth of interpretability approaches. If Singular Learning Theory ends up having merit, there is a wealth of knowledge from physics and chemistry that directly applies.

  2. The bang for your buck with researchers in the project is really good compared to most.

  3. I am excited about bringing Daniel Murfet, a talented researcher, into the field.

  4. I am excited about this project being a launching ground for a potential future for-profit or non-profit if this goes well.

  5. I think Jesse is great at explaining his ideas and communicating his reasoning. I think Stan is very smart and worth supporting.

Donor's main reservations

  1. I don’t know about the merits of SLT and I’m not equipped to judge.

  2. I don't know the direction they will take things if they are successful, and it doesn't seem to have been thought through yet.

Process for deciding amount

I need to save some money for 2 more grants I would like to make, but I want to show significant support for this project and get them across the $125k threshold.

Conflicts of interest

I might potentially hire Stan as a consultant for my business. Funding this probably works against me but I felt this was important to disclose.

Marcus Abramovitch

over 1 year ago

Let me know how this goes and if there are plans for an add-on.

Marcus Abramovitch

over 1 year ago

I think this is a decent giving opportunity. Will consider it if I have funds left over. Would broadly support this over funding independent researchers. This $1,400/month covers housing, work space, and some basic services (and maybe food?) for someone in London? Seems very good.

Marcus Abramovitch

over 1 year ago

I think Apollo is going to be very hard to beat for a few reasons and I would have written up the grant if Marius didn't.

  1. They are a very very talented team. I think several people involved could be working at top AI labs.

  2. They are focused on a tractable and "most important problem" of deception in AI systems.

  3. They seem well positioned to be a potential centre for an AI safety org in Europe

  4. They could grow to absorb a lot more funding, effectively. A key barrier for EA funds right now is productively spending lots of money. It seems possible that they could absorb >$20M/year in 4 years' time.

  5. I think the people here are very "alignment-pilled"

  6. I think I am a good judge of talent/character and Marius passes the bar.

  7. I have a preference that they become a non-profit vs for-profit company that sells auditing and red-teaming. I think funding in the first few years will be pivotal for this.

Reservations:

  1. They seem quite expensive on a per-person basis compared to other projects Manifund has been funding. That said, there is going to be a lot of "bang per person" here. Marius explained to me that they are competing with large AI labs and their salaries are already a significant cut. I would much rather see someone working within Apollo than doing independent research. I think we should be quite wary of funding independent researchers until orgs like Apollo are fully funded, with rare exceptions.

  2. I still don't get why SFF and OP can't just fully fund them. My best guess here is that they are terrified of seeding another "AI accelerator" or wanting to save their cash for other things and thus allow others to donate in their place.

  3. I am slightly worried that there isn't a good way to coordinate across the community on how much they should receive. From the inside, Apollo wants to raise as much as possible. I don't think that is optimal at a movement level, and it leads to them spending a lot of time fundraising so they can have more money. There probably is a funding level that Apollo shouldn't exceed in its first year, though I don't know what that number is.

I would bet that if we reviewed Apollo 1.5-2 years down the line, it will outperform a majority of grants on a per dollar basis (very hard to operationalize this though).

Marcus Abramovitch

over 1 year ago

I interviewed Lisa for this grant.

Reasons I am excited about Lisa:
-She is quite articulate, has good people/social skills, and is able to explain concepts simply.
-She is already doing some management and wants to expand here. Worth supporting, since it seems to me that there is a lack of management experience in the AI safety research space.
-She's quite smart and value-aligned.

Marcus Abramovitch

over 1 year ago

Main points in favor of this grant

When I talked with Lisa, she was clearly able to articulate why the project is a good idea. Often people struggle to do this.

Lisa is smart and talented and wants to be expanding her impact by leading projects. This seems well worth supporting.

Davidad is fairly well-known to be very insightful and proposed the project before seeing the original results.

Reviewers from Nonlinear Network gave great feedback on funding Lisa for two projects she proposed. She was most excited about this one and, with rare exceptions, when a person has two potential projects, they should do the one they are most excited about.

I think we need to get more tools in our arsenal for attacking alignment. With some early promising results, it seems very good to build out activation vector steering.

Donor's main reservations

I don’t feel I’m a good judge of whether or not this is worth doing. I think I judge talent well, but I don't have nearly enough alignment background or neurotech background to judge this. This is far more of a bet on the people than on the project. I also don't think many people would be qualified to judge the project.

It's expensive.

I somewhat worry that Lisa won't be full-time on the project and/or that this might distract her from her other work. She did say she had broad support from her current workplace to pursue this in tandem.

Process for deciding amount

The project is in discussion with Foresight to see if it's possible to do a scaled-down version that isn't as expensive. My $15k should go towards getting the ball rolling with the expectation of a few more people to get this at least to the scaled-down stage but preferably the full proposed project.

Conflicts of interest

None


Marcus Abramovitch

over 1 year ago

Main points in favor of this grant

Extremely cost-effective, impacting 4,000 shrimp per dollar per annum.

The sheer number of shrimp that are farmed is very high, and their welfare range is ~3% that of humans.

Andres is very smart, reasonable, pragmatic, caring, and data-driven.

I think it's very important to scale up charities that have demonstrated very strong impact and cost-effectiveness. Currently, on the margin, too much money is going to seeding new charities since people seem to have this desire to be "the reason" that a charity exists. Many charities fail and have little impact. This is fine. But the point of seeding charities was to scale up the ones that do really well. I'm hoping to see more of this.

Donor's main reservations

I think there's a chance that shrimp aren't sentient (can't feel pain), in which case stunning shrimp vs. suffocation is immaterial. I think this concern is dominated by the magnitude of shrimp farmed and killed every year.

Process for deciding amount

The money I was given to regrant is intended to be used for "longtermist" purposes, and it isn't right to use money given with specific intentions for other purposes. A $1 grant to get the project on the board, which I will donate to later, seems good, and it also shows the opportunity to others.

Conflicts of interest

None


Marcus Abramovitch

over 1 year ago

I saw this a couple weeks ago and thought it was a really good idea. Main reason I didn't fund it is because there are things I want to do more. Would be a real shame if the EA community couldn't step in and do this.

Marcus Abramovitch

over 1 year ago

I think with his track record so far and his endorsements, he's earned the right to go in the direction he thinks is best. Maybe it'd be better to have an org that "houses" a bunch of people who just want to work by themselves, where the org formally employs them, helps them raise funds for their projects, and maybe has some communal resources. But I don't think I'd prefer to fund that org over funding someone who is just going to do good direct work.

Marcus Abramovitch

over 1 year ago

Main points in favor of this grant

  1. Neel Nanda's top choice in the Nonlinear Network. Neel says many people want to hire him.

  2. Joseph is an official maintainer of TransformerLens (the top package for mech interp).

  3. Teaches at the ARENA program.

  4. Two really good posts on Decision Transformer Interpretability and Mech Interp Analysis of GridWorld Agent-Simulator.

  5. Work was listed in Anthropic's May 2023 update.

  6. Working on trajectory transformers is a natural progression from decision transformers.

Donor's main reservations

I wonder if he would be best hired by some other alignment team instead, since he is young and might get better mentorship working with others.

Process for deciding amount

This just should be fully funded, at least to $110,000. $25,000 (but ideally $50,000) would put him at ease for 6 months, by which time he expects to have enough output to justify further funding. I'd give more, but I have a limited budget. This is already half of my budget, but I feel quite strongly about this.

Conflicts of interest

Nothing to disclose.


Marcus Abramovitch

over 1 year ago

Let me know how this is going. Can maybe fund this.

Transactions

For | Date | Type | Amount
Manifund Bank | 7 months ago | return bank funds | 1051
Manifold: Live! | 10 months ago | user to amm trade | +29
Experiments to test EA / longtermist framings and branding | 10 months ago | project donation | 16650
Exploring novel research directions in prosaic AI alignment | 12 months ago | project donation | 4800
Exploring feature interactions in transformer LLMs through sparse autoencoders | 12 months ago | project donation | 8500
Manifund Bank | 12 months ago | mana deposit | +50
Trading assistant bot (Remind Me) | 12 months ago | user to user trade | 109
Manifold: Live! | 12 months ago | user to user trade | 850
Invest in the Conflux Manifold Media Empire(??) | about 1 year ago | user to user trade | 110
Apollo Research: Scale up interpretability & behavioral model evals research | about 1 year ago | project donation | 9999
Forecasting - AI Governance Policies | about 1 year ago | project donation | 9000
Scoping Developmental Interpretability | over 1 year ago | project donation | 10000
Manifund Bank | over 1 year ago | deposit | +50000
Joseph Bloom - Independent AI Safety Research | over 1 year ago | project donation | 25000
Activation vector steering with BCI | over 1 year ago | project donation | 15000
Manifund Bank | over 1 year ago | deposit | +50000