Love to see this grant! Would be excited to contribute to an extension of this grant if the first 5 weeks seem promising to Rob.
Empirical research into AI consciousness and moral patienthood
Robert Long
Project summary
5-week salary to further empirical research into AI consciousness and related issues bearing on potential AI moral patienthood.
What are this project's goals and how can they be achieved?
Write a strategy doc for research on AI consciousness (and related issues relevant to AI moral patienthood), and elicit feedback on it.
Spend more time on empirical research projects about AI consciousness, and (as before) related issues relevant to AI moral patienthood; henceforth, I'll just say "AI consciousness" as shorthand.
How will this funding be used?
Buying out time to work on the project goals that would otherwise be spent on public-facing work (e.g. speaking to journalists, writing magazine articles, appearing on podcasts) and on applying for funding and jobs.
Who is on the team and what's their track record on similar projects?
The salary is for Rob Long, an expert on AI consciousness. See Rob's newly-released report on AI consciousness here. Rob just completed the philosophy fellowship at the Center for AI Safety, and before that he worked on these issues at the Future of Humanity Institute as the head of the Digital Minds Research Group. He has a PhD in philosophy from NYU, supervised by David Chalmers.
What are the most likely causes and outcomes if this project fails? (premortem)
Some personal (e.g. family or health) issue taking up Rob's time.
Rob failing to protect his time from being taken up by other work priorities.
Failing to properly scope the project goals.
What other funding is this person or project getting?
None.
Joel Becker
about 1 year ago
Main points in favor of this grant
I have been trying to nudge Rob in this direction since earlier this year.
Earlier this year I was involved in a startling conversation. Rob Long had been speaking about the chances that we will see conscious AIs in the coming years. (And I had started to grok this possibility.) Now, he was talking about research collaborations he might aim for in future. Rob had empirical projects in mind that could only be done with access to frontier models. Should he bug a colleague-of-a-colleague about working at [top AI lab X]? Should he ask [collaborator Y at top AI lab Z] about the possibilities at his employer? Rob's conclusion was: not right now. Rob already had his plate full with other work, the request might be annoying, and, besides, a similar request to similar people had been declined-ish a couple of months ago.
This situation struck me as preposterous. Here is one of the world's top experts on AI consciousness, claiming a nerve-wracking chance of AI consciousness in the not-too-distant future, with fairly strong professional links at top AI labs and ~shovel-ready ideas for empirical projects, preparing a not-terribly-costly ask (give me a work desk, ~0 engineering time, and model access to work on these research questions)... and he is unsure whether he should make the ask?!
It seemed to me that the right question to ask was more like "should I try to start a research group as soon as possible?" (Of course there are many reasons why starting a research group might be a bad idea. But, even if that were the case, Rob should at the very least be asking to work at places that would enable him to work on his empirical projects.)
I want Rob to move the needle on empirical AI consciousness projects harder and faster. In the short term (see below), this means doing less "public-facing work and thinking about his professional opportunities," and more "thinking through a theory of change for AI consciousness research, spending more time on empirical research with existing collaborators (e.g. Ethan Perez), and pushing for ways he can continue this research in the near future."
Donor's main reservations
First, I don't think Rob needs funding in some sense. But I'm not super concerned about this. People should be paid for important work and, besides, I'm partly trying to set up a good incentive environment for future grantees.
Second, I think that I can only counterfactually change a fraction of Rob's professional activities. Currently, his work falls under the following buckets:
Co-authoring a paper with Ethan Perez,
Co-authoring a paper with Jeff Sebo,
Responding to media and podcast requests about his recent paper, and other writing related to that paper, and
Job search stuff, applying for funding.
Bucket (1) is the sort of work that I want Rob to be doing more of: activities that directly move the needle on empirical work in his area of expertise.
I instinctively feel less excited about bucket (2), because this paper will not involve empirical AI consciousness research. But I don't want to impose on Rob's pre-existing commitment to this project. Also, the issues of the paper have some overlap with writing a strategy doc. (Though this overlap should not be overstated, as academic work is optimized for different things than a strategy document.)
Bucket (3) I think Rob should be doing less of. The public-facing work mentioned above does not obviously move the needle on empirical work, and to the extent it does (e.g. indirectly via field-building or career capital), I would feel better if Rob undertook this work after having reflected more on his theory of change for AI consciousness research, rather than as a natural consequence of the release of his recent paper. And, unlike for bucket (2), giving up on some bucket (3) commitments feels low-downside: Rob is not going to be a less interesting podcast guest in 1 year's time!
Bucket (4) feels like a waste of time that I want Rob to avoid.
My understanding is that buckets (3) and (4) add up to a bit less than half of Rob's time at the moment.
Third, empirical projects in AI consciousness feel like a tricky area where I am extremely out of my depth. I am strongly relying on Rob being a "reasonable expert who won't make dumb decisions that make me regret this grant." That said, I feel very good about relying on Rob in this way.
Process for deciding amount
Time buckets (3) and (4) add up to 20 hrs/wk * 5 weeks = 100 hours. Rounding up to 120 hours (for possible underestimation, and professional expenses), at $60/hour, I will provide $7,200 in funding. I'm leaving up to $12,000 as a funding goal, in case anyone wants to fund the remainder of Rob's time during the 5 weeks.
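For anyone who wants to sanity-check the arithmetic, here is a minimal sketch of the calculation above (all figures are the ones stated in this section; the 20-hour buffer is the round-up for underestimation and professional expenses):

```python
# Sketch of the grant-amount arithmetic described above.
hours_per_week = 20   # estimated weekly time in buckets (3) and (4)
weeks = 5             # length of the funded period
rate = 60             # USD per hour

base_hours = hours_per_week * weeks   # 100 hours
padded_hours = base_hours + 20        # rounded up to 120 hours

grant_amount = padded_hours * rate    # 120 * 60 = 7,200
print(f"${grant_amount:,}")           # $7,200
```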
Conflicts of interest
I was housemates with Rob for a couple of months in early 2023, which is how I found out about this grant.