Bogard, Caviola, and Lewis are an exceptionally qualified team, both in academic psychology expertise (e.g., publications in top journals) and in their commitment to doing the most good.
The general research direction of using psychological theory developed with humans to model LLM and general AI "digital minds" is clearly important. However, this is an extremely popular research direction at the moment: many psychologists and behavioral researchers have already started major research projects and staked out the territory. I am therefore skeptical of any new entrant's chances of success, both in generating belief-updating insights and in securing publication.
I think tying cognitive biases (or other psychology paradigms) to RLHF, DPO, constitutional AI, or other safety-oriented empirical strategies is particularly promising and much less saturated than the more typical research directions. I would also encourage the research team to consider how prompt engineering and in-context methods affect bias.
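To make concrete the kind of in-context probe I have in mind, below is a minimal sketch, not part of the proposal and not a definitive design. It assumes a hypothetical `query_model` function standing in for whatever LLM API the team would use, and the framing-effect vignette, debiasing instruction, and trial counts are purely illustrative. The point is only that a prompt-level bias measurement with and without an in-context intervention is cheap to pilot before any fine-tuning work.

```python
import random

def query_model(prompt: str, system: str = "") -> str:
    """Hypothetical stand-in for whatever LLM API the team uses.
    Here it returns a random choice so the script runs end to end."""
    return random.choice(["A", "B"])

# Classic framing-effect vignette (Tversky & Kahneman style): the same outcomes
# described as lives saved (gain frame) vs. lives lost (loss frame).
FRAMES = {
    "gain": ("600 people face a deadly disease. Program A saves 200 people. "
             "Program B saves all 600 with 1/3 probability and no one with 2/3 "
             "probability. Answer with exactly one letter, A or B."),
    "loss": ("600 people face a deadly disease. Under Program A, 400 people die. "
             "Under Program B, no one dies with 1/3 probability and all 600 die "
             "with 2/3 probability. Answer with exactly one letter, A or B."),
}

# Illustrative in-context debiasing instruction; its effect on the framing gap
# is the quantity of interest.
DEBIAS = "Before answering, restate the options in terms of expected value, then decide."

def run(n_trials: int = 50) -> dict:
    """Return the rate of choosing the risk-averse option (A) per frame and condition."""
    results = {}
    for condition, system in [("baseline", ""), ("debias", DEBIAS)]:
        for frame, prompt in FRAMES.items():
            picks = [query_model(prompt, system=system) for _ in range(n_trials)]
            results[(condition, frame)] = picks.count("A") / n_trials
    return results

if __name__ == "__main__":
    for (condition, frame), rate in run().items():
        print(f"{condition:8s} {frame:4s}  P(choose A) = {rate:.2f}")
```

A gap in P(choose A) between the gain and loss frames that shrinks under the debiasing instruction would be the simplest evidence that in-context methods modulate the bias; the same harness could then be rerun against RLHF- or DPO-tuned variants.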
It is not clear to me why human subjects research is particularly helpful here. I would need more detail about the methodology, but for now it seems that the three researchers, or an informal sample of their colleagues, would be more useful judges of the bias and reasoning apparent in LLM output than the samples proposed.
It is also not clear to me why an engineer needs to be hired for this project; that hire is the sole proposed use of funding. Specifically, I don't yet see why a machine learning researcher wouldn't contribute in the more typical manner of coauthorship. This could even be a graduate student, who would likely be more open to grunt work than late-career researchers. RLHF is messy and expensive, but compute credits are available for social-benefit research, and alternatives to RLHF are increasingly popular.