I think the point that LLMs, even with their current capabilities, make malicious attacks cheaper and more accessible to actors outside of government makes a lot of sense. I'd be curious to see a more expansive explanation of how you expect this to be destabilizing, how it affects your probability estimates of existential risk, and what it implies we (or more precisely, Open Philanthropy) should be doing differently.
In your opening you say you're going to address the probability of existential risk through a national security lens, and throughout the essay you talk about AI and national security (which may well be a neglected and useful perspective; I don't see much of that from the AI safety community!), but the conclusion on existential risk basically amounts to "this will be destabilizing to all of human activity", which feels a bit unsatisfying?
(I'd probably take you up on your sell offer if the conclusion addressed this more concretely.)