Chris Leong
3 days ago
I'll vouch for the quality of the AI Safety Events & Training newsletter.
I guess the main point I'd like clarity on is their plan for increasing distribution of this newsletter.
Chris Leong
3 months ago
You may want to consider applying to the Co-operative AI Foundation for funding in the future. They seem to have a more academic focus, so I don't know whether they would go for it, but there's a chance.
Chris Leong
8 months ago
This is a cool project that might help improve the conversation around these issues.
Some people might be worried about hype, but there's already so much hype that the harms are likely marginal.
You may want to consider linking people to an AI Safety resource if you think your site may get a lot of traffic. Then again, you might not, if you think that would make people more suspicious of the results.
Another option to consider would be an ad-supported model. I'm not suggesting Google AdWords, but you might be able to find an AI company to sponsor you.
Chris Leong
8 months ago
@casebash I should state my reasoning as it may encourage others to invest.
$2000 minimum is quite reasonable as a bet given your background, plus the quality of the video provided.
Video content is one of EA’s weaknesses. I also imagine this work could likely receive further funding if the first video or videos were done well, which would increase its impact.
One thing that would increase my optimism about this project would be a plan to get people from watching these videos to potentially taking action.
Chris Leong
8 months ago
@alexkhurgin I offer to purchase an impact certificate at the default price. Open to negotiating. I mostly selected the default because I’m new to this funding mechanism and I’m still a bit confused by it.
Chris Leong
9 months ago
This is actually a really cool idea which might help people form estimates and convince more people to think about these risks. One worry I always have with projects like this is in relation to maintenance and how much continual updating a project like this would require.
Chris Leong
about 1 year ago
Thanks so much for your support!
Oh, is the minimum locked once you create a post? I was tempted to move the minimum down to $700 and the ask down to $2000, but then again I can understand why you wouldn't want people to edit it after someone has made an offer as that is ripe for abuse.
In terms of why I'd adjust it: I'm trying to figure out what would actually motivate me to try to produce more of this content and not result in a bit of extra money in my pocket without any additional content production. I figure that if there's a 20% chance of a post being a hit, I'd need at least funding for a week* in order for it to be worthwhile for me to spend a full day writing up a post (as opposed to the half-day that this post took me).
In terms of the $2000 upper ask limit, I'm thinking it through as follows: if someone were able to write ten high-quality alignment posts in a year (quite beyond me at the moment, but not an inconceivable goal), that would work out to $20k, and it might be reasonable for writing such posts to be a third of their income.
(PS. I decided to do a quick browse of highly upvoted posts on the Alignment Forum. It seems that quite a high proportion of highly upvoted posts are produced by people who are already established researchers/PhD students, such that if there were a funding scheme for hits** and that scheme were aiming to avoid double-funding people, the cost would be less than it might seem.)
Anyway, would be great if I could edit the ask, but no worries if you would like it to remain the same.
* My current burn rate is less b/c I'm trying really hard to save money, but this is a rough estimate of what my natural burn rate would be.
** Couldn't be based primarily on upvotes, because that would simply result in vote manipulation and push people towards writing content optimised for upvotes.
Chris Leong
about 1 year ago
Funnily enough, I was going to reduce my ask here, but I hadn't gotten around to it yet, so now it may look like it's in response to this comment when I was going to do it anyway.
Chris Leong
about 1 year ago
You should probably write about who you are and how your participation would benefit AI Safety.
Chris Leong
about 1 year ago
Hey Felipe, I'm currently doing community building at AI Safety Australia and New Zealand, and I'm quite interested in decision theory (currently doing an adversarial collaboration on evidential decision theory with Abram Demski, a MIRI researcher). Would be keen to hear if you end up in Australia.
Chris Leong
over 1 year ago
I would be really excited to see the establishment of an AI safety lab at Oxford, as this would help establish the credibility of the field, which is one of the core problems holding alignment research back.
That said, I suspect that a proper research direction is crucial when establishing a new lab, as it's important to lead people down promising paths. I haven't evaluated their proposed directions in detail, so I would encourage anyone considering donating large amounts of money to do so themselves.
Disclaimer: Fazl and I were discussing collaborating on movement building in the past.
| For | Date | Type | Amount |
|---|---|---|---|
| Run a public online Turing Test with a variety of models and prompts | 7 months ago | user to user trade | 250 |
| Educate the public about high impact causes | 8 months ago | user to user trade | 224 |
| Manifund Bank | 9 months ago | deposit | +500 |