
11th edition of AI Safety Camp

Active grant
$40,030 raised
$300,000 funding goal

Project summary

AI Safety Camp has a seven-year track record of enabling participants to try their fit, find careers and start new orgs in AI Safety. We host up-and-coming researchers outside the Bay Area and London hubs.

If this fundraiser passes…

  • $15k, we won’t run a full program, but can facilitate 10 projects.

  • $40k, we can organise the 11th edition, for 25 projects.

  • $70k, we can pay a third organiser, for 35 projects.

  • $300k, we can cover stipends for 40 projects.

What are this project's goals? How will you achieve them?

By all accounts they are the gold standard for this type of thing. Everyone says they are great, I am generally a fan of the format, I buy that this can punch way above its weight or cost. If I was going to back [a talent funnel], I’d start here.
Zvi Mowshowitz (Nov 2024)

My current work (AI Standards Lab) was originally an AISC project. Without it, I'd guess I would be full-time employed in the field at least 1 year later, and the EU standards currently close to completion would be a lot weaker. High impact/high neglectedness opportunities are fairly well positioned to be kickstarted with volunteer effort in AISC, even if some projects will fail (hits based). After some initial results during AISC, they can be funded more easily.
Ariel Gil (Jan 2025)

AI Safety Camp is part incubator and part talent funnel:

  • an incubator in that we help experienced researchers form new collaborations that can last beyond a single edition. Alumni went on to found 10 organisations.

  • a talent funnel in that we help talented newcomers learn by doing – by working on a concrete project in the field. This has led to 43 jobs in AI Safety for alumni.


The Incubator case is that AISC seeds epistemically diverse initiatives. The coming edition supports new alignment directions, control limits research, neglected legal regulations, and 'slow down AI' advocacy. Funders who are uncertain about approaches to alignment – or believe we cannot align AGI in time – may prioritise funding this program.

The Maintaining Talent Funnels case is to give some money just to sustain the program. AISC is no longer the sole program training collaborators new to the field. There are now many programs, and our community’s bottlenecks have shifted to salary funding and org management. Still, new talent will be needed, and for them we can run a cost-efficient program. Sustaining this program retains optionality – institutions are waking up to AI risks and could greatly increase funding and positions there. If AISC still exists, it can help funnel people with a security mindset into those positions. But if by then the organisers have left for new jobs, others would have to build AISC up from scratch. The cost of restarting would be higher than the cost of keeping the program running.

As a funder, you may decide that AISC is worth saving as a cost-efficient talent funnel. Or you may decide that AISC is uniquely open to supporting unconventional approaches, and that something unexpectedly valuable may come out of it.

Our program is both cost-efficient and scalable.

  • For the 10th edition, we received 405 applications (up 65%) for 32 projects (up 19%). 

  • For the 11th edition, we could scale to 40 projects, projected from recent increases in demand on the technical safety side and the stop/pause AI side.

How will this funding be used?

Funding is tight across the board. Without private donors, we cannot continue this program.

$15k: we won’t run a full program, but can facilitate 10 projects and preserve organising capabilities.

If we raise $15k, we won't run a full official edition. 

We can still commit to facilitating projects. Robert and Remmelt are already supporting projects in their respective fields of work. Robert has collaborated with other independent alignment researchers, as well as informally mentored junior researchers doing conceptual and technical research on interpretable AI. Remmelt is kickstarting projects to slow down AI (e.g. formalization work, MILD, Stop AI, inter-community calls, a film by an award-winning director).

We might each just support projects independently. Or we could (also) run an informal event where we only invite past alumni to collaborate on projects together.

We can commit to this if we are freed from needing to transition to new jobs in 2025. Then we can resume full editions when grantmakers make more funds available. With a basic income of $18k each, we can commit to starting, mentoring, and/or coordinating 10 projects. 

$40k: we can organise the 11th edition, for 25 projects.

Combined with surplus funds from past camps (conservatively estimated at $21k), this covers salaries for Robert and Remmelt of $30.5k each (roughly $61k in total).

That is enough for us to organise the 11th edition. However, since we’d lack a third organiser, we’d only commit to hosting 25 projects.

$70k: we can pay a third organiser, for 35 projects.

With funding, we are confident that we can onboard a new organiser to trial with us. They would assist Robert with evaluating technical safety proposals, and help with event ops. This gives us capacity to host 35 projects.

$300k: we can cover stipends for 40 projects.

Stipends act as a commitment device, and enable young researchers to focus on research without having to take on side-gigs. We only offer stipends to participants who indicate it would help their work. Our stipends are $1.5k per research lead and $1k per team member, plus admin fees of 9%.

We would pay out stipends in the following order:

  • To research leads (for AISC10, this is ≤$36k).

  • To team members in low-income countries (for AISC10, this is ≤$28k). 

  • To remaining team members (for AISC10, this would have been ≤$78k, if we had the funds).

The $230k extra safely covers stipends for edition 11. This amount may seem high, but it cost-efficiently supports 150+ people's work over three months. This in turn reduces the load on us organisers, allowing us to host 40 projects.
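As a rough illustration of the arithmetic – assuming one research lead per project (40 leads) and roughly 110 team members out of the 150+ participants, an illustrative split rather than an official budget: 40 × $1.5k + 110 × $1k = $170k in stipends, plus 9% admin fees ≈ $185k, which sits comfortably within the ~$230k margin.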

Who is on your team?

Remmelt – coordinator of 'Stop/Pause AI' projects:

  • Remmelt wrote about the control problem, presented here.

  • Remmelt leads a project with Anders Sandberg to formalize AGI uncontrollability, which received $305k in grants.

  • Remmelt works in diverse communities to end harmful scaling – from Stop AI, to creatives, to environmentalists. 


Robert – coordinator of 'Conceptual and Technical AI Safety Research' projects:

Linda will take a break from organising, staying on as an advisor. We can hire a third organiser to take up her tasks.

What's your track record?

AI Safety Camp is primarily a learning-by-doing training program. People get to try a role and explore directions in AI safety, by collaborating on a concrete project.

Multiple alumni have told us that AI Safety Camp was how they got started in AI Safety.

Papers that came out of the camp include:

Projects started at AI Safety Camp went on to receive a total of $1.4 million in grants:
  AISC 1: Bounded Rationality team – $30k from Paul
  AISC 3: Modelling Cooperation – $24k from CLT, $50k from SFF, $83k from SFF, $83k from SFF
  AISC 4: Survey – $5k from LTFF
  AISC 5: Pessimistic Agents – $3k from LTFF
  AISC 5: Multi-Objective Alignment – $20k from EV, $26k from LTFF
  AISC 6: LMs as Tools for Alignment – $10k from LTFF
  AISC 6: Modularity – $125k from LTFF
  AISC 7: AGI Inherent Non-Safety – $170k from SFF, $135k from SFF
  AISC 8: Policy Proposals for High-Risk AI – $10k from NL, $184k from SFF, $200k from OpenPhil, $200k from AISTOF
  AISC 9: Data Disclosure – $10k from SFFsg
  AISC 9: VAISU – $10k from LTFF

Organizations launched out of camp conversations include:

  • Arb Research, AI Safety Support, AI Standards Lab.

Alumni went on to take positions at:

  • FHI (1 job+4 scholars+2 interns), GovAI (2 jobs), Cooperative AI (1 job), Center on Long-Term Risk (1 job), Future Society (1 job), FLI (1 job), MIRI (1 intern), CHAI (2 interns), DeepMind (1 job+2 interns), OpenAI (1 job), Anthropic (1 contract), Redwood (2 jobs), Conjecture (3 jobs), EleutherAI (1 job), Apart (1 job), Aligned AI (1 job), Timaeus (2 jobs), MATS (1 job), ENAIS (1 job), Pause AI (2 jobs), Stop AI (1 founder), Leap Labs (1 founder, 1 job), Apollo (2 founders, 4 jobs), Arb (2 founders), AISS (2 founders), AISAF (2 founders), AISL (2+ founders, 1 job), ACS (2 founders), ERO (1 founder), BlueDot (1 founder).

    These are just the positions we know about. Many more alumni are engaged in AI Safety in other ways, e.g. as PhD students or independent researchers.

    We consider positions at OpenAI to be net negative and are seriously concerned about positions at other AGI labs.

For statistics of previous editions, see here.

What are the most likely causes and outcomes if this project fails?

Not receiving minimum funding:

  • Given how tight grant funding is currently, we don’t expect to be able to run an AISC edition if most funds are not covered here on Manifund.

Projects are low priority:

  • We enable researchers to pursue their interests and get ‘less wrong’. We are open to diverse projects as long as the theory of change makes sense under plausible assumptions. We may accept proposals that we don’t yet think are a priority, if research leads use feedback to refine their proposals and put the time into guiding teammates to do interesting work.

Projects enable capability work:

  • We decline such projects. Robert and Remmelt are aware and wary of infohazards.

How much money have you raised in the last 12 months, and from where?

  • $65.5k on Manifund to run our current 10th edition.

  • $7.5k from other private donors.

  • $30k from Survival and Flourishing speculation grantors, but no main grant. The feedback we got was (1) “I’m a big believer in this project and am keen for you to get a lot of support” and (2) a general explanation that SFF was swamped by ~100 projects and that funding got tighter after OpenPhil stopped funding the rationalist community. 


Marcel Mir

about 2 months ago

I have had a similarly positive experience to the ones others have shared in the comment section.

I came across AISC's 9th edition just after graduating from my LLM in Technology and the Law, where I did my thesis on the EU AI Act's transparency challenges (back when it was still a draft).

AISC was my first chance to work on AI governance. The project started as humble work on a simple technical solution to an EU AI Act requirement on dataset transparency for copyright purposes.

My role was to leverage my knowledge of EU frameworks (in particular copyright, the GDPR and European markets) to adapt the solution to EU legislation and justify its need and legal compliance.

The project grew and ended up being a submission to the AI Office stakeholder consultation last summer, gaining traction with artists' organisations. We are still in contact with people within the working groups for the Code of Practice to push for our solution.

AISC was my gateway into AI governance. My current work includes providing feedback on the Code of Practice, and I will move to Brussels in 2 months to work at the Center for Democracy and Technology.

The bottom line is that, in my opinion and based on other testimonies, AISC is very valuable in kick-starting careers in AI Safety (in my case, in governance), providing a chance for recent graduates and people shifting their careers to prove themselves and show that they can contribute to the field.

donated $100

Tristan Williams

about 2 months ago

Reading more about this, I think there's some reasonable doubt about whether stipends make sense. But barring that, it seems obvious to me that AISC should get the funding it needs to pay organizers. I participated as a research lead last camp and was able to bring my project from an idea into reality.

I likely wasn't experienced enough to have led a project in SPAR, but thankfully my application was evaluated largely on my idea and not just my experience, so AISC gave it a chance. Without AISC, I think you're missing a crucial step of taking well-read-but-not-experienced people and giving them a chance to get their foot in the door, to do a concrete project which can then be used to further evaluate them for the next step, like SPAR.

No idea what the counterfactual impact is here, but two of my team members went from "quite interested but not working on AIS day to day" to pivoting to focus on AIS. At least one of them told me that the project was a significant part of taking that step.

donated $1,000

Jay Bailey

about 2 months ago

Subjectively, a lot of people who got into alignment seemed to get into it through AI Safety Camp, and some of these people have moved on to do good work. In addition, AI Safety Camp is much cheaper to run than the other comparable options I know of, like MATS – thus I think it is, dollar for dollar, the most effective way of converting money into AI safety talent that I'm aware of. This is largely based on vibes and not explicit calculation, but either way I think it's definitely worth funding.

donated $1,000

Austin Chen

2 months ago

I'm making a small donation to signal support of AI Safety Camp. It's clear that they have a product which is broadly popular, has a long track record, and comes well-endorsed. I'm in favor of supporting folks who have historically produced good work, under the principle of retroactive funding; it feels to me that the AISC folks deserve some amount of support from the community.

I do have some hesitations around donating significantly more, though:

  • I don't have an inside view on how good AISC has been, nor do I know the organizers nor many past participants, so my decision here is mostly deferring to what I've read online

  • I'm quite confused why other donors aren't excited to fund AISC. Last time for AISC 10, they ended up raising a fair amount ($60k), but this time it looks like there's less support. Is this because the AISCs have been dropping in quality, as Oli claims? Or just that they've been doing a less good job of "fundraising"?

  • I'm specifically hesitant to fund the stop/pause agenda that Remmelt supports. For one, I don't like the polarization that the stop/pause framework introduces; and if I had to "choose a side" I might very well come down on "AGI sooner would be good"

  • Linda, who I've heard good things about, won't be organizing this time around. (I'm unsure how much to read into this -- it might just be a vicious cycle where the organizers leave for lack of funding, and the funders don't fund for lack of organizers)

None of my thoughts are strongly held, and I could imagine updating to think that AISC deserves much more funding -- again, I only have very shallow takes here.

donated $500

Jason

2 months ago

@Austin "I'm quite confused why other donors aren't excited to fund AISC" is a major question mark for me as well, so I hope the organizers will address it to the extent possible. It didn't stop me from offering because my offer will only trigger if others collectively offer about 30x what I did, in which case the question mark becomes somewhat less pressing than it is presently.


Remmelt Ellen

2 months ago

Austin, glad to read your points.

"I'm quite confused why other donors aren't excited to fund AISC."

This is often confusing, even to us as organisers. Some years we had to get by on little money, and other years we would suddenly get a large grant or an influx of donations.

The lowest-quality editions in my opinion were AISC3 (when I was burned out, and we ran the edition across rooms in a Spanish hotel) and AISC4 (when COVID struck and we quickly switched to virtual). We ran those editions on a shoestring budget. But the year after in 2021, we received $85k from LTFF and $35k from SFP.

Financial stability would help us organise better editions – being able to plan next editions knowing we will have the money. "Stability" is not sexy but it makes the difference between being able to fully dedicate oneself and plan long term as an organiser, and organising on the fly.

"Last time for AISC 10, they ended up raising a fair amount ($60k), but this time it looks like there's less support."

Last time, we asked Manifund staff to extend the fundraiser deadline (by 2 weeks), in order to not get cancelled. Looking at the datestamps of my email notifications, donations are coming in faster this time.

Having said that, if any funder here decided not to donate, I'd be curious why!

"I'm specifically hesitant to fund the stop/pause agenda that Remmelt supports."

Before getting into this: ~5 stop/pause projects were hosted in 2024, and ~5 again now in 2025. Our program hosted about five times as many other projects; the majority of our projects are in theoretical or applied safety.

I'm giving my takes here. Robert will have different thoughts on the projects he supports.

We might get more stop/pause projects in 2026, which is what makes me most excited as an organiser. I'm also excited about technical projects that enable comprehensive assessments of model safety issues that AI companies have to address.

I'm generally worried about projects that assume it is simple – or somehow obviously doable – to make large machine learning systems safe, because I think it's bad for the community's epistemics. Particularly if alumni end up promoting their solutions to others in the community, or decide to commercialise them for companies, this could support safety-washing. Safety-washing is a way for corporate actors to avoid accountability – it allows them to build dangerous systems and make them look safe, instead of actually scoping their development of systems to be safe. It's counterproductive to AI Governance.

I value folks with a security mindset who are clear about not wanting to make things worse. I'm unsure how much the camp has enabled people to think like that in the past. Some of our alumni even went on to work at OpenAI and DeepMind. So that would be a reason not to donate to us.

Again, these are my thoughts. Robert and Linda will have their own thoughts.

"For one, I don't like the polarization that the stop/pause framework introduces"

Is the polarisation in the framework itself, or in the implementation of it? Curious for your thoughts.

Various respected researchers (e.g. Yudkowsky, Yampolskiy, Shovelain) who have been researching the alignment problem for about the longest are saying that we are not on track to solve alignment (given the rate of development over previous years and/or actually intractable sub-problems of control).

Slowing down AI development helps alignment researchers spend more time working out the problem. It does not have to be polarising, provided alignment researchers recognise the need for society-wide efforts to restrict corporate AI scaling.

Where tensions can occur is if alignment folks indirectly boost work at AGI companies. For example, at OpenAI some alignment researchers have made confident public statements about being able to make AGI safe, and others have created shallow alignment techniques that made it easier to commercialise products. OpenAI has received $30 million from OpenPhil, and 80k has advised talented engineers to join OpenAI. One start-up dedicated to alignment even offered their state-of-the-art supercomputer to OpenAI. Similar things have happened at DeepMind and Anthropic.

There is a deep question here of whether the community wants to continue to risk accelerating AGI development in the hope of solving all the lethal sub-problems we have identified but have been unable to solve yet.

if I had to "choose a side" I might very well come down on "AGI sooner would be good"

Why do you think "AGI sooner would be good"? Is the argument that faster development results in fewer competing architectures?

From my perspective, introducing this self-modifying autonomous machinery should be avoided, given the risk of losing all the life we care about on Earth. We should coordinate to avoid it, not only because allowing companies like OpenAI to push the frontiers of dangerous tech and then having other actors (like Google) rush after them is bad, but also because once the tech pushes all workers out of the loop and starts modifying and reproducing itself in runaway feedback loops, we lose all control. Under such exponential tech processes, mass destruction happens either way – whether one architecture ends up dominating our economy, or multiple architectures end up interfacing over high-bandwidth channels.

Even if you think the alignment problem could be solved eventually, it seems good to buy time. We can buy time by coordinating with other stakeholder groups to slow developments down. Then we can build the capacity to research the problem more rigorously.

"Linda, who I've heard good things about, won't be organizing this time around. (I'm unsure how much to read into this -- it might just be a vicious cycle where the organizers leave for lack of funding, and the funders don't fund for lack of organizers)"

I don't want to speak for Linda here, so I asked her to comment :)


Remmelt Ellen

2 months ago

And thank you for your donation offers, Austin and Jason. I appreciate your considerations here.


Linda Linsefors

2 months ago

I'm leaving mainly because I'm tired of organising and want to do other things. This is very normal behaviour for me, and not because there is anything wrong with AISC. After running the same event a couple of times, I stop feeling inspired by the work, and continuing past that point becomes very draining.

I still think AISC is great and would be sad if it ended.


@Austin 🧡

Linda Linsefors

2 months ago

Weighing in on the pause/stop AI projects.

I'm in favour of all of the pause/stop AI projects, and I was the one who suggested listing these first on our website to give them an extra signal boost. It doesn't look like we are on track to solve alignment in time. This means that AI development needs to slow down. Too few people are speaking up against this. It's good to do this in a minimally polarising way, but not trying to slow down AI doesn't seem like an option if we want to survive.

Where I disagree with Remmelt is that I think it's also worth trying to solve technical alignment, alongside trying to slow down AI. But I do agree with Remmelt that a large part of the AI Safety community is way too friendly with the AI frontier labs.


Linda Linsefors

2 months ago

@Austin

  • I'm quite confused why other donors aren't excited to fund AISC. Last time for AISC 10, they ended up raising a fair amount ($60k), but this time it looks like there's less support. Is this because the AISCs have been dropping in quality, as Oli claims? Or just that they've been doing a less good job of "fundraising"?

As Remmelt says, we're not obviously doing worse than at the same time last year. However, money has been scarcer for everyone in AI Safety since the FTX collapse. E.g. LessWrong/Lightcone is currently low on money, and there has been no drop in quality in their work, as far as I can tell.

I don't want to completely discount Oli's statement, but I want to point out that it's one opinion by one person. It's pretty normal for people in AI Safety to feel unexcited about what the majority of people in AI Safety are doing. AI Safety Camp has always had a broad range of directions, so I'm not sure why Oli hasn't seen this as a problem before, but my guess is that the shift in structure caused Oli to shift the way they evaluate us, to focus more on the projects. Or it could be that this particular camp had fewer projects to Oli's taste. Or it could be that our new format produces less exciting projects according to Oli's taste. I don't know. It may also be relevant to know that Oli's shift towards not giving grant money to AISC happened at the same time as the big drop in AI Safety funding.

The way AISC accepts projects is that we have a minimum standard, and if that is met the project is accepted. Since our current format is highly scalable, projects have not had to compete against each other. Instead we focus on empowering all our research leads to explore what they believe in. I'm not at all surprised that someone looks at our list and thinks most of the projects are not that great, but I would also expect high disagreement on which are the good vs. useless projects. I claim that AISC's openness to many types of projects is what produces both the small fraction of projects Oli does like and the ones he doesn't.

If you're worried that we waste people's time on the less good projects (whichever you think those are): there are still many more people interested in AI Safety than there are opportunities. I think that for many people, working on a sub-optimal project will still accelerate their AI Safety skills and reasoning more than being left to themselves.

If you're worried that some of our projects are actively harmful: we do evaluate for this, and it's the one point we're most strict on when deciding whether to accept projects.

donated $500

Lucius Bushnaq

2 months ago

@Austin I think it went very similarly last time. They started the fundraiser and pointed at all the glowing alumni reviews. People were skeptical because the usual donors wouldn't fund it. Oliver said he didn't like it. People argued in the comments a bunch.

After talking to Linda more about it, my impression is that AISC has kind of had to deal with enormous skepticism from the usual donor orgs from the very first edition. IIRC before the first edition most people told them doing training programs for AI Safety was unnecessary (that wasn't really a thing back then), and I think skepticism stayed high for most camp editions since then to various extents for varying reasons.

I think their actual results throughout this have kept being incredibly good despite what were often (usually? Basically always?) shoestring budgets.

I dunno. Maybe they're just bad at playing the usual fundraising games.

I still want to hear out what Oliver has to say this time, but I feel inclined to just ignore the position of the usual donor ecosystem on this. I think they've just been wrong in the same direction over and over and over again here.

donated $500

Jason

2 months ago

Made an offer based on strong alumni comments below, as well as a well-written argument for past impact above.


Karl von Wendt

2 months ago

I've been both a participant (2022) and a team lead (2023) at the AISC. In both roles, I have learnt a tremendous amount about AI safety, AI technology, and outreach strategies. I also met brilliant people like Daniel Kokotajlo who has heavily influenced my work and changed the way I look at AI. Thanks to Remmelt, Linda, Robert and all others for making this possible!


Jason Hepburn

3 months ago

AI Safety Camp was one of the most pivotal moments in my career and likely accelerated my full-time work in AI Safety by at least 18 months.
After participating in AISC3 I helped organise AISC4 and then joined Linda Linsefors at AI Safety Support.

Linda and Remmelt's persistence and dedication to AI safety and the people working on it are a huge inspiration to me.


Sam

3 months ago

At AISC3 I met my collaborators and carried out this project: https://forum.effectivealtruism.org/posts/2tumunFmjBuXdfF2F/survey-on-ai-existential-risk-scenarios-1

I probably wouldn't have done it otherwise, and a number of people found the results of the survey useful.


Orpheus Lummis

3 months ago

AISC enables the realization of many projects and connections that would otherwise not happen. I participated in 2024, and am joining again in 2025 as a team lead on a particular project. The yearly camp has a strong track record, but has room for funding, improvement, and growth.

donated $200

Nell Watson

3 months ago

AISC has been an incredible wellspring of support for my AI Safety projects over several cohorts. They provide a wonderful opportunity for talented individuals who otherwise would not have funding to work on worthy projects, and AISC offers incredible structure and support to make these projects as impactful as possible. I have made so many connections that have been instrumental to my research.

I cannot recommend AISC highly enough. Their track record of nurturing important work in AI Safety deserves strong, continued support from the community.

donated $500

Lucius Bushnaq

3 months ago

AISC6 was fantastic for me. Without it, I think it's likely I wouldn't have ended up employed in AI Safety until a year or more later, and plausible I wouldn't have ended up employed in AI Safety at all. The research project I started on at AISC6 continued for two years after that, first under LTFF grants, then at Apollo Research.

donated $100

Ariel Gil

3 months ago

I really enjoyed AISC8. I think the barrier to entry for AISC is not too low – projects are varied, some more promising than others, but importantly they give people (like me) a chance to jump right in and try to do useful work in AI Safety.

My current work (AI Standards Lab) was originally an AISC project. Without it, I'd guess I would be full-time employed in the field at least 1 year later, and the EU standards currently close to completion would be a lot weaker. High impact/high neglectedness opportunities (which might not meet a funding bar initially) are fairly well positioned to be kickstarted with volunteer effort in AISC, even if some projects will fail (hits based). After some initial results during AISC, they can be funded more easily.


Remmelt Ellen

3 months ago

@ariel_gil, this is informative – adding it to our impact anecdotes list and also as a quote on this page!

donated $100

Brian E

3 months ago

AISC is a great program. The organizers are thoughtful, supportive, and driven to help develop the field. I very much enjoyed participating, and I found it useful for developing both skills and ideas.

donated $100

Naveen Arunachalam

3 months ago

I did AI safety camp and it was a great experience to have while doing career exploration.

donated $100

Kai Williams

3 months ago

I want to echo a lot of what Kanad said. I did AISC this past January, and it has been an entry into AI safety that I don't think I otherwise would have had. Thank you so much for making this program a possibility; it definitely punches above its weight.

donated $50

Kanad Chakrabarti

3 months ago

I’ve done AISC for 2 years (this is my 3rd). It is one of the most accessible & welcoming environments, with a wide range of mentors & projects. The organisers are really supportive & really clear on their mission. It seems to be a decent on-ramp for intensive or residency-based programs like MATS, ARENA, etc.


Remmelt Ellen

3 months ago

@Ukc10014 It's helpful for us to know, too, what the value of AISC is for you. Thank you.