Grant complete
$32,500 raised

Project summary

  • Consult our Brochure

  • We want to advance four lines of work related to improving global risk management.

First, we want to promote the report we co-wrote with ALLFED on Food Security during Abrupt Sunlight Reduction Scenarios (ASRS) among policymakers. We are writing a chapter on ASRS for the National Risk Management Plan 2024-2030 of Argentina.

Second, we want to promote our report on improving the implementation of the EU AI Act in the European sandbox in Spain and connect regulators in the sandbox to auditing experts such as ARC Evals and Apollo.

Third, we want to promote our report on biological surveillance in Guatemala, highlighting some cost-effective opportunities for engagement such as news surveillance. We also seek to improve our map of BSL laboratories in Latin America with information provided by the community.

Fourth, we want to write a report on improving risk management in Latin America from the perspective of GCRs. This ties in with our previous work on risk management in Spain, and would focus on the use of cost-benefit analyses as a tool for risk prioritization and analysis.

What are this project's goals and how will you achieve them?

For each of our projects, the goal is to advance the creation of relevant policy and connect policymakers to relevant experts. We will know we are succeeding in a given project if our reports are used as supporting material for drafting policies or if we are credited with connecting policymakers to relevant experts.

SPECIFIC PROJECT GOALS:

  1. FOOD SECURITY

    • Goal: The Argentinian government invests in assessment reports and planning guidance to improve the chances of an adequate response to nuclear winter.

    • How we know we achieved it: Our report on response strategies is adopted by governmental agencies, and the government includes its recommendations in the national five-year risk management plan. (In fact, we are working on a chapter for the 2024-2030 plan.)

  2. AI

    • Goal: Incorporate critical policy recommendations contained in the attached report, such as external auditing and red-teaming, into Spanish AI legislation, ideally influencing other countries inside Europe via the AI sandbox or even outside Europe through the Brussels effect.

    • How we know we achieved it: The government promotes the amendments based on our proposals to the EU AI Act inside the European Parliament.

  3. BIORISK

    • Goal: Incorporate policy recommendations into the National Epidemiological Surveillance System (SINAVE) of the Guatemalan government.

    • How we know we achieved it: The health ministry and other government offices promote the modification of the manuals and procedures of SINAVE.

  4. RISK MANAGEMENT

    • Goal: Promote the improvement of risk management plans at the Latin American level from the perspective of GCR.

    • How we know we achieved it: GCR terminology is integrated into the regional discussion and ECLAC's risk management indicators are improved.

How will this funding be used?

  • Wages: 65%

  • Conferences and travel: 4.2%

  • Other operational expenses (People Operations, Workshop expenses & Miscellaneous): 16.4%

  • Margin of operations: 8.4%

  • Fiscal sponsorship fee: 6%

Who is on your team and what's your track record on similar projects?

Our staff is made up of a project coordinator, 3 policy transfer officers, a scientific diplomat, an operations manager, 2 research affiliates, and a network of experts. Our interim director Jaime Sevilla is also director of Epoch, and board member Juan García works as a research manager at ALLFED. Between them, they have ample experience with management, founding organizations, and GCR policy. We also have close relationships with ALLFED, the Simon Institute, and CSER (Centre for the Study of Existential Risk).

Here are some relevant past outputs of the organization:

  1. An investigation of risk management in Spain culminated in a collaboration agreement with Madrid’s city hall.

  2. A report on Food Security in Argentina in the event of an Abrupt Sunlight Reduction Scenario (ASRS). We are currently writing a chapter on ASRS for the National Risk Management Plan 2024-2030 of Argentina.

  3. A report on AI regulation for the EU AI Act Spanish sandbox. We have published two excerpts as standalone articles: 1) Proposals for the AI regulatory sandbox in Spain, and 2) A survey of specific risks derived from artificial intelligence.

  4. A report on biological surveillance in Guatemala, and a map of BSL laboratories in Latin America.

  5. Other minor outputs include an article about the new US GCR law and the co-drafting, with the Simon Institute, of two responses to the policy briefs from the UN’s Our Common Agenda. We also published infographics of the previous reports.

  6. Overall, our website hosts the largest online collection of GCR articles in Spanish, with over 50,000 visits.

How could this project be actively harmful?

  • We could be crowding out other efforts from more trustworthy sources, though we consider this unlikely and we are committed to collaborating with complementary efforts.

  • Some of our policy recommendations might be harmful and hard to correct. We are, however, erring toward recommendations that have broad support and are backed by other experts in the GCR community, which should reduce this risk. We also focus more on creating connections and facilitating infrastructure for evidence-based decision-making than on pursuing specific agendas.

What are the most likely causes and outcomes if this project fails? (premortem)


What other funding are you or your project getting?

  • We are currently operating on a grant from Effective Ventures’ donor lottery, a small grant from Nonlinear, and other individual donations.

JTorrescelis
Final report

Executive summary

2024 has been a year of significant growth and consolidation for Observatorio de Riesgos Catastróficos Globales (ORCG), marked by impactful advancements across our four core workstreams. We've expanded our research initiatives and strengthened international collaborations, solidifying our role as a leading voice in understanding and mitigating catastrophic risks.

From the outset, ORCG has championed proactive and collaborative approaches to navigating the complex challenges of potential global catastrophes. This commitment fueled a diverse range of activities in 2024, with highlights including:

These endeavors have not only deepened our understanding of global risks but also fostered crucial partnerships with organizations and experts worldwide. We've collaborated with leading institutions and initiatives such as ALLFED, Global Shield, CSER, GCRI, CLTR, and AMexBio. Furthermore, by participating in several working groups of the EU GPAI Code of Practice, we've engaged with several civil society organizations in the field of AI safety and governance. Our work has also involved stakeholder engagement with the OECD, EuroHPC, OEI, AESIA, IMSS, UNAM, and UANL. We are working on defining collaborative projects across our workstreams with some of these institutions, starting in 2025. We invite you to stay tuned for our updates next year.

In this recap, we'll delve into the key activities and accomplishments that have shaped ORCG's 2024 journey. We'll explore the impact of our research, the progress made in our key programs, and the valuable connections we've forged. Join us as we review our milestones—especially our key products—that have defined this chapter in our ongoing commitment to safeguarding humanity's future.

Looking ahead, securing comprehensive funding remains a critical challenge. While our AI initiatives are funded through 2025, other vital areas face resource limitations that could jeopardize our continued progress. We invite you to partner with us by donating to help sustain our work across all areas of our mission.

Products 

 

Reports

ORCG in press

Academic papers

  • The EU AI Act: A pioneering effort to regulate frontier AI? This paper, published in the IberamIA journal, examines the EU AI Act, the first attempt to regulate frontier models. It concludes that the Parliament’s draft was a good step toward adequately addressing the risks posed by these models, though some of its provisions were insufficiently defined and others were missing. The final version of the Act improved on many of the aspects outlined.

  • "Systematic Review of Risk Taxonomies Associated with Artificial Intelligence": This article systematically reviewed 36 studies on AI risks, resulting in a taxonomy of threats and risk vectors. Our research found a need to consider emerging risks, bridge gaps between present and future harms, and further explore the potential pathways to an AI catastrophe.

  • "Resilient Food Solutions to Avoid Mass Starvation During a Nuclear Winter in Argentina": This research explores potential food sources and production methods that could help ensure food security in Argentina during a nuclear winter scenario, describing how timely food system adaptation could flip a situation of national famine to a situation in which the country could not only feed itself but also continue to make significant exports to neighbors.

  • “The Securitization of Artificial Intelligence: An Analysis of its Drivers and Consequences”, submitted to Revista de Estudios Sociales (Universidad de los Andes). This article analyzes how the US is framing AI as a national security issue, revealing tensions between politicization and securitization, national and global security concerns, and threat-based versus risk-based approaches. It argues that effective AI governance requires balancing national interests with global security, favoring a risk-based approach that acknowledges uncertainties and promotes multilateral solutions instead of focusing solely on threats and nationalistic competition.

  • “Training and Education: What are the essential elements necessary for biosafety and biosecurity training programs for researchers and professionals?”, in submission process. To address the needs and challenges identified by Latin American researchers and professionals, effective biosafety and biosecurity training programs must prioritize formalized competency-based training, continuous professional development, accessibility, practical application, and intersectoral collaboration. By incorporating these elements, training programs can empower individuals to mitigate biological risks, foster responsible conduct, and strengthen regional biosecurity and public health.

Policy Brief

Research notes

Events presence

Join us in building a safer and more resilient future. Your contribution to ORCG will help us address the full spectrum of global catastrophic risks and ensure that humanity is prepared for the challenges that lie ahead.

Donate to ORCG today and help us protect humanity's future. Read here about our current plans.   

In addition to financial contributions, you can support ORCG by:

  • Spreading the word: Share this Recap and our website (https://orcg.info) with your networks.

  • If you are an expert interested in our work: you can contribute by sending suggestions, providing feedback, or sharing academic and funding opportunities with us (info@orcg.info).

  • Connecting us with potential partners: Introduce us to individuals or organizations who can support our work.

Together, we can build a safer future for all.


JTorrescelis
Progress update

2024: a year of consolidation for ORCG

Reports

  1. Food Security in Argentina in the event of an Abrupt Sunlight Reduction Scenario (ASRS), DOI: 10.13140/RG.2.2.11906.96969.

  2. Artificial intelligence risk management in Spain, DOI: 10.13140/RG.2.2.18451.86562.

  3. Proposal for the prevention and detection of emerging infectious diseases in Guatemala, DOI: 10.13140/RG.2.2.28217.75365.

  4. Latin America and global catastrophic risks: transforming risk management, DOI: 10.13140/RG.2.2.25294.02886.

 

Papers 

  1. Resilient food solutions to avoid mass starvation during a nuclear winter in Argentina, REDER Journal, accepted, pending publication.

  2. Systematic review of taxonomies of risks associated with artificial intelligence, Analecta Política, https://doi.org/10.18566/apolit.v14n26.a08.

  3. The EU AI Act: A pioneering effort to regulate frontier AI?, IberamIA Journal, https://doi.org/10.4114/intartif.vol27iss73pp55-64.

  4. Operationalizing AI Global Governance Democratization, submitted to the call for papers of the Office of the Envoy of the Secretary-General for Technology, non-public document.

 

Policy brief and work documents

  1. RCG Position paper: AI Act trilogue.

  2. Operationalising the definition of highly capable AI.

  3. PNRRD Argentina 2024-2030 chapter proposal “Scenarios for Abrupt Reduction of Solar Light” (published as an internal government document).

 

Collaborations

  1. [Simon Institute] Response to Our Common Agenda Policy Brief 1: “To Think and Act for Future Generations”.

  2. [Simon Institute] Response to Our Common Agenda Policy Brief 2: “Strengthening the International Response to Complex Global Shocks – An Emergency Platform”.

  3. [Simon Institute] Response to Our Common Agenda Policy Brief 5: A Global Digital Compact — an Open, Free and Secure Digital Future for All.

  4. [Epoch] Paper “AI capabilities can be significantly improved without expensive retraining”, published on arXiv.

 

Outreach

Web articles

  1. Global catastrophic risks law approved in the United States.

  2. Proposals for the AI Regulatory Sandbox in Spain.

  3. A survey of concrete risks derived from Artificial Intelligence.

  4. Spanish translation of “How Much Should Governments Pay to Prevent Catastrophes? Longtermism's Limited Role”.

  5. Biosafety in BSL-3, BSL-3+ and BSL-4 Laboratories: Mapping and Recommendations for Latin America.

 

Video

  1. Presentation: Food Security in Argentina in the event of an ASRS.

  2. Presentation: Risk management of Artificial Intelligence in Spain.

  3. Presentation: Detection and prevention of emerging infectious diseases in Guatemala.

 

Infographics

  1. Infographics for the report Food Security in Argentina in the event of an Abrupt Sunlight Reduction Scenario (ASRS).

  2. Infographics for the report Risk management of Artificial Intelligence in Spain.

  3. Infographics for the report Proposals for the prevention and detection of emerging infectious diseases (EID) in Guatemala.

  4. Infographics for the report Latin America and global catastrophic risks: transforming risk management.

 

Events

  1. EAGx Berlin, presentation “Regional Diversity in Global Catastrophic Risk and AI Risk Management”, Guillem Bas.

  2. CTS Ecuador, presentation “Classification and analysis of risks derived from artificial intelligence”, Mónica Ulloa.

  3. Workshop “Pluralism in Existential Risk Studies”, Oxford, Mónica Ulloa.

  4. Adevinta Spain Meetup, Barcelona, presentation “AI Governance: Opportunities and Challenges”, Guillem Bas.

  5. ESCT Colloquium, Colombia: “Measuring and valuing disasters: a path towards global resilience”, Mónica Ulloa.

 

Awards

  1. Santander–CIDOB 35 under 35 list, Guillem Bas.

What are your next steps?

This year we will focus on AI and on developing risk register exercises for this area and other GCRs.

Is there anything others could help you with?

We invite you to send any questions and/or requests to info@orcg.info. You can contribute to the mitigation of Global Catastrophic Risks by donating.

donated $32,500

Nuño Sempere

about 1 year ago

I continue to be excited about RCG and its role in the EA Spain/LatAm communities.

I was waiting until end of year to see if I found more promising options. I was considering APART (https://manifund.org/projects/help-apart-expand-global-ai-safety-research), but I don't think I'll have time to evaluate it in more depth; still, I've reserved some of my funds potentially for it.


Nuño Sempere

about 1 year ago

I think this project is my bar for funding. If I don't find other projects I'm as excited by, I'm planning to donate my remaining balance to it.


Austin Chen

over 1 year ago

Approving this! Nuno called this out as one of the projects he was most excited to fund in his regrantor app, and I'm happy to see him commit to this.


Nuño Sempere

over 1 year ago

I think that RCG's object-level work is somewhat valuable, and also that they could greatly contribute to making the Spanish and Latin-American EA community stronger. I think one could make an argument that this doesn't exceed some funding bar, but ultimately that argument doesn't go through.

JTorrescelis

Thank you very much Nuño for your contribution, we hope to continue improving the prioritization and management of GCRs in Spanish-speaking countries and we will be in contact to update you with promising advances in our work.