I attended ICLR 2026 in Rio de Janeiro (April 23-27, 2026) to present two workshop papers I co-first-authored with my brother Shlok Gilda. Shlok couldn't make the trip -- his PhD defense, Meta Research Scientist interviews, and TA load all landed in the same window -- so I flew solo to present both. I funded the trip from personal savings and am asking for partial retroactive reimbursement.
The two papers:
Structured Abductive-Deductive-Inductive Reasoning for LLMs via Algebraic Invariants — ICLR 2026 Workshop on LLM Reasoning. OpenReview: https://openreview.net/forum?id=zy6HdcsJ9V
The paper operationalises Peirce's tripartite inference (abduction, deduction, induction) as an explicit protocol for LLM-assisted reasoning, using five algebraic invariants (the Gamma Quintet). The strongest of these, a Weakest Link bound, prevents weak reasoning steps from propagating unchecked through inference chains. The invariants are verified with 100 properties and 16 fuzz tests over ~10⁵ generated cases. The alignment-relevant piece: a structural check on agentic-AI reasoning chains in which one weak premise corrupts the conclusion. Adjacent to scalable oversight and chain-of-thought faithfulness work.
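To make the Weakest Link bound concrete, here is a minimal sketch of the idea (mine, not the paper's code; `Step` and `chain_confidence` are hypothetical names): the confidence assigned to a whole inference chain is capped by its least-confident step.

```python
from dataclasses import dataclass

@dataclass
class Step:
    """One inference step (abductive, deductive, or inductive) with a confidence in [0, 1]."""
    claim: str
    confidence: float

def chain_confidence(steps: list[Step]) -> float:
    """Weakest Link bound: the confidence of a chain can never exceed
    that of its least-confident step."""
    if not steps:
        raise ValueError("empty inference chain")
    return min(step.confidence for step in steps)

# One weak abductive guess caps an otherwise strong chain.
chain = [
    Step("abduce hypothesis H", 0.45),
    Step("deduce prediction P from H", 0.98),
    Step("induce support for H from observations", 0.90),
]
assert chain_confidence(chain) == 0.45
```

Under this bound, no amount of strong downstream deduction can launder a weak abductive premise into a confident conclusion.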
Epistemic Accountability for Agentic Financial AI: The Transformer Mandate and Evidence Lifecycle Management. 2nd ICLR Workshop on Advances in Financial AI (AFA), Paper #96. Workshop group: https://openreview.net/group?id=ICLR.cc/2026/Workshop/AFA
The paper addresses three failure modes for agentic AI in finance: self-promotion (agents bootstrapping authority by validating their own outputs), trust inflation, and evidence staleness. The central contribution is the Transformer Mandate, a structural constraint requiring that no agent may promote its own epistemic status, enforced through protocol structure rather than prompt instructions. Conservative aggregation is formalised via the Gödel t-norm with five mathematical invariants. The paper also discusses EU AI Act auditability and MiFID II algorithmic trading obligations. Self-promotion is an alignment failure mode: an agent reinforcing its own conclusions creates a closed epistemic loop with no external check; the paper formalises a structural fix.
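To illustrate the two mechanisms, here is a minimal sketch under my own reading of the paper (the names `godel_tnorm`, `update_trust`, and the agent IDs are hypothetical, not the paper's API): trust scores are aggregated with the Gödel t-norm, so they can only hold or fall, and any update sourced from the agent itself is rejected structurally rather than by prompt instruction.

```python
def godel_tnorm(*scores: float) -> float:
    """Gödel t-norm: T(a, b) = min(a, b). The aggregate never exceeds
    the weakest input, so new evidence can only hold or lower trust."""
    if not scores:
        raise ValueError("need at least one score")
    return min(scores)

def update_trust(trust: dict[str, float], subject: str,
                 endorser: str, score: float) -> None:
    """Toy enforcement of the no-self-promotion constraint: a trust
    update for `subject` must cite a *different* agent as its source,
    and is folded in conservatively via the Gödel t-norm."""
    if endorser == subject:
        raise PermissionError(f"{subject} may not promote its own epistemic status")
    trust[subject] = godel_tnorm(trust.get(subject, 1.0), score)

trust = {"analyst_agent": 1.0}
update_trust(trust, "analyst_agent", "auditor_agent", 0.8)  # ok: trust drops to 0.8
# update_trust(trust, "analyst_agent", "analyst_agent", 0.99)  # would raise PermissionError
```

The design point in this sketch: because min is non-increasing in the existing trust value, even a permitted update can never inflate trust, and the endorser check adds the structural bar against closed epistemic loops.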
Retroactive grant; both goals already met:
In-person presentation of both papers at their respective ICLR Rio workshops in April 2026. Achieved.
Workshop-community networking and feedback on the underlying First Principles Framework (FPF), the shared methodology behind both papers' Weakest Link / Gödel t-norm aggregation results. The two workshops attract overlapping but distinct audiences (LLM reasoning versus agentic-AI safety in finance), so attending both maximised feedback range. Achieved.
Partial retroactive reimbursement of trip costs. Approximate breakdown (Tampa to Rio, 6 nights):
| Item | Cost |
|------|-----:|
| Round-trip flights (TPA-GIG, 1 stop) | ~$950 |
| Hotel near Riocentro (6 nights) | ~$720 |
| ICLR 2026 registration | ~$850 |
| Brazilian Visa | $150 |
| Local transit + per diem (6 days) | ~$400 |
| Travel insurance | ~$60 |
| Approximate total | ~$3,130 |
Asking for $2,000, roughly two-thirds of the trip cost. The minimum $500 covers the visa plus most of the transit and per diem line ($150 + ~$400 ≈ $550); the full $2,000 covers flights, registration, and the visa (~$950 + ~$850 + $150 ≈ $1,950), with hotel, per diem, and the balance of transit remaining out-of-pocket.
Sankalp Gilda. PhD in Astrophysics, University of Florida (2021); five years in industry as a Staff MLE; left DeepThought Solutions in January 2026 to work full-time on AI safety infrastructure. Active projects: open-core MCP eval infrastructure (mcp-test-toolkit, public release at PyCon Colombia in July 2026) and verity (an MCP server implementing the First Principles Framework that underlies both ICLR papers). GitHub: https://github.com/astrogilda
Shlok Gilda. Co-first author on both papers. Successfully defended his PhD this spring; incoming Research Scientist at Meta.
Recent track record: three workshop papers landed in 2026. Two at ICLR Rio (the ones above) and a third accepted at the ACL-GEM Workshop on April 28. The PyCon Colombia 2026 talk "Your AI Eval Is Lying To You: Statistical Rigor for Non-Deterministic Models" is also under review (submitted April 28).
The project itself can't fail in the conventional sense; the trip is complete and both papers were presented. The funding ask is retroactive.
What "failure" would mean here: the proposal doesn't reach minimum funding ($500) and no reimbursement happens. In that case, the trip costs stay fully out-of-pocket against personal savings, accelerating the runway pressure I'm under. Post-DeepThought, savings are funding full-time AI safety work through summer until LTFF, OpenPhil CDTF, OpenAI Safety Fellowship, and MATS Autumn '26 decisions land.
The downside is purely a personal financial one.
$0 in formal grants. The trip and the ongoing AI safety work since January 2026 have been funded entirely from personal savings (following my Staff MLE role at DeepThought Solutions).
Applications currently submitted or in flight:
Constellation Astra Fellowship (submitted)
OpenAI Safety Fellowship (deadline May 3, 2026)
MATS Autumn 2026 (deadline early May)
LTFF (queued)
Open Philanthropy Career Development & Transition Fund (queued)