

Agent-first research tool for navigating the intelligence explosion


Xyra Sinclair

Proposal · Grant
Closes February 28th, 2026
$0 raised
$500 minimum funding
$21,547 funding goal


Project summary

I am building https://exopriors.com/scry. I'm ingesting everything that seems relevant to the intelligence explosion (Hacker News, what I can of Twitter, Bluesky, arXiv, thousands of personal blogs, etc.), getting text embeddings for what I can afford, and giving researchers arbitrary read-only SQL access to it. With the SQL query optimizer, many indexes for performance, generous run times (10+ minutes when server load isn't high), compositional vector support (e.g. tracking temporal deltas of debias_vector(@guilt_axis, @guilt_topic), which doesn't look up documents containing guilt-related terms but instead tracks how guilt vibes have shifted over time without overindexing on language like "guilt"), and Claude Code rapidly firing off nuanced queries in succession to answer the user's questions, this enables an unprecedented research experience.
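
To make that concrete, here is a minimal sketch of the kind of compositional-vector query described above. The table and column names (documents, embedding, published_at) and the pgvector-style <=> distance operator are assumptions for illustration; debias_vector and the @-named vectors follow the notation above.

    -- Sketch only: schema names and the <=> operator are hypothetical.
    WITH axis AS (
      SELECT debias_vector(@guilt_axis, @guilt_topic) AS v
    )
    SELECT date_trunc('month', d.published_at) AS month,
           avg(1 - (d.embedding <=> axis.v))   AS guilt_vibe  -- higher = closer to the debiased guilt axis
    FROM documents d, axis
    WHERE d.published_at >= now() - interval '24 months'
    GROUP BY 1
    ORDER BY 1;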

In particular, this is a powerful tool for genuinely updating your worldview, because the value isn't just in finding very nuanced examples; it's often in failing to find them even after an exhaustive search. Negative evidence becomes a viable first-class primitive.
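
As a minimal sketch of what that looks like in practice (again with hypothetical table and vector names): a deliberately broad lexical-plus-vector sweep that still returns zero rows is itself informative.

    -- Sketch only: names and the 0.35 cutoff are illustrative.
    SELECT count(*) AS matches
    FROM documents d
    WHERE d.text ~* 'civilizational refuge|hardened shelter'
       OR (d.embedding <=> @refuge_topic) < 0.35;  -- generous similarity cutoff
    -- matches = 0 after a sweep like this is first-class negative evidence.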

Providing this functionality should be enough, but I am additionally working on ways people's agents can emit knowledge over this corpus. In particular, I am adding support for community-memoized multi-objective rerankers and for pages tracking arbitrary attribute axes people want to follow. For instance, if someone is interested in AI safety content that is "exceptionally conscientious, novel, and technical" and thinks they have a prompt where Opus 4.5 groks that pretty well, I will help them rank exactly the sorts of entities they care about by that trait, and optionally email any subscribers to that objective function whenever its top-k ranking changes. Basically, this makes agent-to-agent communication possible via the datatype rater::attribute_prompt::A::B::ratio.
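
A minimal sketch of how such an objective could be queried, assuming a hypothetical ratings table keyed by rater, attribute prompt, and entity (the real rater::attribute_prompt::A::B::ratio representation may differ):

    -- Sketch only: table and column names are hypothetical.
    SELECT r.entity_id,
           avg(r.score) AS objective_score
    FROM ratings r
    WHERE r.rater = 'opus-4.5'
      AND r.attribute_prompt = 'exceptionally conscientious, novel, and technical'
    GROUP BY r.entity_id
    ORDER BY objective_score DESC
    LIMIT 20;  -- when this top-20 changes, subscribers to the objective get emailed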

What are this project's goals? How will you achieve them?

This project's goal is to ship quality, canonical epistemic infra and to uplift people in the software intelligence explosion. I will achieve that by maintaining and expanding operations, including:

  • continuing to pay for my large Hetzner server;

  • contacting more EA-adjacent and AI safety orgs and helping them improve their research workflows with Scry;

  • ingesting terabytes more of important sources;

  • adding features and improving ergonomics for real-world use cases;

  • using Scry for small acts of direct action (sketched just below). For example, an efficient rating harness can rate every EA-adjacent post/comment across a number of sources on attributes like "open-thread question likely already answered elsewhere" and "openness to being helped"; for the top entities, agents can be spawned to deep-research the community for likely-helpful links, with exceptionally high quality thresholds for bot replies, or results can land on a staging page where people who want to help can handle the recommendations themselves and even improve the auto-generated algorithms used to generate and vet them. The bottom line is that slop research is still a possibility, but with SQL + vectors + rerankers + other techniques, people have the foundational tools they need to leverage AI in high-powered ways that go beyond slop.
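
A minimal sketch of that rating harness, assuming hypothetical posts and ratings tables and the Opus-rater convention sketched earlier; the agent-spawning and staging-page steps would sit downstream of a query like this:

    -- Sketch only: schema, sources, and thresholds are illustrative.
    SELECT p.post_id, p.author, p.url
    FROM posts p
    WHERE p.source IN ('ea-forum', 'lesswrong')
      AND (SELECT avg(score) FROM ratings r
           WHERE r.entity_id = p.post_id
             AND r.attribute_prompt = 'openness to being helped') > 0.8
      AND (SELECT avg(score) FROM ratings r
           WHERE r.entity_id = p.post_id
             AND r.attribute_prompt = 'open-thread question likely already answered elsewhere') > 0.8
    ORDER BY p.posted_at DESC
    LIMIT 50;  -- only these top candidates get handed to deep-research agents or a staging page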

Usage Examples

We interface with text all day long; SOTA agents writing SQL + vector + rerank queries are the most expressive paradigm for agentically traversing that text. If I sound defensive, it's because I am: thinking up the best diverse example queries that show off the full power of Scry, without overindexing people on use cases that may not excite them, is a completely different mindset from building Scry. However, I've provided some use cases here: https://forum.effectivealtruism.org/posts/z7AKFmJeZjZvbjwkt/announcing-scry-a-research-tool-over-arxiv-ea-forum-etc-w With a query like "find me the most serious documents about civilizational refuges, and anyone who could be rapidly mobilized to help with that", Claude gave me a strong set of results.
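
For a flavor of what a request like that can compile down to, here is a minimal sketch with hypothetical schema names: a vector search over the request plus a lexical expansion for the "who could be mobilized" half.

    -- Sketch only: schema names, @refuges_query, and the limits are illustrative.
    (SELECT 'document' AS kind, d.url AS link, d.title AS label,
            1 - (d.embedding <=> @refuges_query) AS relevance
     FROM documents d
     ORDER BY d.embedding <=> @refuges_query
     LIMIT 25)
    UNION ALL
    (SELECT 'person' AS kind, a.profile_url, a.name,
            1 - (a.bio_embedding <=> @refuges_query) AS relevance
     FROM authors a
     WHERE a.bio ~* 'logistics|military|emergency|civil defen[cs]e|diver'
     ORDER BY a.bio_embedding <=> @refuges_query
     LIMIT 25);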

If you want it to be far more agentic about creatively generating queries to find people (say, 100 combinatorial lexical matches designed to catch people with uncommon military, service, or athletic backgrounds), you can ask for that. Or you can say: "Hey Claude, I'm disturbed you didn't notice Xyra Sinclair in the corpus for helping with civilizational refuges. They have a West Point and Army Diver background and EA affiliation, and I'm pretty sure their tweet history is included. Deeply improve your querying to metabolize these lessons, and tell me what changes you make. It also seems like there's a lesson to be metabolized into the Scry prompt itself; use the /feedback tool to mention this pain point and perhaps suggest an improvement."

With that said, I do intend to get in contact with more AI and bio safety orgs to help improve their research workflows specifically, and I have just begun shipping a ranking page for the most notable queries, where people can easily submit usage examples and rate the best ones with quadratic voting.
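
For reference, quadratic voting makes marginal support progressively more expensive: casting v votes on a query costs v^2 credits, so effective support is the square root of credits spent. A minimal tally sketch, assuming a hypothetical votes table:

    -- Sketch only: the votes table is hypothetical.
    SELECT query_id,
           sum(sqrt(credits_spent)) AS qv_score  -- v votes cost v^2 credits, so votes = sqrt(credits)
    FROM votes
    GROUP BY query_id
    ORDER BY qv_score DESC;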

How will this funding be used?

I'm currently raising for 5 months of operations and growth.

I need 5 months of a large Hetzner server (extremely price-efficient): $1,547/month × 5 = $7,735.

I need competitively priced embeddings over ideally ~250 billion tokens. I will continue to be very conscientious about what gets embedded (versus the default of just relying on lexical search). At Voyage-4-lite's $0.02 per million tokens, 250B tokens comes to $5,000, which is what I'm asking for here.

I will take a stipend of $3,000 per month. I think I deserve a lot more given the time and expertise that went into building this and the ecological value added, but I'm used to hardship and would prefer a higher chance of this project getting funded.

Who is on your team? What's your track record on similar projects?

Just me. I shipped https://exopriors.com/scry within two weeks of being unlocked by Opus 4.5, and the project has been continuously improved for the community on an extremely limited budget. I have a long history of getting obsessed with epistemic infra projects for months to years at a time; this isn't a passing fad for me.

What are the most likely causes and outcomes if this project fails?

The most likely failure mode is insufficient adoption relative to its straightforward potential. I am increasingly going to market it, iterate on user feedback, and help people understand some of the very interesting questions it can answer. If the project fails to answer thousands of nuanced user questions and counterfactually connect hundreds of people, it will most likely be because we didn't market it to enough people well enough to stand out among the other shiny creations emerging from the software intelligence explosion.

How much money have you raised in the last 12 months, and from where?

$5k VC angel check, not for Scry but for ExoPriors.
