To people working in frontier AI organizations:
Even if technical alignment is eventually solved, the problem does not end there.
AI that does exactly what people want is still dangerous when the things people want cause unnecessary harm to others. When the values of the people using AI lead them to want harmful things, even unknowingly or unintentionally, more people get hurt, faster, than was ever possible before.
This is the root cause of AI risk that every other solution leaves unsolved. I call it the human values problem: a self-repeating cycle in which people build systems that reward behavior reflecting bad values, and those systems instill bad values in the next generation.
You cannot build AI responsibly to benefit humanity while leaving the human values problem unsolved.
Read the full argument here: The Human Values Problem: Why is No One Solving the Root Cause of AI Risk
The goal for now is to build enough momentum toward good values becoming the norm before AGI arrives, so that AGI finishes the work rather than locking in the bad values that currently dominate most human systems.
We're doing this by:
Pursuing a direct partnership with Google DeepMind to access the compute and skilled teammates needed to solve the human values problem at the quality and scale it actually requires.
Building an AI system that helps business owners, our first target audience, scale their businesses by making better decisions about their biggest problems, while documenting verified proof that good values produce better outcomes.
During the six-month window, I'm focused entirely on getting in front of the right people at Google DeepMind and making the case for why this partnership matters. Our CTO, James, is iterating on the MVP and getting early users onto the product.
This funding covers three things, in order of priority. First, operational costs to keep our AI system running. Second, a six-month stipend for James and me to relieve financial pressure. Third, travel to connect directly with Google DeepMind.
Minimum and full funding explanation:
With $25,000, we cover our documented and projected operational costs to keep our AI system running. This is the minimum needed for the business to survive.
With $50,000, we add a six-month stipend for James and me, plus $10,000 for travel to connect directly with Google DeepMind.
If operational costs exceed projections, the stipend and travel money will be redirected toward covering all operational costs until we get to work with Google DeepMind.
I'm Ray Dela Rama, founder and CEO of Proven Success, which was established in February 2025. James Hizon is our CTO and co-founder. He designed and built the first version of the AI system from scratch.
The foundation of everything we are building is laid out in this essay: The Human Values Problem: Why is No One Solving the Root Cause of AI Risk
I don't have traditional credentials in AI-related work. What I have is close to three decades of living inside broken systems, refusing to adopt the values those systems rewarded, and eventually understanding why those systems keep failing, clearly enough to build a company around fixing it.
We don't have time to solve the human values problem by ourselves before AGI arrives. That is why partnering with Google DeepMind is the most effective path forward.
The most likely cause of failure is that the partnership with Google DeepMind doesn't materialize within the six-month window. Everything in this project depends on that partnership. Without it, we don't have the compute or the talent to build an AI system at the quality and scale the problem requires. A secondary cause is running out of runway before that partnership happens, which is exactly what this funding is designed to prevent.
If the project fails at the project level, we have to stop working on this full-time. We don't have time to build everything from scratch before AGI arrives.
If the project fails at the broader level, the human values problem remains unsolved going into AGI. The window to shape what AGI learns from closes. AGI arrives trained predominantly on data from a world that rewards taking, domination, and short-term gain at any cost. Every problem that bad values have always produced gets amplified at a speed and scale that has never been possible before.
We have not raised any money in the last 12 months. We have been funding this entirely out of our own pockets.