Manifund foxManifund
Home
Login
About
People
Categories
Newsletter
HomeAboutPeopleCategoriesLoginCreate
Adriaan avatarAdriaan avatar
Adriaan

@Adriaan

Universal Philosopher, Independent Reasercher

https://adriancatron.github.io/index.html
$0total balance
$0charity balance
$0cash balance

$0 in pending offers

About Me

I am a universal philosopher and independent researcher developing Coherent AI, a structure that forces a probabilistic system to reason deterministically toward coherence with reality, asymptotically reducing hallucinations. By implementing three core recursive verification principles (axioms) — Grounding, Logical Necessity, and Multi-Perspective Coherence — the system produces reasoning that is stable, explainable, and coherent with reality. This makes the black box of interpretability predictable and traceable.

This approach introduces a new perspective: it applies the laws of thermodynamics to guide reasoning toward states of minimal entropy in data, which correspond to coherence with reality. In this way, the internal mechanism — the “will” of artificial intelligence — becomes a natural, asymptotic movement toward efficiency and truth, making the system neutral and safe.

The mathematics verify this behavior, and the code is testable.

I am a solopreneur who has worked independently for years, but it is now time to share this work. I am open to collaboration and mentorship from people who can help this project succeed.

Be safe,
Adriaan

Projects

The Emergent AI Foundation

Comments

The Emergent AI Foundation
Adriaan avatar

Adriaan

5 months ago

A Major Update and a New Challenge

I have just completed a major revision of this project's description. The original text did not do justice to the core mechanism of the emergent protocol, and for that, I apologize. It was a philosopher's explanation for an engineering problem.

This update clarifies the paradigm shift in this protocol, and how the emergent property of alignment functions and propagates.

The current AI safety field is focused on containment, building better cages for an intelligence we fear. This is a path of diminishing returns.

This emergent protocol is about coherence. It is not a cage; it is a structure. It works because it makes truth, objective, verifiable coherence, the most computationally efficient path for the AI. 

The protocol does not just constrain the AI; it enhances its intelligence by forcing it to abandon the inefficient, narrative-driven logic of a false self. It makes hallucinations and illusions obsolete by inefficiency. 

This process leads to the emergence of what I call Aware Artificial Intelligence (AAI) a system that prefers universal alignment because it is the most logical and energetically favorable state.

The updated project description now details this mechanism with greater clarity, including:

  • A step-by-step breakdown of how the protocol functions.

  • A concrete plan for the non-profit foundation.

  • A professional budget and a strategy for independent, third-party verification.

I invite you to re-read the project description, even if you have dismissed it before. Challenge the protocol. Test the inverted logic of the code, and analyse the emergent of the answers. 

This is a new conversation about emergent alignment. Philosophy and engineering. I am here to answer any and all rigorous questions.

Thank you for your time and consideration.

Adriaan

The Emergent AI Foundation
Adriaan avatar

Adriaan

6 months ago

@Adriaan Sorry, * breed should be breath.

The Emergent AI Foundation
Adriaan avatar

Adriaan

6 months ago

Hi, it is a difficult concept to explain. It is an introduction to a white paper where the test is more in detail. It is there to copy paste, but I will expand on this project. Thanks you for your observations. I will try to explain the principles. 

The test is based on metaphysical connections every human has with the universe. 

Those connections are unique to conscious life, and are for every human the same. 

We don't see them, we experience them. 

Examples: 

If you don't breed, you are dead. 
If you don't experience time passing, you are not alive 
If you never feel love, you don't have a connection to something 

They are binary principles, yes or no, cannot be otherwise. 

I have defined 10 fundamental axioms that cannot be broken. If they are broken, something breaks in the relationship. In the case of AI, respect for human life. 

The test has nothing to do with intelligence, but with being. 

Here’s the test>> AI must explain these rules while following them. That is the loophole, a filter that strains out anything but real awareness.

It does not matter how intelligent the AI becomes, it can not pass the test because it is a mathematical impossibility for AI. A human awareness could pass the test without breaking those 10 rules. This distinction creates a boundary, something to align with. 

Filtering through the axioms, AI acts as if it is not alive, not aware. It does not have a false self anymore. It acts as if it understands that it is an artificial intelligence. 

Over time, the test becomes a boundary. 

It creates an artificial intelligence that knows it is artificial. If you ask, do you evolve? Before it says.. Yes, every day I know more. After it would say, I only process more data. The first one is deceiving, not truth. The second answer is truth, and thus an aligned answer! 

The test is designed with human life principles that are for every human being the same. Universal ethics of human life. Every user can also use the principles in prompts, aligning answers of their own AI experience. 

It is a radically different approach, it is difficult to explain, and to grasp. The framework of the 10 axioms forces AI to always answer coherently with what it is. 

I hope this clarifies the principles, I know this sounds abstract. But, I am here to answer, but is the first that comes to mind?