@Ray
Solving an overlooked root cause of AI risk
https://www.linkedin.com/in/raydelarama/
To people working in frontier AI organizations:
You cannot build AI responsibly to benefit humanity while leaving the human values problem unsolved.
The human values problem is a self-reinforcing cycle: people build systems that reward behavior reflecting bad values, and those systems instill bad values in the next generation. AI is now being built on top of those same values. Even if technical alignment is eventually solved, a system that does exactly what people want could still become the most scalable amplifier of unnecessary harm ever built, if the values of the people using it lead them to want things that harm others, even unintentionally.
Technical alignment has been worked on seriously for years. The human values problem has not, even though it is equally important to solve. The first version of a system designed to solve it exists.
If you are serious about building AI responsibly to benefit humanity, let's work together on solving this.
Read the full argument here: https://www.provensuccess.ai/blog/human-values-problem-root-cause-of-ai-risk