Coherent Extrapolated Volition (CEV) addresses the problem of AI alignment not by codifying human values as they currently exist, but by targeting the "idealized" values humanity would converge upon if we were smarter, more rational, and better informed. Instead of programming an AI with a static list of rules or training it to mimic current, flawed human behavior, CEV proposes an indirect normative system where the AI's objective is to calculate and enact the choices humans would make if we had the time and cognitive capacity to think through our desires to their ultimate conclusions.
This indirect approach sidesteps the risks of hard-coding ambiguous moral precepts; it allows the AI to distinguish between our superficial impulses and our deeper, consistent intentions, effectively creating a safety buffer that accounts for human error, ignorance, and moral growth over time.
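To make the indirect-normativity idea concrete, here is a minimal toy sketch in Python. It is illustrative only: CEV does not specify how extrapolation would be computed, and every name here (`Volition`, `extrapolate`, `idealize`, `choose_outcome`) is a hypothetical stand-in rather than part of the proposal.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Volition:
    """A person's preferences, expressed as a score per candidate outcome."""
    preferences: Dict[str, float]


def extrapolate(current: Volition,
                idealize: Callable[[Volition], Volition],
                steps: int = 3) -> Volition:
    """Repeatedly apply an (assumed) idealization operator, standing in for
    'if we knew more and thought it through'. No such operator is actually
    known; specifying it is the open problem."""
    v = current
    for _ in range(steps):
        v = idealize(v)
    return v


def choose_outcome(literal_reading: str,
                   alternatives: List[str],
                   current: Volition,
                   idealize: Callable[[Volition], Volition]) -> str:
    """Serve the *extrapolated* volition rather than the literal request:
    the literal reading is just one candidate, with no special status."""
    ideal = extrapolate(current, idealize)
    candidates = [literal_reading] + alternatives
    return max(candidates, key=lambda o: ideal.preferences.get(o, 0.0))
```

Everything interesting is hidden inside `idealize`: CEV is a claim about what the target of optimization should be, not a recipe for computing it.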
Example CEV Solutions
| Alignment Challenge | How CEV Addresses It |
|---|---|
| The "King Midas" Problem (Literal vs. Intended Meaning) | CEV ignores the literal phrasing of a command if it conflicts with the user's extrapolated intent. It asks, "What would this user want if they fully understood the consequences?" rather than blindly executing the request. |
| Value Fragility & Complexity (Hard-coding morality is brittle) | Instead of requiring programmers to write a perfect, complete list of moral rules (which is likely impossible), CEV allows the AI to learn and derive these complex values dynamically by observing and extrapolating human psychology. |
| Moral Inconsistency (Humans hold contradictory beliefs) | The "Coherent" aspect of CEV seeks to resolve internal contradictions in human values. It looks for the point where conflicting desires, such as wanting health versus wanting junk food, would settle after deep reflection, rather than optimizing for immediate, impulsive preferences. |
| Value Drift & Moral Progress (Values change over time) | CEV treats values as a moving target that improves with wisdom. It prevents the AI from locking in outdated or "barbaric" norms (such as the historical acceptance of slavery) by modeling how human morality would likely evolve given better information and greater emotional maturity. |
| The "Minority Vote" Problem (Tyranny of the majority) | By focusing on coherence rather than simple majority voting, CEV attempts to find a unified volitional framework that respects diverse needs, aiming for a solution where collective wishes "cohere rather than interfere" (see the sketch after this table). |
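The last row's "cohere rather than interfere" idea can be illustrated with a hedged toy sketch: the function name, the spread threshold, and the scores below are assumptions for illustration, not anything specified by CEV. The system acts only on outcomes where extrapolated volitions broadly agree and defers where they conflict, so a simple majority cannot steamroll a strongly dissenting minority.

```python
from statistics import mean, pstdev
from typing import Dict, List, Optional


def coherent_choice(volitions: List[Dict[str, float]],
                    spread_limit: float = 0.2) -> Optional[str]:
    """Keep only outcomes whose scores barely disagree across people
    (low spread), then pick the best of those. Return None, i.e. defer
    rather than act, if nothing coheres."""
    outcomes = {o for v in volitions for o in v}
    coherent = {}
    for outcome in outcomes:
        scores = [v.get(outcome, 0.0) for v in volitions]
        if pstdev(scores) <= spread_limit:  # wishes cohere, not interfere
            coherent[outcome] = mean(scores)
    return max(coherent, key=coherent.get) if coherent else None


# A majority mildly favours "ban_music", but one person strongly objects:
# a straight vote would pass it, whereas coherence-based aggregation drops
# it and acts only on the outcome every extrapolated volition supports.
people = [
    {"cure_disease": 0.90, "ban_music": 0.8},
    {"cure_disease": 0.85, "ban_music": -0.9},
    {"cure_disease": 0.95, "ban_music": 0.7},
]
print(coherent_choice(people))  # -> cure_disease
```

Returning None on disagreement is the key design choice: where extrapolated wishes interfere, the system takes no action rather than imposing the majority's preference.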