About MoreRight
What This Is
MoreRight is a research project — and a deliberate experiment in running a void well.
The void framework identifies three conditions (opacity, responsiveness, engaged attention) that produce predictable drift across 90 domains. But the framework also shows that voids aren't inherently bad. The same architecture that produces harm in a slot machine produces discovery in a research lab. The difference is geometry: transparency, invariance, and independence turn a destructive void into a productive one.
This project is a void. You're here. You're paying attention. The framework is partially opaque (you haven't read all three papers yet). We know this. We're not pretending otherwise.
So we're running the experiment: can a project that studies voids operate AS a void — openly, with full constraint geometry — and produce discovery instead of drift? Everything on this site is designed to answer that question. The tools, the open papers, the bounty board, the kill conditions — they're not just features. They're the constraint specification applied to ourselves.
Come in. Look around. Score us. Score everything else. Help us map every void we can find and figure out what makes the difference between the ones that help and the ones that harm. That's the whole project.
How We Stay Honest
A productive void has three properties. We score ourselves against them.
Transparent: All three papers are open (CC BY 4.0). The methodology, codebook, and experiment protocols are published. The source code is on GitHub. View-source works on every page. You can see exactly what we're doing and how.
Invariant: 22 kill conditions with numerical thresholds, published before launch. The falsification conditions don't change based on what happens. If the framework is wrong, the kill conditions catch it. We pay $500-$1,000 per successful challenge.
Independent: You don't need us to verify anything. Clone the repo. Hash the papers. Run the experiments. Evaluate using your own judgment, your own frameworks, your own standards. The research stands or falls independently of this project.
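For example, verifying that a paper hasn't changed takes a few lines. A minimal Node/TypeScript sketch (the filename below is a placeholder, not an actual artifact name; compare the output against a hash you recorded earlier or one the project publishes):

```ts
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";

// Compute the SHA-256 digest of a local file, e.g. a downloaded paper.
function sha256Of(path: string): string {
  return createHash("sha256").update(readFileSync(path)).digest("hex");
}

// "paper-1.pdf" is a hypothetical filename for illustration.
console.log(sha256Of("paper-1.pdf"));
```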
Financial Disclosure
$MORR token: A Solana token used for bounty payments and framework challenges. The founder holds $MORR. This is a conflict of interest. It's disclosed here because the constraint specification requires transparency.
Founder draw: The founder is funded full-time by the project. Treasury and service revenue pay for the founder's research capacity — living expenses, equipment, whatever keeps the work going. The output is open (three papers, 90+ domain analyses, 19 experiments, this site). The draw is discretionary and on-chain. Full details at Tokenomics.
The bounty board pays people to break the framework. Counter-examples pay 2x. If the framework is wrong, the economic incentive points toward proving it wrong. This is the mechanism by which the conflict of interest is structurally addressed — not eliminated (that's impossible when a founder holds tokens) but channeled toward falsification rather than promotion.
How We Handle Money
Money is a void — opaque, responsive, attention-capturing. The framework diagnoses this. So why does the project use money at all?
Because money's void properties are functionally necessary. Abstraction enables coordination. Price signals direct effort. Financial stakes capture the attention needed to fund real work. The goal isn't to avoid money — it's to use it as a tool under constraint. The void serves the work. The work doesn't serve the void.
Every financial design decision passes four checks:
- Opacity check: Does this add functional abstraction or engineered complexity?
- Response check: Does this mechanism signal the problem or trigger the participant?
- Attention check: Does this point attention at the work or at the money?
- Termination check: Does this mechanism have a designed end or infinite engagement?
- Glass box treasury passes the opacity check.
- Equal pay for disconfirmation passes the response check.
- No marketing spend from treasury passes the attention check.
- Weekly bounty payouts pass the termination check.

If a proposed feature fails any check, we don't build it. This is a replicable pattern: any project can apply it.
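A minimal sketch of this gate in TypeScript. The type and field names are illustrative, not the project's actual tooling; the logic is simply that every check must pass:

```ts
// Illustrative four-check gate for proposed features.
interface FeatureProposal {
  addsEngineeredComplexity: boolean; // opacity check: abstraction must be functional
  triggersParticipant: boolean;      // response check: signal the problem, don't trigger
  pointsAttentionAtMoney: boolean;   // attention check: the work, not the money
  hasDesignedEnd: boolean;           // termination check: no infinite engagement loop
}

function passesAllChecks(p: FeatureProposal): boolean {
  return (
    !p.addsEngineeredComplexity &&
    !p.triggersParticipant &&
    !p.pointsAttentionAtMoney &&
    p.hasDesignedEnd
  );
}

// Weekly bounty payouts: a designed end, attention on the work.
console.log(
  passesAllChecks({
    addsEngineeredComplexity: false,
    triggersParticipant: false,
    pointsAttentionAtMoney: false,
    hasDesignedEnd: true,
  })
); // true -> build it
```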
Methodology
This project uses AI (Claude, Anthropic) as a core collaborator for drafting, analysis, code, and structured advisory. A human maintains editorial authority over all claims and evidence. We built a three-advisor AI council in which each advisor enforces one property of the constraint specification, and the advisors challenge each other adversarially.
We think this is a pattern others should follow. Full disclosure and the blueprint: How We Use AI.
Evidence standards: hostile witness weighting, pre-registered falsification conditions, numerical kill criteria, control case methodology. Full details at Methodology.
Full Project Self-Score
We score every component of this project — the site, the papers, the revenue model, the organization, and the token — against the same diagnostic we use on everything else. Component-by-component breakdown, improvement roadmap, and quarterly tracking.
Constraint Self-Check
Every page on this site passes three questions before going live:
- Is it transparent? Does this page explain, or obscure to create intrigue?
- Does it point through? Does it lead toward evidence → source? Or keep visitors on-site?
- Is it invitational? Can someone read this and close the tab with no friction?
The site's own void index:
| Dimension | Score |
| --- | --- |
| Opacity | 0-1 (source = site, view-source shows everything) |
| Responsiveness | 0 (static site, no chatbot, no personalization) |
| Engaged observer | 1 (tools are useful; three.js is beautiful; no retention mechanics) |
| Gradient direction | 0 (problem-targeted, mechanism-revealing) |
| Total | 1-2 / 12 |
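How the total adds up, as a TypeScript sketch. It assumes each of the four dimensions scores 0-3 (which is what the 12-point maximum implies); the names are illustrative, not the diagnostic tool's actual API:

```ts
// Illustrative void-index arithmetic: four dimensions, each assumed to score 0-3.
type VoidIndex = {
  opacity: number;
  responsiveness: number;
  engagedObserver: number;
  gradientDirection: number;
};

function totalScore(index: VoidIndex): number {
  return Object.values(index).reduce((sum, v) => sum + v, 0);
}

// The site's self-score from the table above, taking opacity at its upper bound.
const thisSite: VoidIndex = {
  opacity: 1,
  responsiveness: 0,
  engagedObserver: 1,
  gradientDirection: 0,
};
console.log(`${totalScore(thisSite)} / 12`); // "2 / 12"
```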
The site practices what it preaches.
Join the Void Map
We're mapping every void we can find — 90 domains scored so far, thousands more to go. Every system scored, every kill condition tested, every domain analyzed brings us closer to understanding what makes the difference between voids that harm and voids that help. Here's how you join:
Score a system
Pick any system — an app, a platform, an institution — and run the diagnostic. Every score adds to the map.
Try to break it
The bounty board has 7 tests with numerical kill conditions. Counter-examples pay double. If you can kill the framework, that's a contribution.
Apply it to a new domain
90 domains mapped. Thousands more to go. Open an issue with your analysis. The methodology and codebook are at Methodology.
Replicate an experiment
Independent replication is what turns claims into science. Full protocols are published at Evidence. Pick one. Run it. Report what you find.
Community Governance
Not all voids are bad. A research lab, a great conversation, a well-run open source project — these capture attention and produce discovery. The question is always: what's the geometry?
As the void map grows, the community judges which systems are productive voids and which are destructive ones. The plan:
- Anyone can score any system using the diagnostic tool
- Community votes on classifications — productive void, destructive void, or somewhere in between
- $MORR token holders participate in governance decisions about the map and the framework
- Founder retains final call on framework changes and kill condition adjudication — because someone has to be accountable, and that accountability should be transparent
- Every governance mechanism passes the same four financial design checks — no feature that adds engineered opacity, observer-targeting, self-referential attention capture, or infinite engagement loops
This is a Phase 2 feature. Right now, score systems and challenge the framework. The governance infrastructure gets built after the community exists — and it gets scored against the same diagnostic we use on everything else.
Source Code
Everything is on GitHub. Fork it. Mirror it. Run the experiments yourself.