Roadmap

What we're building, in what order, and what gates each phase. No hard dates — phases advance when conditions are met, not on a calendar.

How we build

Every feature passes the three-question test before it goes live:

1. Is it transparent? Does it explain, or obscure to create intrigue?

2. Does it point through? Does it lead toward evidence, or keep you on-site?

3. Is it invitational? Can you read it and close the tab with no friction?

Every financial mechanism passes four additional checks:

4. Opacity: Functional abstraction only — no engineered complexity.

5. Response: Signals the problem, not the participant.

6. Attention: Points at the work, not at the money.

7. Termination: Has a designed end — no infinite engagement loops.

Money is a void. The framework says so. So the project scores its own financial design against the same diagnostic it uses on everything else. Full disclosure →
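
The gate above can be sketched as a simple all-or-nothing check: every proposal must pass the three design questions, and anything that moves money must also pass the four financial checks. This is an illustrative sketch only — the names (`Proposal`, `may_ship`, the check labels) are hypothetical, not from the project's codebase.

```python
# Hypothetical sketch of the shipping gate described above.
# All names are illustrative; nothing here is the project's actual code.
from dataclasses import dataclass

DESIGN_CHECKS = ("transparent", "points_through", "invitational")
FINANCIAL_CHECKS = ("opacity", "response", "attention", "termination")

@dataclass
class Proposal:
    name: str
    financial: bool   # does this mechanism move money?
    passed: set       # names of checks it has passed

def may_ship(p: Proposal) -> bool:
    # A feature needs all three design checks; a financial mechanism
    # needs all seven. Missing any one blocks the launch.
    required = set(DESIGN_CHECKS)
    if p.financial:
        required |= set(FINANCIAL_CHECKS)
    return required <= p.passed

feature = Proposal("score tool", financial=False,
                   passed={"transparent", "points_through", "invitational"})
print(may_ship(feature))  # True: all three design checks passed
```

The point of the subset test is that there is no partial credit: six of seven checks on a financial mechanism still returns `False`.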

Active

Alpha — Ship the diagnostic

Get the core tools and research live. Static site, no accounts, no tracking.

Gate to Phase 2: Site live. At least one paper or article published externally. Score tool used by people who didn't build it.

Next

Phase 2 — Validation and traction

Other people use the framework. We measure whether it works when the author isn't running it.

Gate to Phase 3: Independent replication of at least one core result. Inter-rater kappa > 0.7 with non-author scorers. Peer review submitted.
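
The kappa threshold in this gate is Cohen's kappa: observed agreement between two scorers, corrected for the agreement expected by chance. A minimal sketch, assuming binary pass/fail scores — the rater data below is hypothetical, not real project results:

```python
# Cohen's kappa for two raters scoring the same items.
# kappa = (p_observed - p_expected) / (1 - p_expected)
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Fraction of items on which the raters agree outright.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's label frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[label] * counts_b[label]
                   for label in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical scores from the author and one non-author rater:
author =     [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
non_author = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]
kappa = cohens_kappa(author, non_author)
print(f"kappa = {kappa:.2f}")  # kappa = 0.58 — below the 0.7 gate
```

Raw percent agreement here is 80%, but kappa comes out at 0.58 because much of that agreement is expected by chance — which is exactly why the gate uses kappa rather than raw agreement.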

Governance Triggers

When governance changes happen

The project starts as a solo founder operation. That's an honest 6/12 on the self-score. These are the specific, measurable conditions that trigger governance transitions. No condition, no change. No calendar — thresholds.

The principle: Governance decentralizes when evidence justifies it, not when popularity demands it. Each threshold is measurable, published before the condition is met, and visible in the git history if changed.

Future

Phase 3 — Scale

If the framework survives independent validation, build infrastructure for widespread use.

Gate: Framework accepted for peer review at a top AI safety venue. Independent teams using the diagnostic without author involvement.

Ongoing

Research — always running

The framework has 22 falsification conditions. Any one of them could kill it. We're looking.

What we won't build

We're keeping the site simple and transparent by design. No engagement tricks, no retention mechanics, no financial mechanisms that fail the four checks.

Want to break the framework? Bounties pay 2x for counter-examples.

Track progress: github.com/AnthonE/morr-papers