Roadmap
What we're building, in what order, and what gates each phase. No hard dates — phases advance when conditions are met, not on a calendar.
How we build
Every feature passes the three-question test before it goes live:
1. Is it transparent? Does it explain, or obscure to create intrigue?
2. Does it point through? Does it lead toward evidence, or keep you on-site?
3. Is it invitational? Can you read it and close the tab with no friction?
Every financial mechanism passes four additional checks (the full gate is sketched in code after this list):
4. Opacity: Functional abstraction only — no engineered complexity.
5. Response: Signals the problem, not the participant.
6. Attention: Points at the work, not at the money.
7. Termination: Has a designed end — no infinite engagement loops.
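A minimal sketch of the gate, assuming each check reduces to a yes/no answer. The names and structure here are illustrative, not the project's actual tooling:

```python
# Hypothetical encoding of the launch gate. The roadmap defines the
# checks in prose; reducing each to a boolean is an assumption.
FEATURE_CHECKS = ("transparent", "points_through", "invitational")
FINANCIAL_CHECKS = ("opacity", "response", "attention", "termination")

def passes_gate(answers: dict[str, bool], financial: bool) -> bool:
    """A feature ships only if every applicable check is answered yes."""
    required = FEATURE_CHECKS + (FINANCIAL_CHECKS if financial else ())
    return all(answers.get(check, False) for check in required)

# A non-financial feature that explains, points at evidence, and lets
# the reader leave freely passes on the first three checks alone.
print(passes_gate(
    {"transparent": True, "points_through": True, "invitational": True},
    financial=False,
))  # True
```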
Money is a void. The framework says so. So the project scores its own financial design against the same diagnostic it uses on everything else. Full disclosure →
Phase 1 (Alpha) — Ship the diagnostic
Get the core tools and research live. Static site, no accounts, no tracking.
- Score tool — evaluate any system for manipulation risk
- Three research papers (v12.2, v4.1, v6.2)
- Bounty board — 22 falsification conditions, counter-examples pay 2x
- Learn pages — architecture, evidence, safe design
- Articles — framework applied to real-world systems (3 published)
- Vocabulary scorer — check language for drift patterns
- Methodology — downloadable codebook, rubric, and protocols
- Deploy site
- Publish LessWrong article
- Submit Paper 1 to arXiv
- Set bounty amounts ($500 USD per test, paid in $MORR at claim-day rate; conversion sketched after this list)
- Write 2-3 more articles
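A minimal sketch of the payout arithmetic, combining the $500 base, the 2x counter-example multiplier from the bounty board, and the claim-day conversion. How the $MORR rate is sourced is unspecified; `usd_per_morr` is an assumed input:

```python
BASE_BOUNTY_USD = 500     # per falsification test (from the roadmap)
COUNTER_EXAMPLE_MULT = 2  # counter-examples pay 2x

def bounty_in_morr(usd_per_morr: float, counter_example: bool = False) -> float:
    """Convert the USD bounty to $MORR at the claim-day rate.

    usd_per_morr is the claim-day price of one $MORR token; the rate
    source is not specified in the roadmap (assumption).
    """
    usd = BASE_BOUNTY_USD * (COUNTER_EXAMPLE_MULT if counter_example else 1)
    return usd / usd_per_morr

# A counter-example claimed when $MORR trades at $0.25:
# $1000 -> 4000 $MORR.
print(bounty_in_morr(0.25, counter_example=True))  # 4000.0
```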
Phase 2 — Validation and traction
Other people use the framework. We measure whether it works when the author isn't running it.
- Independent blind application — someone else applies the diagnostic to a new domain
- Inter-rater reliability at scale — multiple scorers, same systems, measure agreement (a kappa sketch follows this list)
- Crowd-sourced scoring — structured template for community domain analysis
- Paper 2 and Paper 3 external submission
- Backend deployment — data collection for submitted scores
- Evidence browser — interactive 90-domain analysis explorer
- Community governance — $MORR token voting on void classifications (productive vs destructive), scored against the same four financial design checks
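A minimal sketch of the agreement measurement. Cohen's kappa for two raters is shown to keep the arithmetic visible; the kappa > 0.7 governance gate below involves 5+ scorers, where Fleiss' kappa is the usual generalization:

```python
from collections import Counter

def cohens_kappa(rater_a: list[str], rater_b: list[str]) -> float:
    """Chance-corrected agreement between two raters over the same items."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[label] / n * freq_b[label] / n
                   for label in set(rater_a) | set(rater_b))
    return (observed - expected) / (1 - expected)

# Two scorers classify five systems (labels are the roadmap's own):
a = ["destructive", "destructive", "productive", "destructive", "productive"]
b = ["destructive", "destructive", "productive", "productive", "productive"]
print(round(cohens_kappa(a, b), 2))  # 0.62 -- below the 0.7 gate
```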
When governance changes happen
The project starts as a solo-founder operation. That's an honest 6/12 on the self-score. These are the specific, measurable conditions that trigger governance transitions. No condition, no change. No calendar — thresholds (sketched as checkable data after the list).
- 100 unique scorers (distinct IPs/wallets submitting void index assessments) → Community advisory input on classification disputes (productive vs destructive). Founder retains final call.
- 1 independent replication of any core experiment by a non-author researcher → External reviewer appointed with public kill-condition authority. Changes governance geometry from single-point to two-point.
- Inter-rater kappa > 0.7 with 5+ non-author scorers → Community scoring accepted into the public database without founder review.
- 3 independent replications by separate teams → Framework stewardship committee (3 members: founder + 2 elected). Committee governs methodology changes. Founder retains veto on kill condition adjudication only.
- $50K annual service revenue → Public treasury dashboard with quarterly operational reporting.
- Peer review acceptance at a recognized venue → Full governance proposal published for community input.
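A minimal sketch of the trigger logic: conditions as data, and only metrics that clear their thresholds fire a transition. Metric names are illustrative, and the strict-vs-inclusive boundary on kappa is glossed here:

```python
# Hypothetical encoding of the governance triggers above.
TRANSITIONS = [
    ("unique_scorers", 100, "community advisory input on disputes"),
    ("independent_replications", 1, "external reviewer, kill-condition authority"),
    ("inter_rater_kappa", 0.7, "community scoring without founder review"),
    ("independent_replications", 3, "3-member stewardship committee"),
    ("annual_service_revenue_usd", 50_000, "public treasury dashboard"),
    ("peer_review_acceptances", 1, "full governance proposal published"),
]

def triggered(metrics: dict[str, float]) -> list[str]:
    """No condition, no change: list only transitions whose threshold is met."""
    return [change for metric, threshold, change in TRANSITIONS
            if metrics.get(metric, 0) >= threshold]

print(triggered({"unique_scorers": 142, "inter_rater_kappa": 0.72}))
# ['community advisory input on disputes',
#  'community scoring without founder review']
```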
Phase 3 — Scale
If the framework survives independent validation, build infrastructure for widespread use.
- Framework assistant — constrained AI chatbot with full void disclosure
- Automated scoring API — programmatic access to the diagnostic
- Continuous monitoring — track systems over time
- 3D risk map — interactive visualization of scored systems
- Preservation layer — papers on IPFS + Arweave, verifiable hashes (verification sketched after this list)
- Structured engagement study — opt-in measurement with human participants
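A minimal sketch of hash verification for the preservation layer: recompute a paper's SHA-256 locally and compare it to the published digest. The filename and digest below are hypothetical, and where the reference digest would be published is not specified in the roadmap:

```python
import hashlib

def verify_paper(path: str, published_sha256: str) -> bool:
    """Recompute a local file's SHA-256 and compare to the published digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == published_sha256.lower()

# Hypothetical filename and digest:
# verify_paper("paper1-v12.2.pdf",
#              "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b")
```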
Research — always running
The framework has 22 falsification conditions. Any one of them could kill it. We're looking.
- Kill condition monitoring — tracking all 22 conditions against new evidence
- Domain expansion — testing universality beyond 90 domains
- Productive void mechanics — when the same architecture produces discovery instead of harm
- Cross-domain thermodynamic measurements
- Epistemological implications — what the framework says about how we know things
What we won't build
We're keeping the site simple and transparent by design. No engagement tricks, no retention mechanics, no financial mechanisms that fail the four checks.
- User accounts or profiles — the tool works without them
- Algorithmic feeds or personalization — everyone sees the same content
- Engagement optimization — no dark patterns, no retention hooks
- Analytics beyond basic traffic counts — we don't optimize for your attention
- Yield or staking mechanics — compounding rewards have no designed end (fails termination check)
- Leaderboards or earning streaks — gamified contribution is observer-targeting (fails response check)
- Token marketing from treasury — steepens the attention gradient toward the money (fails attention check)
- Complex DeFi integrations — layered financial products add extractive opacity (fails opacity check)
Want to break the framework? Bounties pay 2x for counter-examples.
Track progress: github.com/AnthonE/morr-papers