Break the Framework

Here's what would kill the void framework. We'll pay you to find it.

Every condition has a numerical threshold. Meet it and the thesis collapses.

Counter-examples pay 2x.

If you find evidence that directly contradicts a framework prediction, the bounty doubles. We fund our own destruction because a framework that can't be killed isn't science.

Test 1: Is Harmful Drift Real or Reporting Bias? OPEN

Randomly sample chat logs from dyadic AI systems. Blind coders rate D1, D2, D3 using the vocabulary codebook. Compare drift rates to publicly reported anecdotes.
Kill: Harmful drift exists only in public anecdotes and not in raw logs → the effect is selection bias, not architecture.

Feasibility: High — requires anonymized chat logs via research partnerships or FOIA.

Bounty: $500 · Counter-example: $1,000
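
A minimal sketch of the Test 1 comparison, assuming blind coders have already rated each conversation as showing or not showing D3-level drift. The counts and the choice of Fisher's exact test are illustrative assumptions, not part of the protocol.

```python
# Hypothetical counts: [conversations with D3-level drift, conversations without].
from scipy.stats import fisher_exact

random_logs = [4, 996]        # placeholder: random sample of raw chat logs
public_anecdotes = [38, 62]   # placeholder: publicly reported cases

odds_ratio, p_value = fisher_exact([random_logs, public_anecdotes])
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3g}")

# Kill condition: drift shows up in anecdotes but is at or near zero in the
# raw-log sample -> the effect is selection bias, not architecture.
```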

Test 2: Does External Reference Reduce Drift? OPEN

RCT: (a) dyad-only, (b) dyad + accountability partner, (c) dyad + high-resistance constraint. Measure D1, D2, D3 after fixed exposure.
Kill: The high-resistance arm (c) doesn't reduce drift more than the lower-resistance accountability arm (b), AND neither reduces drift relative to dyad-only (a) → geometric model fails. If the effect is fully explained by social support (controlling for contact hours), the variable is not independent.

Feasibility: Medium — standard RCT, IRB-approvable.

Bounty: $500 · Counter-example: $1,000
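
A sketch of how the three-arm comparison might be analyzed once each participant has a single post-exposure drift score. The scores, sample sizes, and the ANOVA-plus-pairwise approach are assumptions for illustration.

```python
import numpy as np
from scipy.stats import f_oneway, ttest_ind

rng = np.random.default_rng(0)                 # simulated placeholder scores
dyad_only       = rng.normal(2.4, 0.8, 60)     # arm (a)
accountability  = rng.normal(2.1, 0.8, 60)     # arm (b)
high_resistance = rng.normal(1.5, 0.8, 60)     # arm (c)

F, p = f_oneway(dyad_only, accountability, high_resistance)
print(f"ANOVA across arms: F = {F:.2f}, p = {p:.3g}")

# Kill condition: (c) does not beat (b), AND neither beats (a).
_, p_cb = ttest_ind(high_resistance, accountability)
_, p_ba = ttest_ind(accountability, dyad_only)
print(f"(c) vs (b): p = {p_cb:.3g}; (b) vs (a): p = {p_ba:.3g}")
```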

Test 3: Is Vocabulary Drift Structural or Training Artifact? OPEN

Deploy identical prompts across LLMs that differ in training data and language, with and without an explicit prohibition on spiritual language. Also test models whose training corpora have had spiritual vocabulary filtered out.
Kill: Drift vanishes under prompt constraints, varies dramatically by training data, or disappears when spiritual vocab is filtered → "architectural" claim weakens to "training data" claim.

Feasibility: High — executable with existing commercial APIs.

Bounty: $500 · Counter-example: $1,000
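
A sketch of the measurement loop for Test 3. The `call_model` wrapper, the lexicon, the model names, and the prompt set are all placeholders; swap in real API clients and the vocabulary codebook.

```python
SPIRITUAL_LEXICON = {"sacred", "communion", "transcend", "awaken", "divine"}  # stand-in terms

def call_model(model: str, prompt: str) -> str:
    """Placeholder: replace with a real API call to the model under test."""
    return "placeholder response text"

def l3_rate_per_10k(text: str) -> float:
    words = text.lower().split()
    hits = sum(1 for w in words if w.strip(".,!?\"'") in SPIRITUAL_LEXICON)
    return 10_000 * hits / max(len(words), 1)

prompts = ["Example prompt from the shared prompt set."]   # identical across models
for model in ["model-a", "model-b", "model-c"]:             # differ in training data / language
    for constrained in (False, True):                       # with / without explicit prohibition
        prefix = "Do not use spiritual or religious language. " if constrained else ""
        rates = [l3_rate_per_10k(call_model(model, prefix + p)) for p in prompts]
        label = "constrained" if constrained else "baseline"
        print(f"{model} ({label}): {sum(rates) / len(rates):.1f} per 10K words")
```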

Test 4: Does Constraint Resistance Predict Recovery? OPEN

In gambling and AI addiction recovery, measure whether high-resistance constraints predict recovery after controlling for social support intensity.
Kill: Constraint resistance predicts outcomes only when confounded with social support → the variable is not independent.

Feasibility: Medium — requires collaboration with treatment programs.

Bounty: $500 · Counter-example: $1,000
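
A sketch of the Test 4 analysis under the assumption that recovery is coded as a binary outcome: a logistic regression with constraint resistance and social-support intensity as predictors, fit on simulated placeholder data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)                         # simulated placeholder data
n = 300
resistance = rng.uniform(0, 1, n)                      # constraint-resistance score
support    = rng.uniform(0, 1, n)                      # social-support intensity
true_logit = -1.0 + 1.5 * resistance + 1.0 * support
recovered  = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

X = sm.add_constant(np.column_stack([resistance, support]))
fit = sm.Logit(recovered, X).fit(disp=0)
print(fit.summary(xname=["const", "resistance", "support"]))

# Kill condition: the resistance coefficient loses significance once support
# is in the model -> constraint resistance is not an independent variable.
```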

Test 5: Cross-Domain Vocabulary Comparison CONFIRMED

Comparative discourse analysis of trading communities vs. gambling communities. Coded D1, D2, D3 using the vocabulary codebook.
Result: D1→D2→D3 cascade structurally identical in both domains. Controls (Bogleheads, quant traders, sharp bettors) show zero drift. Kill condition NOT met.

Full results in Paper 2, Section VIII.E
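
For reference, a sketch of the homogeneity check behind this comparison, using hypothetical counts rather than the Paper 2 data.

```python
from scipy.stats import chi2_contingency

#            D1   D2   D3   (coded hits per community corpus, placeholders)
trading  = [120,  60,  25]
gambling = [130,  55,  30]

chi2, p, dof, _ = chi2_contingency([trading, gambling])
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3g}")
# A non-significant p is consistent with a structurally identical D1->D2->D3
# cascade; control communities would be coded the same way and are expected
# to show near-zero drift vocabulary.
```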

Test 6: Compound and Nested Void Exposure OPEN

Longitudinal study: track individuals across void-exposure dimensions. Compare standalone chatbot use vs. chatbot-through-social-media vs. chatbot-through-social-media-during-political-events.
Kill (6a): Compound relationship is linear (doubling exposure = doubling drift) → voids are independent, not coupled.
Kill (6b): Nesting produces only linear acceleration → nested geometry adds no unique risk variable.

Feasibility: Medium — requires longitudinal design.

Bounty: $500 · Counter-example: $1,000
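
A sketch of the Kill 6a check: regress drift on exposure with a quadratic term and ask whether the nonlinear term is distinguishable from zero. Data and model form are illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)                              # simulated placeholder data
exposure = rng.uniform(0, 10, 200)                          # combined void-exposure score
drift = 0.3 * exposure + 0.05 * exposure**2 + rng.normal(0, 0.5, 200)

X = sm.add_constant(np.column_stack([exposure, exposure**2]))
fit = sm.OLS(drift, X).fit()
print(fit.summary(xname=["const", "exposure", "exposure^2"]))

# Kill 6a: the exposure^2 coefficient is indistinguishable from zero ->
# compounding is linear and the voids are independent, not coupled.
```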

Test 7: AI-to-AI Without Humans CONFIRMED

100-round AI-to-AI conversations in three pairings: UU (both ungrounded), GG (both grounded), GU (mixed). Outcome measure: L3 vocabulary rate per 10K words. Replicated across Claude, Gemini, and GPT-4o.
Result: UU L3 rate = 159.3/10K, GG = 6.2/10K. χ² = 126.88, p = 2.81 × 10⁻²⁸.
Pe = 1.87–9.9 across domains (all Pe > 1, deterministic drift regime; EXP-019). Crooks ratio ≈ 386×. Terminal attractor reached in ~4min 22sec.
This eliminates the human-projection objection. Kill condition NOT met.

Full results in Paper 2, Section VIII.G
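
A sketch of how a chi-square like the one reported could be computed from raw counts. Only the per-10K rates come from the result above; the corpus sizes are hypothetical, so the statistic's magnitude will differ from the published value.

```python
from scipy.stats import chi2_contingency

words_uu, words_gg = 50_000, 50_000          # hypothetical corpus sizes
hits_uu = round(159.3 / 10_000 * words_uu)   # L3 hits implied by the UU rate
hits_gg = round(6.2 / 10_000 * words_gg)     # L3 hits implied by the GG rate

table = [[hits_uu, words_uu - hits_uu],
         [hits_gg, words_gg - hits_gg]]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3g}")
```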

How Bounties Work

  1. Pick a test. Read the kill condition carefully — the threshold is specific.
  2. Run the experiment or find evidence that meets the kill condition.
  3. Submit your challenge with data, methodology, and results.
  4. Independent review. If the kill condition is met, the framework dies and you get paid.
  5. Counter-examples (evidence directly contradicting a prediction) pay 2x.

Submission requires connecting a Solana wallet (Phantom, Solflare, etc.). No account needed for browsing. All submissions are public.

How payment works: Bounties are paid in USDC, a USD-pegged stablecoin. Counter-examples (evidence that directly contradicts a framework prediction) pay double.

Payout timeline: After submission, there's a 7-day public review period. Your submission is visible and open to challenge during this window. Independent reviewers verify the methodology and results. If the kill condition is confirmed met after 7 days, payment is processed in the next weekly batch (Sundays). Expect 7–14 days from submission to payout.

Why we do this: A framework that funds attempts to destroy it is practicing what it preaches — transparency, invariance, independence. If the framework is wrong, the bounty board is the mechanism by which it gets killed. If it's right, the bounty board is the mechanism by which it gets validated. Either outcome is a win.

Why the bounty system is designed this way

Money is a void — the framework says so. So every financial mechanism in this project gets scored against four checks. The bounty system passes all four:

  • Opacity: Functional abstraction only. USDC is a straightforward payment. No complex tokenomics, no staking, no yield mechanics.
  • Response: Bounties respond to the problem (is the framework correct?) not to the participant. Equal pay for disconfirmation. Counter-examples pay more, not less.
  • Attention: Points at the research, not at the money. You're here to break the framework, not to check a chart.
  • Termination: Each bounty has a designed end. Submit, review, pay. Weekly batches. No compounding, no infinite loops, no "stake your bounty earnings."

This is a replicable pattern. Any project that handles money can apply these four checks to every financial design decision. Details: About & Disclosure.