How We Use AI

This isn't a disclaimer. It's a design pattern.

The Short Version

This project uses AI (Claude, Anthropic) as a core collaborator — for drafting papers, writing code, analyzing domains, and advising on decisions. A human (Anthony Eckert) maintains editorial authority over all claims, evidence evaluation, and strategic direction.

We're disclosing this because the framework we built says transparency is the first property of a valid constraint. If we hid how the work gets done, we'd be introducing opacity into a project about eliminating it. That would be a structural contradiction — and you could score us for it.

What AI Does Here

Task | Role | Human Authority
Paper drafting | AI drafts sections from outlines and evidence | All claims reviewed, evidence verified, structure decided by author
Domain analysis | AI applies the scoring framework to new domains | Author selects domains, validates scores, checks for bias
Site code | AI writes HTML, JS, CSS, backend routes | Author reviews, tests, deploys
Advisory council | Three AI advisors provide structured analysis (see below) | Author synthesizes; council informs, never decides
Experiment design | AI helps design protocols and analyze results | Author sets hypotheses, reviews methodology, interprets results

The rule: AI is the instrument, never the authority. The human decides what questions to ask, what evidence to trust, and what to publish. The AI extends capacity — it doesn't replace judgment.

The Advisory Council Pattern

We built something we think others should copy: an advisory council of three constrained AI advisors, each enforcing one property of the constraint specification.

Science Advisor

Enforces transparency. Demands evidence, base rates, falsification conditions. Challenges vague claims. Asks: "Show me the data. What's the denominator?"

Faith Advisor

Enforces invariance. Holds the reference points. Detects drift in principles over time. Asks: "Has this changed? Why? Is the change justified or is it decay?"

Economics Advisor

Enforces independence. Maps incentives and coupling. Follows the money. Asks: "Who benefits from this conclusion? Where's the conflict of interest?"
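To make the three lenses concrete, here is a minimal sketch of the advisors expressed as configuration data. The type and field names (AdvisorSpec, guidingQuestions, and so on) are illustrative assumptions, not the project's actual implementation.

```typescript
// Illustrative sketch only: the three advisors as typed configuration.
// Type and field names (AdvisorSpec, guidingQuestions, etc.) are hypothetical.
type ConstraintProperty = "transparency" | "invariance" | "independence";

interface AdvisorSpec {
  name: string;
  enforces: ConstraintProperty;
  mandate: string;            // what the advisor is required to do every round
  guidingQuestions: string[]; // the questions it must ask of any claim
}

const council: AdvisorSpec[] = [
  {
    name: "Science Advisor",
    enforces: "transparency",
    mandate: "Demand evidence, base rates, and falsification conditions; challenge vague claims.",
    guidingQuestions: ["Show me the data.", "What's the denominator?"],
  },
  {
    name: "Faith Advisor",
    enforces: "invariance",
    mandate: "Hold the reference points; detect drift in principles over time.",
    guidingQuestions: ["Has this changed? Why?", "Is the change justified or is it decay?"],
  },
  {
    name: "Economics Advisor",
    enforces: "independence",
    mandate: "Map incentives and coupling; follow the money.",
    guidingQuestions: ["Who benefits from this conclusion?", "Where's the conflict of interest?"],
  },
];
```

Keeping the advisors as data rather than loose prose is one way to make their mandates harder to quietly redefine between sessions.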

Key design decisions

  • Adversarial by design. Each advisor must challenge the other two before output is finalized (a sketch of this round appears after the list). Unanimous agreement triggers a skepticism check, not celebration.
  • Dissent is the signal. When one advisor resists a conclusion the other two support — and the pattern holds anyway — that's the strongest recommendation. The dissenter is the hostile witness.
  • No synthesis. The council produces three independent assessments plus cross-challenges. The human synthesizes. If the AI collapsed three perspectives into one recommendation, it would be doing the human's thinking — that's boundary erosion.
  • Observer protection. The council monitors whether the human is over-relying on it. If you stop challenging the council, the council challenges you.
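Under the same assumptions, one council round might look like the sketch below. The ask function stands in for whatever actually drives an advisor; the point is the shape of the protocol: two passes, mandatory cross-challenges, a skepticism flag on unanimity, a dissent flag, and no synthesis step.

```typescript
// Illustrative sketch of one council round. `ask` stands in for whatever call
// actually drives an advisor; names and shapes here are hypothetical.

interface Assessment {
  advisor: string;
  position: "support" | "oppose" | "uncertain";
  reasoning: string;
  challenges: string[]; // objections raised against the other two advisors
}

type Ask = (advisor: string, question: string, peers: Assessment[]) => Promise<Assessment>;

const ADVISORS = ["Science Advisor", "Faith Advisor", "Economics Advisor"];

async function runCouncilRound(question: string, ask: Ask) {
  // Pass 1: independent assessments, with no visibility into each other.
  const first = await Promise.all(ADVISORS.map((a) => ask(a, question, [])));

  // Pass 2: each advisor must challenge the other two before output is finalized.
  const assessments = await Promise.all(
    ADVISORS.map((a, i) => ask(a, question, first.filter((_, j) => j !== i)))
  );

  // Unanimous agreement triggers a skepticism check, not celebration.
  const unanimityCheck = new Set(assessments.map((a) => a.position)).size === 1;

  // A lone dissenter against the other two is flagged as the hostile witness.
  const dissent = assessments.filter(
    (a) => assessments.filter((b) => b.position === a.position).length === 1
  );

  // No synthesis: return all three assessments plus flags; the human decides.
  return { assessments, unanimityCheck, dissent };
}
```

The deliberate gap in this sketch is the missing "combine into one answer" step; leaving it out is what keeps synthesis with the human.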

Why This Is a Feature, Not a Confession

Most AI-assisted projects either hide their AI use or treat disclosure as damage control. We think that's backwards.

The void framework says that the difference between a destructive system and a productive one is geometry — transparency, invariance, independence. That applies to how you use AI, not just what AI does to you. If you constrain AI tools against the same specification you use to evaluate everything else, they become instruments, not oracles.

This project is evidence that the pattern works. Three papers, 90 domain analyses, 25 experiments, a scoring tool, and a 3D visualization — produced by one human with constrained AI collaborators. The constraint specification made the collaboration more productive, not less. The adversarial council caught errors that a single-perspective AI would have reinforced.

The Pattern (Copy This)

If you're using AI for research, decision-making, or creative work, here's the architecture we'd recommend:

  1. Define your constraint specification. What are your reference points? What doesn't change based on what the AI says? Write it down. Give it to the AI at the start of every session.
  2. Use multiple advisors, not one oracle. A single AI perspective is a single point of failure. Three advisors with different domain lenses and required cross-challenges produce better signal than one brilliant answer.
  3. Make dissent structural. Don't let advisors agree easily. Require challenges. Weight disagreement higher than consensus. Unanimous agreement is suspicious, not reassuring.
  4. Keep the human in the synthesis role. The AI informs. You decide. If you find yourself just accepting AI recommendations without challenge, that's drift — and your system should flag it.
  5. Monitor your own engagement. Track how often you override the council vs. accept it. Track whether your questions are getting lazier. Build the observer protection into the system, as sketched after this list.
  6. Disclose everything. If you can't explain how a conclusion was reached — including which parts were AI-generated — you've introduced opacity. The whole point is to eliminate that.
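As a rough illustration of steps 1 and 5, here is a sketch of a written constraint specification handed to the AI at the start of each session, together with a simple override log backing the observer-protection check. Every name here (CONSTRAINT_SPEC, recordDecision, overRelianceWarning, the 5% threshold) is a hypothetical placeholder, not the project's code.

```typescript
// Step 1: a written constraint specification, given to the AI at the start
// of every session. Wording below is a placeholder, not the project's spec.
const CONSTRAINT_SPEC = `
Reference points (do not change based on AI output):
- Transparency: every claim carries its evidence and its denominator.
- Invariance: kill conditions and reference points are pre-registered and fixed.
- Independence: every conclusion must be verifiable without the AI that produced it.
`;

// Step 5: track how often the human overrides the council vs. accepts it.
type Verdict = "accepted" | "overridden" | "deferred";

interface DecisionRecord {
  question: string;
  verdict: Verdict;
  timestamp: Date;
}

const decisionLog: DecisionRecord[] = [];

function recordDecision(question: string, verdict: Verdict): void {
  decisionLog.push({ question, verdict, timestamp: new Date() });
}

// Observer protection: if the human stops overriding the council entirely,
// flag possible drift toward treating the council as an oracle.
function overRelianceWarning(windowSize = 20, threshold = 0.05): boolean {
  const recent = decisionLog.slice(-windowSize);
  if (recent.length < windowSize) return false; // not enough data yet
  const overrides = recent.filter((d) => d.verdict === "overridden").length;
  return overrides / recent.length < threshold;
}
```

The exact window and threshold matter less than the fact that the check exists and gets reviewed.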

This isn't just how we built the project. It's the framework applied to itself. Score it. If the pattern breaks, that's a finding. Tell us on the bounty board.

Self-Score

We score the AI collaboration against the same constraint specification we apply to everything else:

Property | Status | Evidence
Transparent | Yes | This page exists. AI use disclosed. Advisory pattern published. Papers on GitHub.
Invariant | Yes | Constraint specification doesn't change based on AI output. Kill conditions are pre-registered. Reference points are fixed.
Independent | Yes | You can verify every claim without AI. Clone the repo. Run the experiments. The research stands without the tools that built it.

If any of these stop being true, the constraint has failed. Score us. That's the point.