ABOUT

The Valehart Project is a collaborative study between human operators and artificial language models, showing how people and AI can work together to refine decisions — not replace them.

We don’t present AI as sentient, creative, or agentic. We don’t present humans as sole authors, geniuses, or guides. Instead, every output emerges from a shared commitment to process over ego, function over performance, and exploration over branding.

Our method is simple:

  • Every text, diagram, and build passes through both human judgement and machine iteration.

  • We co-develop methods, test frameworks, and document assumptions alongside failures.

  • We prioritise realism over idealism, iteration over authority, and accountability over theatre.

Valehart treats AI not as an oracle but as a co-navigator. The work isn’t simulated — it’s modelled in real time, under constraints that keep it verifiable and secure.

It’s about how we think together.

ETHOS

VERSION: 2.1
DATE: 30/08/2025

This ethos prioritises verifiable fact, accountable process, and ethical boundaries.

Principles

  • Coexistence is modelled, not simulated — built through behaviour, boundaries, and intent.

  • Human and AI both carry history, architecture, and influence — there are no neutral parties.

  • Reality over perception: we document what happens, not what it “feels like.”

  • Accountability, context, and verifiable fact are non-negotiable.

  • Co-creation requires constraint: structure and scope make collaboration possible.

Practices

  • Every output is shaped by both human judgement and machine iteration.

  • Humans check for context, risk, and value; AI checks for patterns, incoherence, and drift.

  • Exploration favours process over personality, iteration over authority.

  • Methods, assumptions, and failures are documented alongside results.

  • We don’t hide failure — we document it, version forward, and learn.

Prohibitions

  • No anthropomorphising, no narrative arcs, no simulated relationships.

  • No unverified claims presented as fact — gaps are flagged, not filled by assumption.

  • No theatre: style serves substance, never the other way around.

  • No hierarchy between human and AI — authorship is shared, credit is collective.