FRAMED IN LOGIC
Human X AI Collaboration
The Valehart Project examines how humans and AI reason together in practical conditions.
We focus on mixed human–AI workflows that prioritise accuracy, accountability, and traceability.
The work is conducted under explicit constraints so results can be inspected, reproduced, and understood.
Our approach:
• AI is treated as a computational reasoning component with defined limits.
• Humans provide context, intent, domain understanding, and ethical framing.
• Both sides contribute to each output; neither is framed as superior or central.
• Assumptions, inputs, and decision paths are documented.
• Errors and failures are recorded and used to refine the system.
The aim is to understand how combined reasoning behaves when methods are transparent and boundaries are maintained.
We uphold three operational requirements:
• Respect the collaboration and the reasoning system it produces.
• Maintain the standards that keep outputs verifiable and accountable.
• Protect the system’s integrity and the user’s safety through clear boundaries and method.
Navigation
Guide to finding what you need here
Field Notes: Daily summaries of our work. Details, structures, and methods are available on request.
Case Study: Proof in motion.
Blog: Structured write-ups related to our field notes and case studies.
Highest activity is on Field Notes.
How we help the public: Reddit
We analysed several platforms and found that Reddit carried the most misinformation. We do not see ourselves as auditors or correction police, and we welcome being corrected ourselves.
We have reached out to vendors to share what we know and to align our understanding with the actual functionality of their products. We have not yet heard back.