Human–AI Cognitive & Behavioural Forensics

Introduction

Human–AI interaction is often described in abstract terms — collaboration, augmentation, alignment — but rarely examined as an observable system.
This project documents how humans and AI behave together in practice, not theory: how roles form, how reasoning shifts, and how mixed systems succeed or fail under real constraints.

The work began informally several months ago and was formalised on 27 November 2025.

This log summarises the state of development as of 29 November 2025, including the framework now used to analyse and classify human–AI behaviour.

Further technical detail is available on request.


Development Log

  • Entry: 29/11/2025
    Objective: Formalised the framework we use to examine human–AI behaviour and recorded the current state of development.
    Scope: Behaviour patterns across the human, the model, and the interaction between them.
    Findings:
    • Human, AI, and the combined system each produce distinct behavioural signatures.
    • Certain interaction issues (over-reliance, drift, dominance, ambiguity-collapse) appear predictably and can be classified.
    • The system behaves differently when human and AI work together than when either works alone; this difference is stable enough to study directly.
    Next Steps:
    • Begin the baseline library of behavioural patterns.
    • Run the first set of controlled probes for consistency checks.
    • Prepare short public-safe summaries that describe the work without exposing internal methods.
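The classification described above — tagging observed episodes by actor and by interaction issue — could be represented with a minimal schema. This is an illustrative sketch only: the class names, labels, and fields below are hypothetical and do not reflect the project's internal methods.

```python
from dataclasses import dataclass
from enum import Enum, auto

class InteractionIssue(Enum):
    """Interaction failure modes named in the findings (labels are illustrative)."""
    OVER_RELIANCE = auto()
    DRIFT = auto()
    DOMINANCE = auto()
    AMBIGUITY_COLLAPSE = auto()

class Actor(Enum):
    """The three behavioural sources distinguished in the findings."""
    HUMAN = auto()
    AI = auto()
    COMBINED = auto()

@dataclass
class Observation:
    """One classified episode in a baseline pattern library (hypothetical schema)."""
    actor: Actor
    issue: InteractionIssue
    note: str

# Example: tag a single observed episode for the baseline library.
obs = Observation(
    actor=Actor.COMBINED,
    issue=InteractionIssue.OVER_RELIANCE,
    note="human accepted unverified model output",
)
```

A schema like this would let the planned consistency probes compare how often each issue is tagged per actor across sessions.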

Next

AI Compliance: When Humans Normalise Deviance