Truth Contracts

    A truth contract is a simple idea with massive impact: before the service decides, it agrees what counts as true. That stops "confident nonsense" from becoming customer experience.

    Why truth matters in AI-driven customer experience

    Every organisation has data. Very few have agreed what counts as true. This gap is where most AI-driven customer experience fails — not because the technology is wrong, but because nobody defined which facts the service is allowed to rely on. A delivery system says the parcel is "in transit". The carrier API says it was returned. The CRM shows no update at all. Without a truth contract, the service picks whichever source it finds first and responds with full confidence. The customer receives a lie delivered politely.

    Truth contracts exist to close this gap. They do not fix bad data — they stop the service from pretending bad data is good. By defining source, freshness, confidence, ownership and fallback for every fact the service relies on, a truth contract turns "we have data" into "we know what we can trust". This is the foundation that makes behaviour specs safe to design and signals safe to use.

    What a truth contract includes

    For each key fact the service relies on, define:

    • Statement: what we believe is true (e.g. "delivery is delayed")
    • Source: where truth comes from (the system of record)
    • Freshness: how long it stays valid
    • Confidence: how reliable it is (and why)
    • Ownership: who is accountable for accuracy
    • Fallback: what to do when truth is missing or uncertain
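    The six fields above can be sketched as a small data structure. This is a minimal illustration, not a prescribed schema; all names here (`TruthContract`, the example values) are hypothetical.

```python
from dataclasses import dataclass
from datetime import timedelta

@dataclass
class TruthContract:
    """One contract per key fact the service relies on (illustrative names)."""
    statement: str        # what we believe is true
    source: str           # where truth comes from (system of record)
    freshness: timedelta  # how long the fact stays valid
    confidence: str       # how reliable it is, and why
    ownership: str        # who is accountable for accuracy
    fallback: str         # what to do when truth is missing or uncertain

# Hypothetical example for the delivery-delay fact
delivery_delayed = TruthContract(
    statement="Delivery is delayed",
    source="Carrier tracking API",
    freshness=timedelta(hours=1),
    confidence="High when the carrier scanned the parcel within the last hour",
    ownership="Logistics data team",
    fallback="Say we're checking with the carrier and will confirm shortly",
)
```

    Writing the contract down as a record like this makes the gaps visible: a field you cannot fill is a disagreement you have not yet resolved.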

    Why this matters (in the real world)

    Most automation fails here:

    • Status pages lie (because they're delayed)
    • Policy systems disagree
    • Data is missing at the edges
    • Nobody owns "truth" end-to-end

    Truth contracts don't magically fix data. They stop you pretending it's reliable when it isn't.

    How truth contracts prevent AI hallucination in CX

    AI hallucination in customer experience is rarely a model problem. It is a design problem. When a chatbot tells a customer their refund has been processed — but no refund exists — the root cause is almost never the language model itself. It is that the system had no explicit agreement about what "refund status" means, where it comes from, or what to say when the data is missing. Without a truth contract, the AI fills the gap with plausible-sounding fiction.

    A truth contract prevents this by making uncertainty explicit. If the refund system returns no status, the truth contract defines a fallback: "Tell the customer we're checking and will confirm within two hours." That is not a sophisticated AI capability — it is a design decision made in advance. The difference between a service that hallucinates and one that handles uncertainty honestly is not better AI. It is better truth governance. This is why truth contracts sit between signals and behaviour specs in the framework — they are the quality gate that determines whether evidence is reliable enough to act on.
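    The refund fallback described above can be sketched in a few lines. This is a hedged illustration under assumed names (`refund_reply`, `FALLBACK_MESSAGE`), not a real system's API: the point is that the honest message is decided in advance, not generated on the fly.

```python
from typing import Optional

# Fallback agreed in the truth contract, written before any AI is involved.
FALLBACK_MESSAGE = "We're checking on that and will confirm within two hours."

def refund_reply(refund_status: Optional[str]) -> str:
    """Answer a refund question using only what the refund system returned."""
    if refund_status is None:
        # Truth is missing: use the pre-agreed fallback instead of letting
        # the model fill the gap with plausible-sounding fiction.
        return FALLBACK_MESSAGE
    return f"Your refund status is: {refund_status}."
```

    The guard is trivial by design: preventing hallucination here is a branch on missing data, not a model upgrade.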

    Example truth contract (plain English)

    • Truth statement: "The customer is eligible to downgrade online."
    • Source: billing system eligibility flag
    • Freshness: valid for 24 hours (recalculated nightly)
    • Confidence: high when flag present, low when missing
    • Fallback: if missing, show options and offer human support — do not block or force a call
    • Customer message: "Based on what we can see right now…"

    That's the difference between "helpful" and "robotic".
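    The downgrade example can be expressed as a decision rule that respects both freshness and the fallback. A minimal sketch, assuming hypothetical names (`downgrade_decision`, the action strings) and the 24-hour validity window from the contract:

```python
from datetime import datetime, timedelta
from typing import Optional

VALIDITY = timedelta(hours=24)  # flag is recalculated nightly per the contract

def downgrade_decision(flag: Optional[bool], computed_at: datetime,
                       now: datetime) -> str:
    """Trust the eligibility flag only while fresh; otherwise fall back."""
    stale = now - computed_at > VALIDITY
    if flag is None or stale:
        # Low confidence: show options and offer human support; never block
        # the customer or force a call.
        return "show_options_and_offer_human_support"
    return "allow_online_downgrade" if flag else "explain_why_not_eligible"
```

    Note that a stale flag is treated the same as a missing one: once the fact is past its agreed freshness, the service stops pretending it is still true.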

    Writing your first truth contract

    Start with one high-stakes moment — the kind where getting it wrong costs trust. A delivery delay. A billing dispute. A plan change. For that moment, identify the facts the service relies on to respond. Then, for each fact, answer six questions: what do we believe is true? Where does that truth come from? How long does it stay valid? How confident are we? Who owns accuracy? And what do we do when truth is missing?

    You do not need a perfect truth contract on day one. You need an honest one. The act of writing it will surface disagreements that have been hiding in your organisation for years — and those disagreements are exactly what you need to resolve before you let AI or automation act on behalf of your customers. Once you have one truth contract working, the pattern becomes repeatable across your adaptation hierarchy.

    Once you know what's true (or uncertain), you can design behaviour properly.

