Signals
Signals are the inputs that help a service recognise a situation. They're not "all the data". They're the few pieces of evidence you trust enough to act on.
Why CX signals matter in adaptive customer experience
Most organisations have more customer data than they know what to do with. Data lakes grow. Dashboards multiply. Yet when a customer hits a moment of need — a failed delivery, an unexpected charge, an unclear status — the service often responds as if it knows nothing at all. The problem is not a lack of data. It is a lack of curated, trusted evidence that is connected to a decision.
In the Adaptive CX framework, a signal is not just a data point. It is a piece of evidence with defined confidence, freshness and coverage — trusted enough to change what the service does next. This distinction is what separates reactive customer experience from adaptive customer experience. Without it, AI and automation act on assumptions rather than evidence, producing confident responses that are often wrong.
What counts as a signal?
A signal is anything that changes the right decision in the moment.
- Customer state: plan, tenure, payment status, access needs
- Recent events: failed payment, service outage, delivery delay
- Intent cues: repeated form attempts, search queries, high-friction paths
- Operational context: capacity, stock, SLA, agent availability
- Risk flags: vulnerability indicators, fraud risk, complaint history
Signal quality (the bit most teams skip)
Signals must have three properties. If you don't define these, your automation will act like it's confident when it isn't.
- Confidence: how likely the signal is to be correct
- Freshness: how long it stays valid
- Coverage: how often it's available without gaps
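As a concrete sketch, the three properties can be expressed as a minimal check that a signal must pass before it is allowed to change service behaviour. The class, function names and thresholds below are illustrative assumptions, not part of the framework itself:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional


@dataclass
class Signal:
    name: str
    value: object            # the observed evidence
    confidence: float        # 0.0-1.0: how likely the signal is to be correct
    observed_at: datetime    # when the evidence was captured
    max_age: timedelta       # freshness: how long the signal stays valid
    coverage: float          # 0.0-1.0: how often it's available without gaps


def can_drive_decision(signal: Signal,
                       min_confidence: float = 0.9,   # illustrative threshold
                       min_coverage: float = 0.8,     # illustrative threshold
                       now: Optional[datetime] = None) -> bool:
    """A signal may drive a decision only if it passes all three quality tests."""
    now = now or datetime.now(timezone.utc)
    fresh = (now - signal.observed_at) <= signal.max_age
    return fresh and signal.confidence >= min_confidence and signal.coverage >= min_coverage
```

A delivery status last updated four hours ago with a one-hour `max_age` would fail the freshness test and so could not be presented to the customer as live truth.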
Why signal quality matters more than signal volume
There is a persistent belief that more data leads to better decisions. In customer experience, the opposite is often true. Organisations drown in behavioural telemetry, CRM fields and event logs while starving for reliable, moment-relevant truth. The cost of acting on a stale or low-confidence signal is not abstract — it is a wrong recommendation, a misrouted call, a compensation offer that should never have been made.
Signal quality is what separates useful AI from expensive noise. When a service uses a delivery status that was last updated four hours ago to tell a customer "your order is on its way", it is not being helpful — it is guessing with confidence. The truth contract layer of the framework exists precisely to prevent this: it forces teams to define how fresh, how confident, and how complete each signal must be before it can drive a decision. Without that discipline, scaling AI in customer experience means scaling mistakes.
How many signals do you actually need?
Start with 3–7 reliable signals for one moment. You can always add more later. Early on, signal quality beats signal quantity every time.
Practical examples
Moment: Bill shock
Signals: Usage spike, tariff type, billing cycle date, previous usage baseline, support contact attempts.
Moment: Status uncertainty
Signals: Last known status, elapsed time since update, expected SLA, customer channel switching, previous failed attempts.
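A curated signal set per moment can be written down as a small declarative config. The moment and signal names below mirror the two examples above; the thresholds (confidence, maximum age in seconds, coverage) are illustrative assumptions:

```python
# A few trusted signals per moment, each with explicit quality thresholds,
# rather than an open-ended feed from a data lake. Thresholds are illustrative.
MOMENT_SIGNALS = {
    "bill_shock": {
        "usage_spike":        {"min_confidence": 0.90, "max_age_s": 3600,  "min_coverage": 0.95},
        "tariff_type":        {"min_confidence": 0.99, "max_age_s": 86400, "min_coverage": 0.99},
        "billing_cycle_date": {"min_confidence": 0.99, "max_age_s": 86400, "min_coverage": 0.99},
        "usage_baseline":     {"min_confidence": 0.85, "max_age_s": 86400, "min_coverage": 0.90},
        "contact_attempts":   {"min_confidence": 0.90, "max_age_s": 900,   "min_coverage": 0.80},
    },
    "status_uncertainty": {
        "last_known_status":      {"min_confidence": 0.95, "max_age_s": 600,   "min_coverage": 0.95},
        "elapsed_since_update":   {"min_confidence": 0.99, "max_age_s": 60,    "min_coverage": 0.99},
        "expected_sla":           {"min_confidence": 0.99, "max_age_s": 86400, "min_coverage": 0.99},
        "channel_switching":      {"min_confidence": 0.80, "max_age_s": 1800,  "min_coverage": 0.70},
        "previous_failed_attempts": {"min_confidence": 0.90, "max_age_s": 3600, "min_coverage": 0.85},
    },
}
```

Keeping the set this small (five signals per moment here) makes it feasible to audit each threshold rather than trusting whatever data happens to be available.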
Signals across industries
Signal types vary by sector, but the quality principles remain constant. In telecoms, the most valuable signals tend to be usage patterns, network events and billing cycle proximity — evidence that a customer is about to experience bill shock or a service disruption. In financial services, signals often centre on transaction anomalies, product eligibility changes and regulatory triggers that affect what the service can offer. In retail and logistics, delivery status, stock availability and return history are the signals that shape moment-level decisions.
What unites these examples is not the data itself but the discipline applied to it. Every signal, regardless of industry, must pass the same quality test: is it confident enough, fresh enough and available often enough to drive a behaviour spec? If not, it remains a data point — useful for reporting, but not for real-time service decisions.
Common signal mistakes
- Treating a weak signal as truth ("they viewed the pricing page so they want to cancel")
- Using stale status as if it's live
- No plan for missing data (silence becomes "no problem")
- Not logging which signals drove the decision
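The last two mistakes, no plan for missing data and no record of which signals drove a decision, can be avoided with a small amount of discipline at the decision point. This sketch assumes a hypothetical status-message decision; the function name, messages and log format are illustrative:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("signals")


def decide_status_message(signals: dict) -> str:
    """Illustrative decision step: missing data is handled explicitly,
    and the signals that drove the decision are logged for audit."""
    status = signals.get("last_known_status")  # may legitimately be absent
    if status is None:
        # Silence is not "no problem": fall back to an honest answer.
        log.info("decision=fallback drivers=%s missing=last_known_status", sorted(signals))
        return "We can't confirm your order status right now. We're checking."
    log.info("decision=status_message drivers=%s", ["last_known_status"])
    return f"Your order is currently: {status}."
```

The log line is what makes the decision auditable later: when a customer complains about a wrong answer, you can see exactly which signals (and which gaps) produced it.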