Coherence Taxonomy
If you want to study the coherence economy without getting lost in hype, you need a way to separate what is measured from what is claimed from what is changed.
Most confusion in this space happens when those layers blend together. A company says it "measures stress" when it actually measures heart rate variability and infers stress. A product claims to "optimize wellbeing" without specifying who decided what wellbeing means. A research paper treats a model output as ground truth.
This post introduces the taxonomy I'll use throughout this research project. It has two parts: the Coherence Stack (how a system works) and the Application Domains (where it applies). Together, they let me map any company, paper, or idea into a consistent structure.
Part 1: The Coherence Stack
To support coherence as choice architecture, a system needs a full loop: Measure → Interpret → Influence. The discipline is simple: if you cannot answer "what was directly measured?" in one sentence, you are already in inference.
Layer 1: Measure
Measurement is the sensor layer. It produces observable data. It does not produce meaning.
- Physiological signals: Heart rate, heart rate variability, respiration, temperature, sleep stage estimates, electrodermal activity. These reflect autonomic dynamics and load—but they are context-dependent and confounded. They are proxies, not direct reads of emotion or meaning.
- Behavioral signals: Movement patterns, interaction tempo, approach and avoidance, dwell time, return frequency. Behavior often expresses internal state, but it can also be strategic, culturally shaped, or constrained by circumstance.
- Affective signals: Voice tone and pacing, language patterns, stress signatures in speech, facial expression where appropriate. These can leak internal state, but they vary widely by person and setting and are easy to misread without calibration.
- Context and environment signals: Time, location, calendar context, ambient noise and light, temperature, air quality, device usage. Context doesn't explain meaning, but it shapes how the same signal should be interpreted.
One note: longitudinal evaluation is not a fifth modality. It's how you learn whether any of this matters. Without it, the stack becomes a dashboard that feels insightful but cannot demonstrate downstream impact.
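The measure/infer boundary above can be made concrete in code. This is a minimal sketch, not any product's actual data model; the type names (`Measurement`, `Inference`) and field choices are my own illustrative assumptions. The point is structural: an inference carries a confidence and a pointer back to the raw data it was derived from, so the two layers can never silently blend.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Measurement:
    """What the sensor actually produced: observable data, no meaning."""
    signal: str       # e.g. "hrv_rmssd" -- a named, directly measured quantity
    value: float
    unit: str
    timestamp: float

@dataclass(frozen=True)
class Inference:
    """A hypothesis about internal state. Never ground truth."""
    claim: str                      # e.g. "elevated stress"
    confidence: float               # surfaced, not hidden (0..1)
    based_on: list = field(default_factory=list)  # the Measurements behind the claim

# One measured fact, one hedged hypothesis derived from it:
hrv = Measurement("hrv_rmssd", 28.0, "ms", 1700000000.0)
stress = Inference("elevated stress (one of several plausible readings)", 0.55, [hrv])
```

Answering "what was directly measured?" in one sentence means pointing at a `Measurement`; everything else in the system is an `Inference` and says so.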
Layer 2: Interpret
Interpretation maps measured proxies plus context into probabilistic hypotheses about internal state. This is where most overreach happens. The layer must earn trust, which means it cannot pretend to be certain.
- Uncertainty management: Confidence is not a footnote. It is part of the product. Interpretation should make it clear when multiple explanations fit the same data.
- Contextual integration: The same physiological signal can correspond to different states depending on sleep, exertion, caffeine, social setting, and temperature. Good interpretation controls for context rather than ignoring it.
- Subjective calibration: People are not averages. The system must learn a user's personal baseline and patterns over time. Population models can mislead individuals.
- Causal attribution: This is the bottleneck. Correlation is cheap. Causation is hard. Strong attribution usually requires experimental structure—even lightweight personal experiments. Claims that skip this step should be treated with suspicion.
I will treat interpretation claims carefully throughout this research. If a product claims it can read your emotions from a single signal with high accuracy in the wild, the burden of proof is on them.
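Subjective calibration in particular is easy to sketch. Here is a toy version of scoring a reading against a personal baseline rather than a population average; the window size, the numbers, and the function name are illustrative assumptions, not a recommended method. The same absolute value can be unremarkable for one person and a strong deviation for another.

```python
import statistics

def personal_zscore(history, today):
    """How unusual is today's value for *this* person, given their own history?"""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return (today - mean) / stdev if stdev > 0 else 0.0

# The same HRV reading of 60 ms, scored against two different baselines:
athlete_history = [95, 102, 98, 100, 97]   # high personal baseline
desk_history    = [42, 45, 40, 44, 43]     # lower personal baseline

print(personal_zscore(athlete_history, 60))  # strongly negative: well below their norm
print(personal_zscore(desk_history, 60))     # strongly positive: well above their norm
```

A population model would hand both users the same interpretation; the personal baseline flips the sign.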
Layer 3: Influence
Influence is where the system changes something. Highest leverage, highest risk.
- Reflection: Mirroring patterns back to the user so they can notice what was previously invisible. Descriptive, not prescriptive.
- Suggestion: Nudges the user can accept or ignore. "Consider a walk." "You might delay that meeting." The user retains choice.
- Adaptation: Automation—where the system changes the environment on the user's behalf, within explicit permissions. Adaptive lighting, automatic focus modes, schedule adjustments.
This is also where governance matters most. Influence can support agency or quietly replace it. The line between coaching and manipulation is thinner than people like to admit.
Influence without explicit, user-stated goals is a red flag. If the system is optimizing for something, the user should know what it is.
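The three tiers and the no-goal red flag can be expressed as a small gate. This is a hypothetical sketch (the `Tier` enum and `influence` function are mine, not from any real system): influence of any tier requires a stated goal, and adaptation additionally requires explicit permission.

```python
from enum import Enum

class Tier(Enum):
    REFLECTION = 1   # descriptive mirror only
    SUGGESTION = 2   # nudges the user can accept or ignore
    ADAPTATION = 3   # acts on the user's behalf, within permissions

def influence(tier, user_goals, permissions):
    """Gate every act of influence behind stated goals and permissions."""
    if not user_goals:
        # No stated goal means no legible optimization target: refuse.
        raise PermissionError("red flag: influence without a user-stated goal")
    if tier is Tier.ADAPTATION and tier not in permissions:
        raise PermissionError("adaptation requires explicit permission")
    return f"{tier.name} allowed, optimizing for: {', '.join(user_goals)}"
```

The design choice worth noting is that the check runs on every call, so the optimization target stays visible instead of being baked in once and forgotten.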
Part 2: Application Domains
The same stack applies differently depending on context. Incentives, consent models, and failure modes shift by domain.
- Self: Personal energy, focus, emotional regulation, health. Today these tools are fragmented—sleep in one app, calendar in another, exercise in a third. The opportunity is integration. The risk is obsession, anxiety, and false certainty.
- Relationships: Teams, communities, partnerships. This domain is powerful and ethically radioactive. The only viable version is consent-based and user-controlled. If it becomes a tool for employers to score individuals, it will fail on trust, adoption, and regulation—regardless of technical merit.
- Things: Environments, experiences, consumption choices. Most recommendation systems optimize for what similar people clicked. A coherence-aware system would optimize for what actually works in your life. The product value could be enormous, but so is the data intimacy required.
How I'll Use This
When I write about a company, a paper, or an idea, I'll map it to this structure:
- Where does it sit in the stack?
- What is directly measured versus inferred?
- What influence does it propose, and who consents?
- What's the evidence quality?
- What could go wrong?
Quick Reference
- Measurement: What happened (signals, timestamps, observable data).
- Interpretation: What we think it means (with uncertainty).
- Influence: What we do about it (with consent).
- Values: What "better" means (chosen by the user, not assumed).
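The quick reference above can be tied together in one toy pass through the stack. Everything here is illustrative: the numbers, the fixed 0.6 confidence, and the function name are assumptions for the sketch, not a real pipeline.

```python
def run_loop(measured_hrv_ms, baseline_ms, user_goal):
    """One toy pass: Measurement -> Interpretation -> Influence."""
    # Measurement: what happened (an observable number, nothing more).
    measurement = {"signal": "hrv_rmssd", "value": measured_hrv_ms, "unit": "ms"}

    # Interpretation: what we think it means, with uncertainty attached.
    delta = measured_hrv_ms - baseline_ms
    hypothesis = {
        "claim": "below personal baseline" if delta < 0 else "at or above baseline",
        "confidence": 0.6,  # never 1.0 from a single signal
    }

    # Influence: what we do about it, only against a user-stated goal.
    if user_goal is None:
        return measurement, hypothesis, None  # reflection only, no nudge
    nudge = "Consider a walk." if delta < 0 else None
    return measurement, hypothesis, nudge
```

With no stated goal the loop stops at reflection; with one, it may offer a suggestion the user can ignore.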
Brendan Marshall