AI Interface Rules
The rules governing how AI reasoning systems are represented and made interactive within Lucid interfaces. The primary rule is epistemic transparency: the AI system's reasoning position must be visible.
Where these rules are violated — where AI output is presented without reasoning context — the result is epistemic opacity. Opacity is not a minor UX failure. It is a structural failure that makes genuine engagement with AI reasoning impossible.
AI interface design has, to date, been predominantly concerned with output quality — producing responses that are accurate, fluent, and helpful. The interface layer has been treated as a delivery mechanism: its job is to present the AI's output clearly. The reasoning that produced the output is generally not the interface's concern.
The primary AI interface rule is that reasoning must be visible — not as a log or debug output, but as a structured epistemic context the user can engage with.
This rule follows directly from the interaction model. If interfaces are thinking environments, then AI reasoning systems operating within those environments cannot be black boxes. The user's ability to engage epistemically — to agree, disagree, probe, or extend — depends on being able to see where the AI system is in its reasoning. A response without visible reasoning context is not a contribution to a thinking environment; it is a statement from an oracle.
The transparency rule does not require exhaustive reasoning traces. It requires that the user can see the epistemic character of the output: what mode of reasoning produced it, what positions were held, what remains unresolved. This is the minimum condition for genuine engagement.
The rules governing AI representation in Lucid interfaces, derived from the interaction model and the epistemic transparency requirement. Violation of any rule constitutes a degree of epistemic opacity.
The AI system's current epistemic position — what it is holding, exploring, or committing to — must be represented at the interface level. The user should never be in the position of receiving output without being able to see the reasoning context that produced it. This is not optional metadata; it is the primary interface requirement for any AI reasoning system operating within Lucid principles.
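As a concrete illustration, the epistemic position can travel as a first-class field on every response rather than as optional metadata. The following is a minimal TypeScript sketch under assumed names; EpistemicPosition, PositionStatus, and AIResponse are illustrative, not part of any Lucid specification:

```typescript
// Hypothetical sketch only: these names are illustrative, not a Lucid schema.

type PositionStatus = 'holding' | 'exploring' | 'committed';

interface EpistemicPosition {
  claim: string;           // the position being held, explored, or committed to
  status: PositionStatus;  // what the system is currently doing with that position
  openQuestions: string[]; // what would need resolving before commitment
}

interface AIResponse {
  output: string;                 // the generated content
  positions: EpistemicPosition[]; // the reasoning context; never omitted
}
```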
Whether the AI system is operating in divergent mode (exploring, holding multiple positions) or convergent mode (integrating, committing to a synthesis) must be made visible to the user. These modes have different epistemic characters and produce structurally different outputs. Treating them as interchangeable collapses a distinction that is fundamental to the user's ability to engage with the output epistemically.
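One way to keep this distinction structural is to make the mode part of the response type itself, so the interface cannot render output without handling the mode that produced it. A hedged sketch, reusing EpistemicPosition from the previous example; all names remain assumptions:

```typescript
// Hypothetical sketch, reusing EpistemicPosition from the previous example.
// Mode is a discriminated union, so divergent and convergent output are
// structurally different types and cannot be rendered interchangeably.

type ReasoningOutput =
  | {
      mode: 'divergent';
      positionsHeld: EpistemicPosition[]; // multiple positions, none committed
    }
  | {
      mode: 'convergent';
      synthesis: string;                      // the committed integration
      positionsReleased: EpistemicPosition[]; // what was let go to reach it
    };

// The interface must branch on mode; there is no shared rendering path.
function describe(output: ReasoningOutput): string {
  switch (output.mode) {
    case 'divergent':
      return `exploring ${output.positionsHeld.length} positions in parallel`;
    case 'convergent':
      return `committed to a synthesis, releasing ${output.positionsReleased.length} positions`;
  }
}
```

The discriminated union forces the rendering code to treat the modes on their own terms: divergent output has no synthesis to display, and convergent output cannot be presented as a set of parallel positions.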
Where the AI system holds uncertainty — about positions, relations, or conclusions — this uncertainty must be represented in structure, not just hedged in prose. Hedged language ("it seems", "possibly") is epistemic noise; structural representation of uncertainty (position not yet resolved, relation ambiguous, synthesis incomplete) gives the user something to reason about.
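The contrast between hedged prose and structural uncertainty can be made concrete: instead of emitting "this possibly relates to X", the system emits a record whose fields the interface can render and the user can interrogate. A sketch, again under assumed names:

```typescript
// Hypothetical sketch: uncertainty carried as data the interface can render
// and the user can interrogate, rather than as hedge words in prose.

interface UnresolvedPosition {
  kind: 'position-unresolved';
  claim: string;
  blockedBy: string[];         // what the system would need in order to resolve it
}

interface AmbiguousRelation {
  kind: 'relation-ambiguous';
  from: string;
  to: string;
  candidateReadings: string[]; // the interpretations still being held open
}

interface IncompleteSynthesis {
  kind: 'synthesis-incomplete';
  integrated: string[];        // positions already folded into the synthesis
  outstanding: string[];       // positions not yet reconciled
}

type StructuredUncertainty =
  | UnresolvedPosition
  | AmbiguousRelation
  | IncompleteSynthesis;
```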
AI-generated output is not neutral content. It is the product of a specific reasoning process operating in a specific mode. The interface must contextualise output within the reasoning that produced it — making the inference visible, not just the conclusion. An AI interface that presents conclusions without reasoning context is structurally identical to an oracle: its output can be received but not reasoned with.
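Putting the previous sketches together, a response payload can pair every conclusion with the reasoning context that produced it, so the interface surfaces the inference and not only the result. The shape below is a sketch under the same assumed names, not a prescribed Lucid schema:

```typescript
// Hypothetical sketch combining the earlier types: the conclusion never
// travels without the reasoning that produced it.

interface ContextualisedOutput {
  conclusion: string;                   // what an opaque interface would show alone
  reasoning: ReasoningOutput;           // divergent or convergent, as above
  uncertainty: StructuredUncertainty[]; // what remains open, as structure
  inferenceSteps: string[];             // the visible path from positions to conclusion
}
```

An interface consuming a shape like this can still choose how much reasoning to foreground, but it can no longer present the conclusion as if it arrived from nowhere.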
The rules above are not transparency requirements for their own sake — they exist to preserve the user's capacity for genuine epistemic engagement. An interface that makes AI reasoning visible is one in which the user can agree, disagree, probe, extend, or redirect. These are forms of epistemic agency. Interface rules that reduce opacity are rules that restore agency.
Epistemic opacity is the condition in which AI output is presented without reasoning context. The user receives a conclusion, a recommendation, or a generated artifact — but has no access to the reasoning that produced it, the mode from which it emerged, or the positions that were held and released in the process.
Epistemic opacity is not merely a UX problem — it is a structural failure. In an opaque interface, the user cannot engage epistemically with AI output. They can accept it or reject it, but they cannot reason with it: they cannot probe its assumptions, extend its positions, identify where it has collapsed divergent complexity into premature synthesis, or locate where it remains genuinely uncertain. The opacity forecloses epistemic engagement at the structural level, before any particular interaction begins.
Epistemic opacity reproduces oracle dynamics in AI interfaces: the system speaks, the user receives. This is the structural opposite of a thinking environment.
The opacity failure mode can occur at any level of interface sophistication. A technically complex AI system presented without epistemic transparency is no more engageable than a simple one. Capability does not generate transparency; only deliberate interface architecture does. The AI interface rules exist to prevent epistemic opacity from being the default state of AI-integrated interfaces.
Three reasoning modes, each with a distinct interface character. Mode signals give the user an immediate epistemic context for AI output — they see not just what the system says, but what kind of reasoning produced it.