AI Interface Rules
Epistemic Transparency · Reasoning Visibility · Mode Signalling

The rules governing how AI reasoning systems are represented and made interactive within Lucid interfaces. The primary rule is epistemic transparency: the AI system's reasoning position must be visible.

Where these rules are violated — where AI output is presented without reasoning context — the result is epistemic opacity. Opacity is not a minor UX failure. It is a structural failure that makes genuine engagement with AI reasoning impossible.

Position Within the Research Stack
Foundations · Philosophical ground
Theory · Cognitive architecture
Media Grammar · Structural translation
Interaction · Interface layer
Systems Theory · Computational infrastructure
Transparency as the Primary Rule

AI interface design has, to date, been predominantly concerned with output quality — producing responses that are accurate, fluent, and helpful. The interface layer has been treated as a delivery mechanism: its job is to present the AI's output clearly. The reasoning that produced the output is generally not the interface's concern.

The primary AI interface rule is that reasoning must be visible — not as a log or debug output, but as a structured epistemic context the user can engage with.

This rule follows directly from the interaction model. If interfaces are thinking environments, then AI reasoning systems operating within those environments cannot be black boxes. The user's ability to engage epistemically — to agree, disagree, probe, or extend — depends on being able to see where the AI system is in its reasoning. A response without visible reasoning context is not a contribution to a thinking environment; it is a statement from an oracle.

The transparency rule does not require exhaustive reasoning traces. It requires that the user can see the epistemic character of the output: what mode of reasoning produced it, what positions were held, what remains unresolved. This is the minimum condition for genuine engagement.
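
What counts as "structured epistemic context" can be made concrete with a minimal sketch. The sketch below is illustrative only; the type names (ReasoningMode, EpistemicContext, AIOutput) are hypothetical, not part of any Lucid API:

```typescript
// Illustrative sketch, not a Lucid API. The minimum context the
// transparency rule asks for: mode, positions, and what remains open.

type ReasoningMode = "divergent" | "convergent" | "navigational";

interface Position {
  claim: string;
  status: "held" | "released" | "committed";
}

interface EpistemicContext {
  mode: ReasoningMode;   // what kind of reasoning produced the output
  positions: Position[]; // what was held, released, or committed to
  unresolved: string[];  // what the system has not settled
}

interface AIOutput {
  content: string;
  context: EpistemicContext; // required, not optional metadata
}
```

The design point is carried by the type: an AIOutput without context cannot be constructed, so opacity is ruled out structurally rather than by convention.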

The Rules

Five rules governing AI representation in Lucid interfaces. They are derived from the interaction model and the epistemic transparency requirement. Violation of any rule constitutes a degree of epistemic opacity.

01 · Reasoning position must be visible

The AI system's current epistemic position — what it is holding, exploring, or committing to — must be represented at the interface level. The user should never be in the position of receiving output without being able to see the reasoning context that produced it. This is not optional metadata; it is the primary interface requirement for any AI reasoning system operating within Lucid principles.

02 · Mode must be signalled explicitly

Whether the AI system is operating in divergent mode (exploring, holding multiple positions) or convergent mode (integrating, committing to a synthesis) must be made visible to the user. These modes have different epistemic characters and produce structurally different outputs. Treating them as interchangeable collapses a distinction that is fundamental to the user's ability to engage with the output epistemically.
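
As a sketch of what explicit signalling could look like (the names are illustrative, not a Lucid API), the mode becomes a value the interface renders, not a quality the user infers from tone:

```typescript
// Illustrative only. The mode is an explicit, user-visible value.

type ReasoningMode = "divergent" | "convergent";

const MODE_SIGNALS: Record<ReasoningMode, string> = {
  divergent: "Exploring: multiple positions held, no synthesis yet",
  convergent: "Integrating: positions reducing toward a commitment",
};

function modeSignal(mode: ReasoningMode): string {
  return MODE_SIGNALS[mode];
}
```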

03 · Uncertainty must be structurally represented

Where the AI system holds uncertainty — about positions, relations, or conclusions — this uncertainty must be represented in structure, not just hedged in prose. Hedged language ("it seems", "possibly") is epistemic noise; structural representation of uncertainty (position not yet resolved, relation ambiguous, synthesis incomplete) gives the user something to reason about.
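
The three structural forms named here map naturally onto a tagged union; a sketch, with hypothetical names:

```typescript
// Illustrative sketch: uncertainty as structure the interface can render
// distinctly, not as hedging words buried in prose.

type Uncertainty =
  | { kind: "position-unresolved"; position: string }
  | { kind: "relation-ambiguous"; from: string; to: string }
  | { kind: "synthesis-incomplete"; missing: string[] };

function describe(u: Uncertainty): string {
  switch (u.kind) {
    case "position-unresolved":
      return `Position not yet resolved: ${u.position}`;
    case "relation-ambiguous":
      return `Relation ambiguous: ${u.from} <-> ${u.to}`;
    case "synthesis-incomplete":
      return `Synthesis incomplete; missing: ${u.missing.join(", ")}`;
  }
}
```

Each variant can receive its own visual treatment, which is exactly what hedged prose cannot get: "possibly" renders the same as any other word.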

04 · Output must be contextualised within reasoning

AI-generated output is not neutral content. It is the product of a specific reasoning process operating in a specific mode. The interface must contextualise output within the reasoning that produced it — making the inference visible, not just the conclusion. An AI interface that presents conclusions without reasoning context is structurally identical to an oracle: its output can be received but not reasoned with.
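
One way to hold the interface to this, sketched with hypothetical names: ship the conclusion and the inference as one value, so a renderer cannot show the first without having the second.

```typescript
// Illustrative only: conclusion and inference travel together.

interface InferenceStep {
  from: string[]; // premises or prior positions drawn on
  move: string;   // the inferential move made
  to: string;     // what the move yielded
}

interface ContextualisedOutput {
  conclusion: string;
  inference: InferenceStep[]; // the visible chain, not a hidden trace
}
```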

05 · User epistemic agency must be preserved

The rules above are not transparency requirements for their own sake — they exist to preserve the user's capacity for genuine epistemic engagement. An interface that makes AI reasoning visible is one in which the user can agree, disagree, probe, extend, or redirect. These are forms of epistemic agency. Interface rules that reduce opacity are rules that restore agency.
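
Sketched as interface affordances (hypothetical names again), these forms of agency become first-class actions rather than free-text follow-ups:

```typescript
// Illustrative only: the forms of epistemic agency named above, modelled
// as first-class interface actions.

type EngagementAction =
  | { kind: "agree"; target: string }
  | { kind: "disagree"; target: string; grounds: string }
  | { kind: "probe"; target: string; question: string }
  | { kind: "extend"; target: string; direction: string }
  | { kind: "redirect"; toward: string };

// An opaque interface can only offer accept/reject; a transparent one can
// route each action back into the reasoning it targets.
function describe(action: EngagementAction): string {
  return action.kind === "redirect"
    ? `redirect toward ${action.toward}`
    : `${action.kind}: ${action.target}`;
}
```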

Epistemic Opacity — The Critical Failure Mode

Epistemic opacity is the condition in which AI output is presented without reasoning context. The user receives a conclusion, a recommendation, or a generated artifact — but has no access to the reasoning that produced it, the mode from which it emerged, or the positions that were held and released in the process.

Epistemic opacity is not merely a UX problem — it is a structural failure. In an opaque interface, the user cannot engage epistemically with AI output. They can accept it or reject it, but they cannot reason with it: they cannot probe its assumptions, extend its positions, identify where it has collapsed divergent complexity into premature synthesis, or locate where it remains genuinely uncertain. The opacity forecloses epistemic engagement at the structural level, before any particular interaction begins.

Epistemic opacity reproduces oracle dynamics in AI interfaces: the system speaks, the user receives. This is the structural opposite of a thinking environment.

The opacity failure mode can occur at any level of interface sophistication. A technically complex AI system presented without epistemic transparency is no more engageable than a simple one. Capability does not generate transparency; only deliberate interface architecture does. The AI interface rules exist to prevent epistemic opacity from being the default state of AI-integrated interfaces.

Mode Signalling

Three reasoning modes, each with a distinct interface character. Mode signals give the user an immediate epistemic context for AI output — they see not just what the system says, but what kind of reasoning produced it.

Divergent Mode
Exploring, field-opening, multi-positional

Multiple positions held simultaneously; no convergence signal; field structures dominant; tension made visible without resolution pressure

Failure mode: premature narrowing — the interface shows convergence signals while the reasoning is still in divergent mode.

Convergent Mode
Integrating, synthesising, commitment-building

Synthesis in progress; positions reducing; structural integration visible; comparative structures prominent

Failure mode: forced closure — the interface signals resolution before genuine synthesis has occurred.

Navigational Mode
Traversing the field, moving between positions

Position within the epistemic field is explicit; trajectory is visible; sequence structures prominent; neither opening nor closing the field

Failure mode: positional opacity — the user cannot tell where the AI system is in the reasoning arc.
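
All three failure modes share one structural shape: a mismatch between what the reasoning is actually doing and what the interface signals. A sketch of that check, under illustrative names:

```typescript
// Illustrative sketch: each failure mode is a detectable mismatch between
// the reasoning's actual state and what the interface presents.

type Mode = "divergent" | "convergent" | "navigational";

interface InterfaceState {
  reasoningMode: Mode;        // what the reasoning is actually doing
  signalledMode: Mode | null; // what the interface shows; null = no signal
  convergenceShown: boolean;  // interface shows progress toward synthesis
  resolutionShown: boolean;   // interface frames the output as settled
  synthesisComplete: boolean; // genuine synthesis has actually occurred
}

function failureMode(s: InterfaceState): string | null {
  if (s.reasoningMode === "divergent" && s.convergenceShown)
    return "premature narrowing";
  if (s.reasoningMode === "convergent" && s.resolutionShown && !s.synthesisComplete)
    return "forced closure";
  if (s.signalledMode === null || s.signalledMode !== s.reasoningMode)
    return "positional opacity";
  return null;
}
```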

Structural Connections
Interaction Model · The AI interface rules are derived from the interaction model; epistemic transparency is the interface expression of the thinking environment principle.
Divergent-Convergent Reasoning · Mode signalling maps directly onto DCR; divergent and convergent modes correspond to the two phases of the reasoning cycle.
Cross-Media Grammar · The grammar rules for representing epistemic states across media provide the underlying framework for mode-sensitive interface signals.