Changelog

All notable changes to AINativeUI. The format follows Keep a Changelog; the project adheres to Semantic Versioning.

[1.0.0] — 2026-05-07

The first stable release. AINativeUI is now the iOS SDK that lets any AI mobile app's assistant escape the chat box — manifesting fully wired, interactive UI inside its own host. See [VISION.md](./VISION.md) for the doctrine, [README.md](./README.md) for the quick-start, and the per-section notes below for what landed in this release.

Doctrine

The "AI Canvas" framing is now the load-bearing positioning. Three teeth, in priority order:

1. Wiring — every interactive element works. No orphan controls.
2. Conversation-led range — the AI proposes options before manifesting; companion-style, not autonomous.
3. Native-host fit — AICanvasView drops into any iOS chat surface; theme inherits from the host; events flow back to the host's session.

Empirically validated: Anthropic Opus 100% / Sonnet 100% / Haiku 97.9% on the 10-prompt fidelity corpus. The 2.1pp Haiku gap is recipe-choice variance on explain_set_theory (Haiku picks checklist_with_progress over narrative) — reasonable model behavior, not a framework failure. See [recordings/](./recordings) for the pinned baselines.

Added — Phase 1: Output fidelity substrate

- Auto-injected button actions. `JSONDecoder.decode(UINode.self, …)` synthesizes `ActionRef(.emit, payload: { nodeID })` for any control(.button) that arrives without a declared action. Orphan taps are no longer possible at the framework level.
- Layout regularizer. Pure post-order tree walk that normalizes the AI's emitted tree before publishing. Three built-in rules:
  - hstack-primary-spacer — injects a spacer before a trailing primary button
  - vstack-density-inheritance — sets density: regular on vstacks of mixed text roles without explicit density
  - display-text-divider — inserts a neutral divider between adjacent display-role text siblings

  Adopters disable rules by name via LayoutRegularizerPolicy.
- StyleBrief presets. .companion (default), .assistant, .editorial, .none — the brief is now non-optional with .companion as the default. The presets cross-reference real recipes/widgets only; a corpus validation test catches phantom references at build time.
- Render diagnostics. RenderDiagnostics.scan(_:) flags four classes of failure: controlsWithoutAction, cardsWithoutInteractiveChild, orphanGestures, displayTextTooDeep. Policy.diagnostics (.silent/.warnings/.errors) controls surfacing; defaults to .warnings under DEBUG, .silent under RELEASE.
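The regularizer's shape can be sketched as a post-order walk over a minimal stand-in node type. Everything below (the `Node` enum, `regularize`, the rule logic) is illustrative, not the SDK's actual types; it shows only the hstack-primary-spacer rule.

```swift
// Minimal stand-in node type; the real SDK's UINode is richer.
indirect enum Node: Equatable {
    case hstack([Node])
    case vstack([Node])
    case text(String)
    case button(title: String, role: String)
    case spacer
}

// Post-order walk: children are regularized first, then the
// hstack-primary-spacer rule fires on the current node.
func regularize(_ node: Node) -> Node {
    switch node {
    case .hstack(let children):
        var kids = children.map(regularize)
        // Rule: inject a spacer before a trailing primary button,
        // unless one is already there.
        if case .button(_, let role)? = kids.last, role == "primary",
           kids.count >= 2, kids[kids.count - 2] != .spacer {
            kids.insert(.spacer, at: kids.count - 1)
        }
        return .hstack(kids)
    case .vstack(let children):
        return .vstack(children.map(regularize))
    default:
        return node
    }
}
```

Under this sketch, a header row of `[text, primary button]` comes back as `[text, spacer, primary button]`, and an already-regular row passes through unchanged.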

Added — Phase 2: Suggestion choreography

- suggestion_cluster recipe. Renders a header + chips + optional dismiss action. The chip-authoring contract embeds the prerendered manifest UINode directly in the chip's tap payload.
- Local-apply on chip tap. When a suggestion_cluster chip fires its tap gesture, the renderer reads the manifest from the tap payload and replaces the cluster locally via apply_patch. This skips the model round-trip; the agent receives a suggestion_picked event after the visual transition completes. New rule: when responding to suggestion_picked, the agent should apply_patch to enrich the existing manifest rather than render_ui to replace it.
- System prompt: suggestion-first doctrine. A new section teaches the AI to propose 3–5 options for open-ended user intent ("I'm bored", "help me focus") before manifesting any single experience.
- Three Companion reference scenarios in [Sources/AINativeUIScenarios](./Sources/AINativeUIScenarios): boredomCrossword, planningTomorrow, expensesChart. Atlas features all three on its welcome screen.
- Crossword widget. Self-driving puzzle widget with full input handling, focus auto-advance, direction toggling, and word/puzzle completion detection. Configuration: grid + clues. Events: letterEntered, wordCompleted, puzzleCompleted.
- Primary-action rule. Every non-trivial render must include at least one control(button, role: .primary) to anchor the user's eye. Tabs, chips, and code_editor Run buttons are secondary affordances.
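The local-apply flow can be sketched with stand-in types: the chip carries its prerendered manifest, the tap swaps it in with no model round-trip, and the suggestion_picked event fires afterwards. `Chip` and `Canvas` below are illustrative, not the SDK's real types, and the manifest is stringly-typed for brevity.

```swift
// Illustrative stand-ins; not the SDK's real types.
struct Chip {
    let label: String
    let manifest: String   // prerendered subtree, embedded in the tap payload
}

struct Canvas {
    var current: String        // what's on screen
    var emitted: [String] = [] // events sent back to the agent

    // Local-apply: swap in the chip's prerendered manifest without a
    // model round-trip, then notify the agent after the swap.
    mutating func tap(_ chip: Chip) {
        current = chip.manifest                            // apply_patch stand-in
        emitted.append("suggestion_picked:\(chip.label)")  // fires post-transition
    }
}
```

For example, tapping a "crossword" chip replaces the cluster with the crossword manifest immediately, and the agent only hears about it via the emitted event.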

Added — Phase 3: Drop-in adopter SDK

- AICanvasView. Single SwiftUI view that wraps AIRenderedView, threads session.eventSink and session.controlCommitSink through the environment, and exposes an optional onCanvasEvent callback for host-side event observation. Three placement modes: .inline, .fixed(height:), .adaptive.
- Policy.canvas(...) factories. One-line setup against the three first-class providers:

  ```swift
  Session(policy: .canvas(anthropic: apiKey))
  Session(policy: .canvas(openAI: apiKey))
  Session(policy: .canvasOnDevice())
  ```

  Each bundles wedge-aligned defaults: progressive streaming, the .companion brief, maxAgentIterations: 12, layout regularizer, diagnostics.
- AINativeUIMCPApps library. Inbound importer for SEP-1865 MCP App resources. Returns Rendering.native(UINode) for the trivial HTML subset (h1–h3, p, button, img, hr) or Rendering.webview(WebviewContent) for everything else. Ships an MCPAppWebview SwiftUI host with sandboxed WKWebView defaults for the webview path.
- AINativeUIScenarios library. Curated demo scenarios extracted from Atlas into a shared library — Atlas, the CLI, and adopter apps all reference the same source.
- DocC adopter walkthroughs. Three step-by-step tutorials in Sources/AINativeUICore/Documentation.docc/: drop into an existing chat, respond to canvas events, register domain widgets.
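The native-vs-webview split in the MCP App importer can be illustrated as a tag-subset check. The `Rendering` enum and `classify` function below are a sketch under that assumption, not the library's actual API or converter logic.

```swift
enum Rendering: Equatable {
    case native   // trivial HTML subset → render via UINode
    case webview  // everything else → sandboxed WKWebView host
}

// The tags the changelog says the converter handles natively.
let nativeTags: Set<String> = ["h1", "h2", "h3", "p", "button", "img", "hr"]

// If every tag in the resource falls in the subset, render natively;
// otherwise fall through to the webview path.
func classify(tags: [String]) -> Rendering {
    tags.allSatisfy { nativeTags.contains($0.lowercased()) } ? .native : .webview
}
```

So a resource built from headings, paragraphs, and buttons stays native, while anything containing a table or script drops to the webview host.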

Added — Phase 4.1: Fidelity eval harness

- AINativeUIEval library. Pure scoring functions for the five-axis AI Canvas fidelity rubric (wired 40% / coherent 25% / on-style 15% / on-shape 15% / interactivity 5%). Composable with recorded responses; runs without network in CI.
- 10-prompt corpus in [Tests/Eval/Fidelity/corpus.json](./Tests/Eval/Fidelity/corpus.json), spanning all five categories (open-ended, specific request, multi-step, mid-session reactive, long-form).
- CLI subcommands:
  - ainativeui eval --corpus --responses — score a set of pre-rendered AI responses against the corpus.
  - ainativeui eval-record --provider --output — drive the corpus through a live ModelProvider and capture each rendered tree to JSON. Pair with eval to score.
  - ainativeui record-scenario --output — capture a scripted scenario as a replayable SessionRecording.
- Three pinned model recordings in [recordings/](./recordings): Opus, Sonnet, Haiku, all running this release's prompt set.
- Six pinned scenario recordings in [Examples/Companion/Recordings/](./Examples/Companion/Recordings): the three Companion scenarios plus the three classic AINativeUI demos (tic-tac-toe, Python tutor, sales dashboard). Replayable without an API key.
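The rubric's weighted composition is simple enough to state as code. `FidelityScores` and `overall` below are illustrative names, not the eval library's API; only the weights come from the changelog.

```swift
// Per-axis scores are assumed to lie in [0, 1].
struct FidelityScores {
    var wired, coherent, onStyle, onShape, interactivity: Double
}

// Weighted sum using the published rubric weights:
// wired 40% / coherent 25% / on-style 15% / on-shape 15% / interactivity 5%.
func overall(_ s: FidelityScores) -> Double {
    0.40 * s.wired +
    0.25 * s.coherent +
    0.15 * s.onStyle +
    0.15 * s.onShape +
    0.05 * s.interactivity
}
```

A response that is perfect on every axis except half-wired lands at 0.80, which shows why wiring dominates the rubric.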

Added — Atlas polish

- Welcome screen rewritten around the AI Canvas pitch.
- Scenario cards drive the canonical demos (Companion + Classic groups), replacing the previous hardcoded prompt suggestions that bypassed Scenario.all.
- "Pick another demo" reset button on the canvas toolbar.
- Companion scenarios now actually run through the scripted DemoProvider; previously Scenario.all was dead code.

Added — Performance baselines

Five regression-detector tests in PerformanceBenchmarks covering the regularizer (10/100/300-node trees) and the diagnostics scan, plus a sub-linear scaling test (30→300-node time ratio < 30×).

Changed

- Policy.styleBrief is now non-optional (StyleBrief, default .companion). Adopters who want no brief pass .none. Migration: source-compatible for adopters who omitted the parameter; passing an explicit nil is now a compile error.
- Policy.canvas(...) factories use maxAgentIterations: 12 (previously the default 8) to give Sonnet-class models room to converge on dashboard-density renders. Stand-alone Policy(modelProvider:) still defaults to 8.
- The chip, badge, and avatar recipe doc strings in the system prompt now explicitly say "Slots: NONE" to head off the validation-retry loop where the model assumed slots existed.
- README rewritten around the AI Canvas pitch. The previous quick-start moved to a "Lower-level integration" section.
- Atlas's Scenario type extracted to the new AINativeUIScenarios library so the CLI can drive recording without depending on Atlas.
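The styleBrief migration behavior can be sketched with stand-in declarations: a non-optional property with a default keeps omit-the-parameter call sites compiling, while explicit nil stops compiling. `StyleBrief` and `Policy` below are illustrative, not the SDK's real declarations.

```swift
// Illustrative stand-ins for the migration; not the SDK's real declarations.
enum StyleBrief: Equatable { case companion, assistant, editorial, none }

struct Policy {
    // Non-optional with a default: callers that omitted the parameter
    // keep compiling and get .companion; callers that passed nil no longer compile.
    var styleBrief: StyleBrief = .companion
}

let implicit = Policy()                   // source-compatible, gets .companion
let optedOut = Policy(styleBrief: .none)  // the "no brief" spelling
// let broken = Policy(styleBrief: nil)   // now a compile error
```

Note that `.none` here is an ordinary enum case, not `Optional.none`, which is what makes the no-brief spelling unambiguous once the property is non-optional.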

Fixed

- Validator no longer emits orphan tap events for action-less buttons; auto-injection guarantees every button carries a payload the agent can interpret.
- The previously dead-code Scenario.all array now reaches the user via Atlas's welcome screen.
- The code_editor vs code_block choice for tutorials: prompt rules now explicitly prefer code_editor so users can tinker with example code.

Deferred

- Streaming polish for the renderer — partial-JSON parsing, speculative skeletons, ghost frames. Tier 1 of the post-Canvas roadmap.
- @AIWidget macro for cutting widget boilerplate from ~50 lines to ~5. Tier 1 of the post-Canvas roadmap.
- Marketplace — registry + CLI commands shipped, but the public registry repo / showcase site (marketplace.ainativeui.org) is v1.1 work.
- Hosted gateway — server-side prompt caching, optimised model routing, telemetry. v2.

Known limitations

- The MCPApps HTMLConverter supports a deliberately narrow subset (h1–h3, p, button, img, hr). Real-world MCP App resources mostly fall through to Rendering.webview(...).
- The eval onStyle rubric is a heuristic substring match against StyleBrief.prefer; LLM-judged scoring is a v1.1 enhancement.
- The fidelity corpus is 10 prompts; CANVAS-PLAN's 50-prompt target is v1.1.
- Streaming is wired for Anthropic and OpenAI (the providers that support it), but the agent loop's progressive-application path is v1.1.

Acknowledgements

- Anthropic's cache_control makes per-turn cost tractable. Without it, this design wouldn't be economical.
- Apple's @Observable macro (iOS 17+) underpins the renderer's identity preservation.
- Swift 6's strict concurrency caught dozens of latent races in early prototypes.