ainativeui

Tutorial · 03

Registering domain widgets

Add app-specific widgets the AI can render alongside the 14 built-ins.

Overview

The 14 built-in widgets — counter, timer, chart, calendar, code editor, crossword, and friends — cover most companion-style flows. But every domain has its own shape: a fitness app needs a workout-set tracker, a finance app needs a portfolio sparkline, a meditation app needs a breath-pacer. AINativeUI is built for this: any host can ship a custom widget the AI then knows how to render and react to.

This walkthrough adds a domain widget end-to-end: define the widget, register it, surface it to the model via prompt docs, and verify the AI emits it in response to a fitting prompt.

Prerequisites

  • A working AICanvasView integration (see `01-DropIntoAnExistingChat`)
  • A clear idea of what your widget configures, what state it carries, and what events it fires

Step 1 — Author the widget

Conform to AIWidget. The minimum viable widget is ~50 lines:

import SwiftUI
import AINativeUICore
import AINativeUIRender
import AINativeUIWidgets

public struct WorkoutSetWidget: AIWidget {
    public static let typeName = "workout_set"
    public static let description = "Track sets for a single exercise — reps, weight, completed."
    public static let configurationDocumentation = """
    workout_set widget:
      configuration: {
        exercise: string,        // "Bench press"
        targetReps: int,         // 8
        weightLbs: int           // 135
      }
      state: {
        repsCompleted: int,
        completed: bool
      }
      events:
        repCompleted { rep },
        setCompleted
    """
    public static let defaultAccessibility = Accessibility(
        label: "Workout set tracker"
    )

    public struct Configuration: Codable, Sendable {
        public let exercise: String
        public let targetReps: Int
        public let weightLbs: Int
    }

    public struct State: Codable, Sendable {
        public var repsCompleted: Int
        public var completed: Bool
    }

    let configuration: Configuration
    let state: State?
    let session: WidgetSession

    public init(configuration: Configuration, state: State?, session: WidgetSession) {
        self.configuration = configuration
        self.state = state
        self.session = session
    }

    public var body: some View {
        VStack {
            Text(configuration.exercise).font(.title)
            Text("\(configuration.weightLbs) lb × \(configuration.targetReps) reps").font(.subheadline)
            // ...your UI for tapping reps, marking complete, etc.
        }
    }
}

The four metadata fields (typeName, description, configurationDocumentation, defaultAccessibility) are what the AI sees in the system prompt — make them precise. The AI can only call your widget by name and only configure it with the fields you advertise.

Two structural rules:

  • Configuration and State MUST be Codable & Sendable. The DSL carries them as JSON.
  • Mutate state via session.updateState(newState). This routes through setWidgetState so the store version increments and the agent's next snapshot reflects the change.

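To make the second rule concrete, here is a minimal sketch of a rep button inside WorkoutSetWidget's body. The State and Configuration fields come from the struct above; only the Button UI itself is illustrative:

```swift
// Sketch for WorkoutSetWidget.body: tapping records one more rep.
// Only session.updateState is the required path; the UI is illustrative.
Button("Rep done") {
    var next = state ?? State(repsCompleted: 0, completed: false)
    next.repsCompleted += 1
    next.completed = next.repsCompleted >= configuration.targetReps
    session.updateState(next)  // routes through setWidgetState, bumping the store version
}
```

Because the mutation goes through the session rather than local @State, the agent's next snapshot sees the updated reps without any extra wiring.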
Macro upgrade. The @AIWidget macro generates the static
metadata from your doc comments — cuts boilerplate from ~50 lines to
~5. Available in AINativeUIWidgetMacros.
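As a rough sketch of what the macro form might look like (the attribute name @AIWidget comes from the docs above, but the exact doc-comment conventions shown here are assumptions — check AINativeUIWidgetMacros for the real shape):

```swift
// Hypothetical macro form; doc-comment conventions are assumptions.
@AIWidget
public struct WorkoutSetWidget: AIWidget {
    /// Track sets for a single exercise — reps, weight, completed.
    public struct Configuration: Codable, Sendable {
        /// "Bench press"
        public let exercise: String
        /// 8
        public let targetReps: Int
        /// 135
        public let weightLbs: Int
    }
    // ...State, init, and body as before. The macro derives typeName,
    // description, and configurationDocumentation from these declarations.
}
```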

Step 2 — Register it

import AINativeUIWidgets

@main
struct MyAIApp: App {
    init() {
        StandardWidgetRegistry.shared.register(WorkoutSetWidget.self)
    }
    // ...
}

That's the renderer side. The widget now resolves when the AI emits a UINode.widget with widgetType: "workout_set".
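For intuition, the emitted node might look like this. The UINode.widget payload shape shown here is an assumption for illustration; only the widgetType string and the configuration field names are fixed by your widget's metadata:

```swift
// Illustrative only: the real UINode.widget case signature may differ.
let node = UINode.widget(
    widgetType: "workout_set",
    configuration: [
        "exercise": "Bench press",
        "targetReps": 8,
        "weightLbs": 135
    ]
)
```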

Step 3 — Surface to the model

The AI doesn't see StandardWidgetRegistry automatically — the system prompt has a fixed catalog. To add your widget to the prompt, pass the registry's documentation list to Policy:

let policy = Policy.canvas(anthropic: apiKey)
    .with { p in p.widgetDocs = StandardWidgetRegistry.shared.documentation }

Or with the full Policy(modelProvider:...) form:

let policy = Policy(
    modelProvider: AnthropicProvider(apiKey: apiKey),
    widgetDocs: StandardWidgetRegistry.shared.documentation,
    styleBrief: .companion
)

After this, the model's system prompt includes your widget's typeName, description, and configuration shape. It now knows how to render workout_set from a fitting user prompt.

Step 4 — Verify the AI emits it

Run the app, type "I'm doing bench press tonight, 4 sets of 8 at 135", and the assistant should respond with a tree containing your widget.

If it doesn't emit your widget, the most common causes:

  • The description is vague — make it concrete about what the widget is for, not just what it shows.
  • The configuration documentation lacks structure — show field names and types in JSON-schema-ish prose.
  • A more general widget fits better — the AI may pick checklist over your custom widget if the prompt is generic.

For nudge-level control, set the .companion brief's prefer array to include "workout_set widget for tracking individual exercise sets".
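A sketch of that nudge, assuming the brief exposes a mutable prefer array and that the brief type is named StyleBrief (both assumptions; the doc only names the .companion brief and the styleBrief policy parameter):

```swift
// Assumed type name StyleBrief and mutable `prefer` array.
var brief = StyleBrief.companion
brief.prefer.append("workout_set widget for tracking individual exercise sets")

let policy = Policy(
    modelProvider: AnthropicProvider(apiKey: apiKey),
    widgetDocs: StandardWidgetRegistry.shared.documentation,
    styleBrief: brief
)
```

A prefer entry is a soft hint, not a guarantee — the model can still pick a built-in if it fits the prompt better.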

Allowlists for governance

If you want to block a built-in widget — maybe map doesn't fit your app's domain — set the allowlist on the policy:

var policy = Policy.canvas(anthropic: apiKey)
policy.allowedWidgets = .denied(["map", "qr_code"])

Allowlists filter the prompt content (so the model doesn't see disallowed entries) AND validate at render time (so a misbehaving model can't bypass the filter).

Where to go next

  • `01-DropIntoAnExistingChat` — for the canvas-side wiring
  • `02-RespondingToCanvasEvents` — your widget's events fire through the same callback
  • The marketplace docs in README.md for shipping your widget as a reusable Swift package other AINativeUI apps can install