Drop into an existing chat
Add an AI Canvas to a chat surface you already shipped — in under one hour.
Overview
This walkthrough takes you from "I have an iOS chat app talking to Anthropic
or OpenAI" to "my assistant can manifest interactive UI inside my chat" with
a single SwiftUI view and a one-line Policy factory.
The host stays in charge. Your existing message stream, input field, brand,
typography, and model choice are unchanged. AINativeUI inserts a canvas into
the layout where you decide — above the messages, between them as inline
bubbles, in a bottom sheet — and runs alongside.
Estimated time: 30–60 minutes for a working integration.
What you'll build
A two-region chat screen:
- Top: AICanvasView, the AI's interactive surface. Renders empty until the user sends a message.
- Bottom: your existing chat input. Untouched.
When the user sends "I'm bored", the assistant responds with a
suggestion cluster of three options. Tapping one manifests a working widget
— a 5-minute crossword, focus music controls, or a planning form — inside
the canvas.
Prerequisites
- iOS 17+ or macOS 14+ project
- Anthropic, OpenAI, or Apple Foundation Models access
- An existing SwiftUI view hierarchy where you want the canvas to live
Step 1 — Add the package products
In your Package.swift (or Xcode's package dependency UI), add five products:
.product(name: "AINativeUIAgent", package: "AINativeUI"),
.product(name: "AINativeUIAnthropic", package: "AINativeUI"), // or OpenAI / FoundationModels
.product(name: "AINativeUIWidgets", package: "AINativeUI"),
.product(name: "AINativeUIRecipes", package: "AINativeUI"),
.product(name: "AINativeUICore", package: "AINativeUI"),
The Render product is pulled in transitively.
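If you manage dependencies in Package.swift directly, those product entries go in your app target's dependencies array. Here is a minimal sketch of the surrounding manifest; the package URL, version, and target name are placeholders, not real coordinates:

```swift
// swift-tools-version:5.9
import PackageDescription

let package = Package(
    name: "MyChatApp",                        // placeholder package/target name
    platforms: [.iOS(.v17), .macOS(.v14)],
    dependencies: [
        // Placeholder URL and version; point this at the actual AINativeUI repository.
        .package(url: "https://github.com/example/AINativeUI", from: "1.0.0"),
    ],
    targets: [
        .target(
            name: "MyChatApp",
            dependencies: [
                .product(name: "AINativeUIAgent", package: "AINativeUI"),
                .product(name: "AINativeUIAnthropic", package: "AINativeUI"),
                .product(name: "AINativeUIWidgets", package: "AINativeUI"),
                .product(name: "AINativeUIRecipes", package: "AINativeUI"),
                .product(name: "AINativeUICore", package: "AINativeUI"),
            ]
        ),
    ]
)
```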
Step 2 — Construct a Session with the canvas factory
The Policy.canvas(...) factories bundle every wedge-aligned default —
progressive streaming, the .companion style brief, the suggestion-cluster
system prompt section, the layout regularizer, the diagnostics stream. One
line per provider:
```swift
import Foundation
import AINativeUIAgent
import AINativeUIAnthropic

let session = Session(
    policy: .canvas(anthropic: ProcessInfo.processInfo.environment["ANTHROPIC_API_KEY"]!)
)
```
OpenAI and Foundation Models work identically:
```swift
Session(policy: .canvas(openAI: apiKey))
Session(policy: .canvasOnDevice())
```
To customize anything beyond `modelID` and `styleBrief`, drop down to `Policy(modelProvider:...)` directly.
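The full `Policy` initializer isn't covered in this walkthrough, so treat the following as a hypothetical sketch: only `modelProvider`, `modelID`, and `styleBrief` are named above, and the provider factory and argument values shown here are assumptions.

```swift
// Hypothetical sketch: the provider factory and parameter spellings below are
// assumptions, not the documented Policy API. Check the Policy reference before copying.
import Foundation
import AINativeUIAgent
import AINativeUIAnthropic

let apiKey = ProcessInfo.processInfo.environment["ANTHROPIC_API_KEY"]!

let policy = Policy(
    modelProvider: .anthropic(apiKey: apiKey), // assumed factory for the Anthropic provider
    modelID: "your-model-id",                  // whichever model your app already targets
    styleBrief: .companion                     // the brief that .canvas(...) applies by default
)
let session = Session(policy: policy)
```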
API keys in production. Don't ship raw provider keys in a client app.
Either route requests through your backend (write a customModelProvider
that calls your API), or wait for the v2 hosted gateway.
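One low-effort pattern, sketched below under stated assumptions: the app never embeds a provider key and instead fetches a short-lived key from your own backend before constructing the Session. The endpoint URL and response shape are placeholders that AINativeUI does not define.

```swift
import Foundation
import AINativeUIAgent
import AINativeUIAnthropic

// Placeholder response shape; your backend defines the real contract.
struct EphemeralKeyResponse: Decodable {
    let apiKey: String
}

// Fetch a short-lived key from your backend, then build the session with it.
func makeSession() async throws -> Session {
    let url = URL(string: "https://api.example.com/v1/assistant-key")!  // placeholder endpoint
    let (data, _) = try await URLSession.shared.data(from: url)
    let key = try JSONDecoder().decode(EphemeralKeyResponse.self, from: data).apiKey
    return Session(policy: .canvas(anthropic: key))
}
```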
Step 3 — Add AICanvasView to your view hierarchy
AICanvasView is the single SwiftUI view that runs the canvas. Place it wherever suits your chat layout; the `Placement` enum decides how it sizes itself:
```swift
import SwiftUI
import AINativeUIAgent
import AINativeUIWidgets
import AINativeUIRecipes

struct ChatScreen: View {
    let session: Session
    @State private var input = ""

    var body: some View {
        VStack(spacing: 0) {
            AICanvasView(session: session, placement: .adaptive)
                .aiBuiltinWidgets()
                .aiBuiltinRecipes()

            // Your existing chat input — unchanged.
            HStack {
                TextField("Say something…", text: $input)
                Button("Send") {
                    let prompt = input
                    input = ""
                    Task { try await session.send(prompt) }
                }
            }
            .padding()
        }
    }
}
```
Three placement options:
- `.inline` (default): sized by the parent. Use when the canvas lives inside a container with its own height constraint.
- `.fixed(height:)`: pinned to a `CGFloat` height. Use for a fixed canvas region above a chat input.
- `.adaptive`: grows to fit content, capped at the available space. Use for an inline canvas that should expand with rich UI without pushing the chat off-screen.
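For example, the same canvas from the view above, pinned to a fixed region over the chat input (the height value is illustrative):

```swift
// Fixed-height variant of the Step 3 canvas.
AICanvasView(session: session, placement: .fixed(height: 320))
    .aiBuiltinWidgets()
    .aiBuiltinRecipes()
```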
The two .aiBuiltin... modifiers wire the standard widget and recipe
registries so the AI can render every built-in. They're separate from the
canvas because their products live in their own SwiftPM modules — adopters
who don't want all 14 widgets can register a subset manually.
Step 4 — Send a message and watch
Run the app, type "I'm bored", and the assistant should reply with a
suggestion_cluster of three tappable chips. Tap one — the chosen
experience manifests in the canvas. The first time you see it work
end-to-end is the demo moment.
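You can also trigger the same flow from code, for example behind a debug button or in a UI test, using the same `send` call wired to the Send button:

```swift
// Same call the Send button makes; useful as a quick smoke test.
Task { try await session.send("I'm bored") }
```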
Where to go next
- `02-RespondingToCanvasEvents` — let the host react to canvas activity (dim the chat input while a widget runs, log to telemetry, swap an icon on completion)
- `03-RegisteringDomainWidgets` — add app-specific widgets the AI can render alongside the built-ins