Getting Started with AG-UI
AG-UI provides a concise, event-driven protocol that lets any agent stream rich,
structured output to any client. In this quick-start guide, we’ll walk through:
- Scaffolding a new AG-UI integration that wraps OpenAI’s GPT-4o model
- Registering your integration with the dojo, our local web playground
- Streaming responses from OpenAI through AG-UI’s unified interface
Prerequisites
Before we begin, make sure you have:
- Node.js v16 or later
- An OpenAI API key
1. Provide your OpenAI API key
First, let’s set up your API key:
# Set your OpenAI API key
export OPENAI_API_KEY=your-api-key-here
2. Install build utilities
Install pnpm, the package manager used by the AG-UI monorepo:
curl -fsSL https://get.pnpm.io/install.sh | sh -
Step 1 – Scaffold your integration
Start by cloning the repo and navigating to the TypeScript SDK:
git clone git@github.com:ag-ui-protocol/ag-ui.git
cd ag-ui/typescript-sdk
Copy the middleware-starter template to create your OpenAI integration:
cp -r integrations/middleware-starter integrations/openai
Open integrations/openai/package.json and update the fields to match your new folder:
{
  "name": "@ag-ui/openai",
  "author": "Your Name <your-email@example.com>",
  "version": "0.0.1",
  ... rest of package.json
}
Next, update the class name inside integrations/openai/src/index.ts:
// change the name to OpenAIAgent
export class OpenAIAgent extends AbstractAgent {}
Finally, introduce your integration to the dojo by adding it to apps/dojo/src/menu.ts:
// ...
export const menuIntegrations: MenuIntegrationConfig[] = [
  // ...
  configureIntegration({
    id: "openai",
    name: "OpenAI",
    features: ["agentic_chat"],
  }),
]
Then register an agent instance in apps/dojo/src/agents.ts:
// ...
import { OpenAIAgent } from "@ag-ui/openai"

export const agentsIntegrations: AgentIntegrationConfig[] = [
  // ...
  {
    id: "openai",
    agents: async () => {
      return {
        agentic_chat: new OpenAIAgent(),
      }
    },
  },
]
Step 2 – Start the dojo
Now let’s see your work in action:
# Install dependencies
pnpm install
# Compile the project and run the dojo
turbo run dev
Head over to http://localhost:3000 and choose OpenAI from the drop-down. For now, the stub agent simply replies with "Hello world!".
Here’s what’s happening with that stub agent:
// integrations/openai/src/index.ts
import {
  AbstractAgent,
  BaseEvent,
  EventType,
  RunAgentInput,
} from "@ag-ui/client"
import { Observable } from "rxjs"

export class OpenAIAgent extends AbstractAgent {
  protected run(input: RunAgentInput): Observable<BaseEvent> {
    const messageId = Date.now().toString()

    return new Observable<BaseEvent>((observer) => {
      // Lifecycle: announce that the run has started
      observer.next({
        type: EventType.RUN_STARTED,
        threadId: input.threadId,
        runId: input.runId,
      } as any)

      // Content: open a new assistant message
      observer.next({
        type: EventType.TEXT_MESSAGE_START,
        messageId,
      } as any)

      // Content: stream the message body (a single delta here)
      observer.next({
        type: EventType.TEXT_MESSAGE_CONTENT,
        messageId,
        delta: "Hello world!",
      } as any)

      // Content: close the message
      observer.next({
        type: EventType.TEXT_MESSAGE_END,
        messageId,
      } as any)

      // Lifecycle: announce that the run has finished
      observer.next({
        type: EventType.RUN_FINISHED,
        threadId: input.threadId,
        runId: input.runId,
      } as any)

      observer.complete()
    })
  }
}
Step 3 – Bridge OpenAI with AG-UI
Let’s transform our stub into a real agent that streams completions from OpenAI.
Install the OpenAI SDK
First, we need the OpenAI SDK:
cd integrations/openai
pnpm install openai
AG-UI recap
An AG-UI agent extends AbstractAgent and emits a sequence of events to signal:
- lifecycle events (RUN_STARTED, RUN_FINISHED, RUN_ERROR)
- content events (TEXT_MESSAGE_*, TOOL_CALL_*, and more)
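As a quick reference, the happy path for a single text reply emits events in this order (a sketch of the ordering only; the field shapes follow the stub above):

// Happy-path event order for one streamed text reply
import { EventType } from "@ag-ui/client"

const happyPath: EventType[] = [
  EventType.RUN_STARTED,          // lifecycle: run begins
  EventType.TEXT_MESSAGE_START,   // content: open a message
  EventType.TEXT_MESSAGE_CONTENT, // content: repeated once per delta
  EventType.TEXT_MESSAGE_END,     // content: close the message
  EventType.RUN_FINISHED,         // lifecycle: run completed
]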
Implement the streaming agent
Now we’ll transform our stub agent into a real OpenAI integration. The key
difference is that instead of sending a hardcoded “Hello world!” message, we’ll
connect to OpenAI’s API and stream the response back through AG-UI events.
The implementation follows the same event flow as our stub, but we’ll add the
OpenAI client initialization in the constructor and replace our mock response
with actual API calls. We’ll also handle tool calls if they’re present in the
response, making our agent fully capable of using functions when needed.
// integrations/openai/src/index.ts
import {
  AbstractAgent,
  RunAgentInput,
  EventType,
  BaseEvent,
} from "@ag-ui/client"
import { Observable } from "rxjs"
import { OpenAI } from "openai"

export class OpenAIAgent extends AbstractAgent {
  private openai: OpenAI

  constructor(openai?: OpenAI) {
    super()
    // Initialize OpenAI client - uses OPENAI_API_KEY from environment if not provided
    this.openai = openai ?? new OpenAI()
  }

  protected run(input: RunAgentInput): Observable<BaseEvent> {
    return new Observable<BaseEvent>((observer) => {
      // Same as before - emit RUN_STARTED to begin
      observer.next({
        type: EventType.RUN_STARTED,
        threadId: input.threadId,
        runId: input.runId,
      } as any)

      // NEW: Instead of a hardcoded response, call OpenAI's API
      this.openai.chat.completions
        .create({
          model: "gpt-4o",
          stream: true, // Enable streaming for real-time responses
          // Convert AG-UI tools to OpenAI's expected format; omit the field
          // entirely when no tools are supplied, since the API rejects an
          // empty tools array
          tools: input.tools.length
            ? input.tools.map((tool) => ({
                type: "function" as const,
                function: {
                  name: tool.name,
                  description: tool.description,
                  parameters: tool.parameters,
                },
              }))
            : undefined,
          // Transform AG-UI messages to OpenAI's message format
          messages: input.messages.map((message) => ({
            role: message.role as any,
            content: message.content ?? "",
            // Include tool calls if this is an assistant message with tools
            ...(message.role === "assistant" && message.toolCalls
              ? { tool_calls: message.toolCalls }
              : {}),
            // Include tool call ID if this is a tool result message
            ...(message.role === "tool"
              ? { tool_call_id: message.toolCallId }
              : {}),
          })),
        })
        .then(async (response) => {
          const messageId = Date.now().toString()

          // NEW: Stream each chunk from OpenAI's response
          for await (const chunk of response) {
            // Handle text content chunks
            if (chunk.choices[0].delta.content) {
              observer.next({
                type: EventType.TEXT_MESSAGE_CHUNK, // Chunk events open and close messages automatically
                messageId,
                delta: chunk.choices[0].delta.content,
              } as any)
            }
            // Handle tool call chunks (when the model wants to use a function)
            else if (chunk.choices[0].delta.tool_calls) {
              const toolCall = chunk.choices[0].delta.tool_calls[0]

              observer.next({
                type: EventType.TOOL_CALL_CHUNK,
                toolCallId: toolCall.id,
                toolCallName: toolCall.function?.name,
                parentMessageId: messageId,
                delta: toolCall.function?.arguments,
              } as any)
            }
          }

          // Same as before - emit RUN_FINISHED when complete
          observer.next({
            type: EventType.RUN_FINISHED,
            threadId: input.threadId,
            runId: input.runId,
          } as any)
          observer.complete()
        })
        // NEW: Handle errors from the API
        .catch((error) => {
          observer.next({
            type: EventType.RUN_ERROR,
            message: error.message,
          } as any)
          observer.error(error)
        })
    })
  }
}
What happens under the hood?
Let’s break down what your agent is doing:
- Setup – We create an OpenAI client and emit RUN_STARTED
- Request – We send the user's messages to chat.completions with stream: true
- Streaming – We forward each chunk as either TEXT_MESSAGE_CHUNK or TOOL_CALL_CHUNK
- Finish – We emit RUN_FINISHED (or RUN_ERROR if something goes wrong) and complete the observable
Step 4 – Chat with your agent
Reload the dojo page and start typing. You'll see GPT-4o stream its answer in real time, word by word.
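You can also exercise the agent outside the dojo. The harness below is a minimal sketch (the file name, the runForTest wrapper, and the trimmed-down input object are ours, not part of the SDK): it subclasses the agent to reach the protected run method and logs every event.

// integrations/openai/src/smoke-test.ts (hypothetical helper, not part of the repo)
import { BaseEvent, RunAgentInput } from "@ag-ui/client"
import { Observable } from "rxjs"
import { OpenAIAgent } from "./index"

// run() is protected, so expose it through a small test subclass
class TestableOpenAIAgent extends OpenAIAgent {
  public runForTest(input: RunAgentInput): Observable<BaseEvent> {
    return this.run(input)
  }
}

// Only the fields our run() implementation reads are provided here;
// the cast papers over the rest of the RunAgentInput shape
const input = {
  threadId: "thread-1",
  runId: "run-1",
  tools: [],
  messages: [{ id: "1", role: "user", content: "Say hello in five words." }],
} as unknown as RunAgentInput

new TestableOpenAIAgent().runForTest(input).subscribe({
  next: (event) => console.log(event.type, event),
  error: (error) => console.error("run failed:", error),
  complete: () => console.log("run complete"),
})

With OPENAI_API_KEY set, running this (for example with npx tsx src/smoke-test.ts) should print RUN_STARTED, a series of TEXT_MESSAGE_CHUNK events, and finally RUN_FINISHED.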
Bridging AG-UI to any protocol
The pattern you just implemented—translate inputs, forward streaming chunks,
emit AG-UI events—works for virtually any backend:
- REST or GraphQL APIs
- WebSockets
- IoT protocols such as MQTT
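To make that concrete, here is a sketch of the same translate/forward/emit flow over a WebSocket. Everything about the backend is hypothetical (the ws://localhost:8080/agent URL and the { kind, text } frame format are assumptions); only the AG-UI side mirrors the OpenAI agent above.

// integrations/websocket-example/src/index.ts (hypothetical)
import { AbstractAgent, BaseEvent, EventType, RunAgentInput } from "@ag-ui/client"
import { Observable } from "rxjs"
import WebSocket from "ws"

export class WebSocketAgent extends AbstractAgent {
  protected run(input: RunAgentInput): Observable<BaseEvent> {
    return new Observable<BaseEvent>((observer) => {
      const messageId = Date.now().toString()
      const socket = new WebSocket("ws://localhost:8080/agent") // assumed endpoint

      observer.next({
        type: EventType.RUN_STARTED,
        threadId: input.threadId,
        runId: input.runId,
      } as any)

      // Translate inputs: forward the conversation to the backend
      socket.on("open", () => {
        socket.send(JSON.stringify({ messages: input.messages }))
      })

      // Forward streaming chunks as AG-UI events
      socket.on("message", (data) => {
        const frame = JSON.parse(data.toString()) // assumed frame: { kind, text }
        if (frame.kind === "delta") {
          observer.next({
            type: EventType.TEXT_MESSAGE_CHUNK,
            messageId,
            delta: frame.text,
          } as any)
        } else if (frame.kind === "done") {
          observer.next({
            type: EventType.RUN_FINISHED,
            threadId: input.threadId,
            runId: input.runId,
          } as any)
          observer.complete()
          socket.close()
        }
      })

      socket.on("error", (error) => {
        observer.next({ type: EventType.RUN_ERROR, message: error.message } as any)
        observer.error(error)
      })

      // Teardown if the subscriber unsubscribes early
      return () => socket.close()
    })
  }
}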
Connect your agent to a frontend
Tools like CopilotKit already understand AG-UI and
provide plug-and-play React components. Point them at your agent endpoint and
you get a full-featured chat UI out of the box.
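As a sketch of what that can look like in React (assuming a CopilotKit runtime endpoint at /api/copilotkit that fronts your AG-UI agent; the page file and route are hypothetical, and wiring up the runtime itself is out of scope here):

// app/page.tsx (hypothetical Next.js page)
"use client"
import { CopilotKit } from "@copilotkit/react-core"
import { CopilotChat } from "@copilotkit/react-ui"
import "@copilotkit/react-ui/styles.css"

export default function Page() {
  // runtimeUrl points at the endpoint that fronts your AG-UI agent
  return (
    <CopilotKit runtimeUrl="/api/copilotkit">
      <CopilotChat />
    </CopilotKit>
  )
}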
Share your integration
Did you build a custom adapter that others could reuse? We welcome community
contributions!
- Fork the AG-UI repository
- Add your package under integrations/ with docs and tests
- Open a pull request describing your use-case and design decisions
If you have questions, need feedback, or want to validate an idea first, start a
thread in the AG-UI GitHub Discussions board.
Your integration might ship in the next release and help the entire AG-UI
ecosystem grow.
Conclusion
You now have a fully functional AG-UI adapter for OpenAI and a local playground
to test it. From here you can:
- Add tool calls to enhance your agent
- Publish your integration to npm
- Bridge AG-UI to any other model or service
Happy building!