# Messages
Messages form the backbone of communication in the AG-UI protocol. They
represent the conversation history between users and AI agents, and provide a
standardized way to exchange information regardless of the underlying AI service
being used.
## Message Structure
AG-UI messages follow a vendor-neutral format, ensuring compatibility across
different AI providers while maintaining a consistent structure. This allows
applications to switch between AI services (like OpenAI, Anthropic, or custom
models) without changing the client-side implementation.
The basic message structure includes:

```typescript
interface BaseMessage {
  id: string // Unique identifier for the message
  role: string // The role of the sender (user, assistant, system, tool, developer)
  content?: string // Optional text content of the message
  name?: string // Optional name of the sender
}
```
## Message Types
AG-UI supports several message types to accommodate different participants in a
conversation:
### User Messages

Messages from the end user to the agent:

```typescript
interface UserMessage {
  id: string
  role: "user"
  content: string // Text input from the user
  name?: string // Optional user identifier
}
```
### Assistant Messages

Messages from the AI assistant to the user:

```typescript
interface AssistantMessage {
  id: string
  role: "assistant"
  content?: string // Text response from the assistant (optional if using tool calls)
  name?: string // Optional assistant identifier
  toolCalls?: ToolCall[] // Optional tool calls made by the assistant
}
```
### System Messages

Instructions or context provided to the agent:

```typescript
interface SystemMessage {
  id: string
  role: "system"
  content: string // Instructions or context for the agent
  name?: string // Optional identifier
}
```
### Tool Messages

Results from tool executions:

```typescript
interface ToolMessage {
  id: string
  role: "tool"
  content: string // Result from the tool execution
  toolCallId: string // ID of the tool call this message responds to
}
```
### Developer Messages

Internal messages used for development or debugging:

```typescript
interface DeveloperMessage {
  id: string
  role: "developer"
  content: string
  name?: string
}
```
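Taken together, these role-specific shapes can be modeled as a single discriminated union on `role`. The sketch below is illustrative (the type names are assumptions, not necessarily the SDK's exports); it shows how narrowing on `role` gives type-safe access to fields like `toolCalls` and `toolCallId`:

```typescript
// Illustrative union over the role-specific message shapes shown above.
// Type names here are assumptions, not necessarily the SDK's exports.
interface ToolCall {
  id: string
  type: "function"
  function: { name: string; arguments: string }
}

type Message =
  | { id: string; role: "user"; content: string; name?: string }
  | { id: string; role: "assistant"; content?: string; name?: string; toolCalls?: ToolCall[] }
  | { id: string; role: "system"; content: string; name?: string }
  | { id: string; role: "tool"; content: string; toolCallId: string }
  | { id: string; role: "developer"; content: string; name?: string }

// Narrowing on `role` makes role-specific fields visible to the type checker.
function describe(msg: Message): string {
  switch (msg.role) {
    case "tool":
      return `tool result for ${msg.toolCallId}`
    case "assistant":
      return `assistant message with ${msg.toolCalls?.length ?? 0} tool call(s)`
    default:
      return `${msg.role} message`
  }
}
```

A `switch` on `role` like this is the usual way clients route incoming messages to the right rendering logic.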
## Vendor Neutrality
AG-UI messages are designed to be vendor-neutral, meaning they can be easily
mapped to and from proprietary formats used by various AI providers:
```typescript
// Example: Converting AG-UI messages to OpenAI format
const openaiMessages = agUiMessages
  .filter((msg) => ["user", "system", "assistant"].includes(msg.role))
  .map((msg) => ({
    role: msg.role as "user" | "system" | "assistant",
    content: msg.content || "",
    // Map tool calls if present
    ...(msg.role === "assistant" && msg.toolCalls
      ? {
          tool_calls: msg.toolCalls.map((tc) => ({
            id: tc.id,
            type: tc.type,
            function: {
              name: tc.function.name,
              arguments: tc.function.arguments,
            },
          })),
        }
      : {}),
  }))
```
This abstraction allows AG-UI to serve as a common interface regardless of the
underlying AI service.
## Message Synchronization
Messages can be synchronized between client and server through two primary
mechanisms:
### Complete Snapshots

The `MESSAGES_SNAPSHOT` event provides a complete view of all messages in a conversation:

```typescript
interface MessagesSnapshotEvent {
  type: EventType.MESSAGES_SNAPSHOT
  messages: Message[] // Complete array of all messages
}
```
This is typically used:
- When initializing a conversation
- After connection interruptions
- When major state changes occur
- To ensure client-server synchronization
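As a minimal sketch of the client side, a snapshot can simply replace the local history wholesale, so the client converges on the server's view in one step. The `MessageStore` class and the string-literal event type below are illustrative, not part of the protocol:

```typescript
// Minimal message shape for this sketch.
interface Message {
  id: string
  role: string
  content?: string
}

// A string literal stands in for EventType.MESSAGES_SNAPSHOT here.
interface MessagesSnapshotEvent {
  type: "MESSAGES_SNAPSHOT"
  messages: Message[]
}

// Illustrative client-side store: applying a snapshot discards any
// local state and adopts the server's complete message list.
class MessageStore {
  private messages: Message[] = []

  applySnapshot(event: MessagesSnapshotEvent): void {
    this.messages = [...event.messages]
  }

  all(): Message[] {
    return this.messages
  }
}
```

Replacing rather than merging is what makes the snapshot safe after connection interruptions: no diffing against possibly stale local state is required.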
### Streaming Messages
For real-time interactions, new messages can be streamed as they’re generated:
1. **Start a message**: Indicate a new message is being created

   ```typescript
   interface TextMessageStartEvent {
     type: EventType.TEXT_MESSAGE_START
     messageId: string
     role: string
   }
   ```

2. **Stream content**: Send content chunks as they become available

   ```typescript
   interface TextMessageContentEvent {
     type: EventType.TEXT_MESSAGE_CONTENT
     messageId: string
     delta: string // Text chunk to append
   }
   ```

3. **End a message**: Signal the message is complete

   ```typescript
   interface TextMessageEndEvent {
     type: EventType.TEXT_MESSAGE_END
     messageId: string
   }
   ```
This streaming approach provides a responsive user experience with immediate
feedback.
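The three events above can be folded into a growing draft message on the client. The following sketch is illustrative: string literals stand in for the `EventType` enum, and `DraftMessage` is a hypothetical client-side type:

```typescript
// String literals stand in for the EventType enum in this sketch.
interface TextMessageStartEvent {
  type: "TEXT_MESSAGE_START"
  messageId: string
  role: string
}
interface TextMessageContentEvent {
  type: "TEXT_MESSAGE_CONTENT"
  messageId: string
  delta: string
}
interface TextMessageEndEvent {
  type: "TEXT_MESSAGE_END"
  messageId: string
}
type TextMessageEvent =
  | TextMessageStartEvent
  | TextMessageContentEvent
  | TextMessageEndEvent

// Hypothetical client-side draft of a message still being streamed.
interface DraftMessage {
  id: string
  role: string
  content: string
  complete: boolean
}

// Accumulate streamed deltas into per-message drafts, keyed by messageId.
function applyTextEvent(drafts: Map<string, DraftMessage>, event: TextMessageEvent): void {
  switch (event.type) {
    case "TEXT_MESSAGE_START":
      drafts.set(event.messageId, {
        id: event.messageId,
        role: event.role,
        content: "",
        complete: false,
      })
      break
    case "TEXT_MESSAGE_CONTENT": {
      const draft = drafts.get(event.messageId)
      if (draft) draft.content += event.delta // append the chunk
      break
    }
    case "TEXT_MESSAGE_END": {
      const draft = drafts.get(event.messageId)
      if (draft) draft.complete = true
      break
    }
  }
}
```

Because deltas carry the `messageId`, a client can render the partial content after every event, which is what produces the familiar token-by-token typing effect.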
## Tool Integration

AG-UI messages elegantly integrate tool usage, allowing agents to perform actions and process their results.

Tool calls are embedded within assistant messages:
```typescript
interface ToolCall {
  id: string // Unique ID for this tool call
  type: "function" // Type of tool call
  function: {
    name: string // Name of the function to call
    arguments: string // JSON-encoded string of arguments
  }
}
```
Example assistant message with tool calls:

```typescript
{
  id: "msg_123",
  role: "assistant",
  content: "I'll help you with that calculation.",
  toolCalls: [
    {
      id: "call_456",
      type: "function",
      function: {
        name: "calculate",
        arguments: '{"expression": "24 * 7"}'
      }
    }
  ]
}
```
Results from tool executions are represented as tool messages:

```typescript
{
  id: "result_789",
  role: "tool",
  content: "168",
  toolCallId: "call_456" // References the original tool call
}
```
This creates a clear chain of tool usage:
- Assistant requests a tool call
- Tool executes and returns a result
- Assistant can reference and respond to the result
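Because each tool message carries a `toolCallId`, pairing results with the calls that produced them is a simple lookup. A sketch under that assumption (the `pairToolResults` helper is hypothetical, not an SDK function):

```typescript
interface ToolCall {
  id: string
  type: "function"
  function: { name: string; arguments: string }
}
interface AssistantMessage {
  id: string
  role: "assistant"
  content?: string
  toolCalls?: ToolCall[]
}
interface ToolMessage {
  id: string
  role: "tool"
  content: string
  toolCallId: string
}

// Hypothetical helper: match each tool result back to the assistant's
// tool call it answers, via the toolCallId link.
function pairToolResults(
  assistant: AssistantMessage,
  results: ToolMessage[]
): Array<{ call: ToolCall; result?: ToolMessage }> {
  const byCallId = new Map(results.map((r) => [r.toolCallId, r]))
  return (assistant.toolCalls ?? []).map((call) => ({
    call,
    result: byCallId.get(call.id), // undefined while the tool is still running
  }))
}
```

Leaving `result` optional lets a UI show a pending state for tool calls whose results have not arrived yet.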
Similar to text messages, tool calls can be streamed to provide real-time
visibility into the agent’s actions:
1. **Start a tool call**:

   ```typescript
   interface ToolCallStartEvent {
     type: EventType.TOOL_CALL_START
     toolCallId: string
     toolCallName: string
     parentMessageId?: string // Optional link to parent message
   }
   ```

2. **Stream arguments**:

   ```typescript
   interface ToolCallArgsEvent {
     type: EventType.TOOL_CALL_ARGS
     toolCallId: string
     delta: string // JSON fragment to append to arguments
   }
   ```

3. **End a tool call**:

   ```typescript
   interface ToolCallEndEvent {
     type: EventType.TOOL_CALL_END
     toolCallId: string
   }
   ```
This allows frontends to show tools being invoked progressively as the agent
constructs its reasoning.
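Mirroring the text-message case, a client can concatenate the streamed argument fragments per `toolCallId` and parse the JSON only once the call ends, since intermediate fragments are generally not valid JSON on their own. An illustrative sketch (string-literal event types and `PendingToolCall` are assumptions, not SDK types):

```typescript
// String literals stand in for the EventType enum in this sketch.
interface ToolCallStartEvent {
  type: "TOOL_CALL_START"
  toolCallId: string
  toolCallName: string
  parentMessageId?: string
}
interface ToolCallArgsEvent {
  type: "TOOL_CALL_ARGS"
  toolCallId: string
  delta: string
}
interface ToolCallEndEvent {
  type: "TOOL_CALL_END"
  toolCallId: string
}
type ToolCallEvent = ToolCallStartEvent | ToolCallArgsEvent | ToolCallEndEvent

// Hypothetical client-side record of a tool call still being streamed.
interface PendingToolCall {
  name: string
  args: string
  done: boolean
}

// Concatenate streamed JSON fragments; parse only after TOOL_CALL_END.
function applyToolCallEvent(calls: Map<string, PendingToolCall>, event: ToolCallEvent): void {
  switch (event.type) {
    case "TOOL_CALL_START":
      calls.set(event.toolCallId, { name: event.toolCallName, args: "", done: false })
      break
    case "TOOL_CALL_ARGS": {
      const call = calls.get(event.toolCallId)
      if (call) call.args += event.delta
      break
    }
    case "TOOL_CALL_END": {
      const call = calls.get(event.toolCallId)
      if (call) call.done = true
      break
    }
  }
}
```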
## Practical Example
Here’s a complete example of a conversation with tool usage:
```typescript
// Conversation history
const messages = [
  // User query
  {
    id: "msg_1",
    role: "user",
    content: "What's the weather in New York?",
  },
  // Assistant response with tool call
  {
    id: "msg_2",
    role: "assistant",
    content: "Let me check the weather for you.",
    toolCalls: [
      {
        id: "call_1",
        type: "function",
        function: {
          name: "get_weather",
          arguments: '{"location": "New York", "unit": "celsius"}',
        },
      },
    ],
  },
  // Tool result
  {
    id: "result_1",
    role: "tool",
    content: '{"temperature": 22, "condition": "Partly Cloudy", "humidity": 65}',
    toolCallId: "call_1",
  },
  // Assistant's final response using tool results
  {
    id: "msg_3",
    role: "assistant",
    content: "The weather in New York is partly cloudy with a temperature of 22°C and 65% humidity.",
  },
]
```
## Conclusion
The message structure in AG-UI enables sophisticated conversational AI
experiences while maintaining vendor neutrality. By standardizing how messages
are represented, synchronized, and streamed, AG-UI provides a consistent way to
implement interactive human-agent communication regardless of the underlying AI
service.
This system supports everything from simple text exchanges to complex tool-based
workflows, all while optimizing for both real-time responsiveness and efficient
data transfer.