Middleware
Connect to existing protocols, in-process agents, or custom solutions via AG-UI
Introduction
A middleware implementation allows you to translate existing protocols and applications to AG-UI events. This approach creates a bridge between your existing system and AG-UI, making it perfect for adding agent capabilities to current applications.
When to use a middleware implementation
Middleware is the flexible option. It lets you translate existing protocols and applications into AG-UI events, creating a bridge between your existing system and AG-UI.
Middleware is great for:
- Translating your existing protocol or API into a universal interface
- Working within the confines of an existing system or framework
- Integrating when you don't have direct control over the agent framework or system
What you’ll build
In this guide, we’ll create a middleware agent that:
- Extends the `AbstractAgent` class
- Connects to OpenAI's GPT-4o model
- Translates OpenAI responses to AG-UI events
- Runs in-process with your application
This approach gives you maximum flexibility to integrate with existing codebases while maintaining the full power of the AG-UI protocol.
Let’s get started!
Prerequisites
Before we begin, make sure you have:
- Node.js v16 or later
- An OpenAI API key
1. Provide your OpenAI API key
First, let’s set up your API key:
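The OpenAI client used later in this guide reads the key from the `OPENAI_API_KEY` environment variable by default, so exporting it in the shell you will run the dojo from is the simplest option (keep the key out of version control):

```bash
export OPENAI_API_KEY="your-api-key-here"
```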
2. Install build utilities
Install the following tools:
Step 1 – Scaffold your integration
Start by cloning the repo and navigating to the TypeScript SDK:
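Assuming you are working against the main AG-UI repository (swap in your fork's URL if needed):

```bash
git clone https://github.com/ag-ui-protocol/ag-ui.git
cd ag-ui/typescript-sdk
```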
Copy the middleware-starter template to create your OpenAI integration:
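For example, by copying the starter directory into a new `openai` folder:

```bash
cp -r integrations/middleware-starter integrations/openai
```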
Update metadata
Open `integrations/openai/package.json` and update the fields to match your new folder:
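At a minimum the package name should reflect the new folder; the snippet below shows only the fields worth changing (every value besides the `@ag-ui/openai` name is illustrative):

```json
{
  "name": "@ag-ui/openai",
  "description": "AG-UI middleware integration for OpenAI",
  "version": "0.0.1"
}
```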
Next, update the class name inside `integrations/openai/src/index.ts`:
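A minimal sketch of the rename, assuming the starter exports a single agent class from `src/index.ts` and that `AbstractAgent` comes from `@ag-ui/client`:

```typescript
// integrations/openai/src/index.ts
import { AbstractAgent } from "@ag-ui/client"

// Renamed from the middleware-starter's stub class name
export class OpenAIAgent extends AbstractAgent {
  // ...keep the starter's stub implementation for now; we replace it in Step 4...
}
```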
Finally, introduce your integration to the dojo by adding it to `apps/dojo/src/menu.ts`:
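The exact shape of the menu configuration can vary between dojo versions; the entry below is illustrative (the `menuIntegrations` name and fields are assumptions, so mirror whatever the existing entries look like):

```typescript
// apps/dojo/src/menu.ts (illustrative)
export const menuIntegrations = [
  // ...existing integrations...
  {
    id: "openai",
    name: "OpenAI",
  },
]
```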
And `apps/dojo/src/agents.ts`:
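Again mirroring the existing entries, register an instance of your new agent (the `agents` map below is an assumed shape, not the file's guaranteed structure):

```typescript
// apps/dojo/src/agents.ts (illustrative)
import { OpenAIAgent } from "@ag-ui/openai"

export const agents = {
  // ...existing agents...
  openai: new OpenAIAgent(),
}
```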
Step 2 – Add package to dojo dependencies
Open `apps/dojo/package.json` and add the `@ag-ui/openai` package:
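Only the new entry is shown; the version specifier depends on how the monorepo links local packages (`workspace:*` is typical for pnpm workspaces):

```json
{
  "dependencies": {
    "@ag-ui/openai": "workspace:*"
  }
}
```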
Step 3 – Start the dojo
Now let’s see your work in action:
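Assuming the workspace uses pnpm (check the repository's README if your checkout prescribes different commands), install dependencies and start the dev server from `typescript-sdk/`:

```bash
pnpm install
pnpm run dev
```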
Head over to http://localhost:3000 and choose OpenAI from the drop-down. For now, the stub agent simply replies with "Hello world!".
Here’s what’s happening with that stub agent:
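Below is a sketch of what the starter's stub roughly looks like. It assumes `run` returns an RxJS `Observable<BaseEvent>` (as in the client SDK's custom-agent examples) and that event payload fields match the published AG-UI event types; double-check both against the SDK version you installed:

```typescript
import { AbstractAgent, BaseEvent, EventType, RunAgentInput } from "@ag-ui/client"
import { Observable } from "rxjs"

export class OpenAIAgent extends AbstractAgent {
  protected run(input: RunAgentInput): Observable<BaseEvent> {
    const messageId = Date.now().toString()

    return new Observable<BaseEvent>((observer) => {
      // Signal that the run has started
      observer.next({
        type: EventType.RUN_STARTED,
        threadId: input.threadId,
        runId: input.runId,
      } as BaseEvent)

      // Emit a single hardcoded assistant message
      observer.next({
        type: EventType.TEXT_MESSAGE_START,
        messageId,
        role: "assistant",
      } as BaseEvent)
      observer.next({
        type: EventType.TEXT_MESSAGE_CONTENT,
        messageId,
        delta: "Hello world!",
      } as BaseEvent)
      observer.next({
        type: EventType.TEXT_MESSAGE_END,
        messageId,
      } as BaseEvent)

      // Signal that the run has finished, then complete the stream
      observer.next({
        type: EventType.RUN_FINISHED,
        threadId: input.threadId,
        runId: input.runId,
      } as BaseEvent)
      observer.complete()
    })
  }
}
```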
Step 4 – Bridge OpenAI with AG-UI
Let’s transform our stub into a real agent that streams completions from OpenAI.
Install the OpenAI SDK
First, we need the OpenAI SDK:
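From inside `integrations/openai` (use the workspace's package manager if it differs from plain npm):

```bash
npm install openai
```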
AG-UI recap
An AG-UI agent extends `AbstractAgent` and emits a sequence of events to signal:
- Lifecycle events (`RUN_STARTED`, `RUN_FINISHED`, `RUN_ERROR`)
- Content events (`TEXT_MESSAGE_*`, `TOOL_CALL_*`, and more)
Implement the streaming agent
Now we’ll transform our stub agent into a real OpenAI integration. The key difference is that instead of sending a hardcoded “Hello world!” message, we’ll connect to OpenAI’s API and stream the response back through AG-UI events.
The implementation follows the same event flow as our stub, but we’ll add the OpenAI client initialization in the constructor and replace our mock response with actual API calls. We’ll also handle tool calls if they’re present in the response, making our agent fully capable of using functions when needed.
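Here is a sketch of that bridge. It reuses the `run` signature from the stub and assumes the chunk-event fields published in the AG-UI protocol types (`TEXT_MESSAGE_CHUNK` carrying `messageId`/`delta`, `TOOL_CALL_CHUNK` carrying `toolCallId`/`toolCallName`/`parentMessageId`/`delta`); treat it as a starting point rather than a canonical implementation:

```typescript
import { AbstractAgent, BaseEvent, EventType, RunAgentInput } from "@ag-ui/client"
import { Observable } from "rxjs"
import OpenAI from "openai"

export class OpenAIAgent extends AbstractAgent {
  private client: OpenAI

  constructor(client?: OpenAI) {
    super()
    // The OpenAI client reads OPENAI_API_KEY from the environment by default;
    // passing a preconfigured client is handy for tests or proxies.
    this.client = client ?? new OpenAI()
  }

  protected run(input: RunAgentInput): Observable<BaseEvent> {
    return new Observable<BaseEvent>((observer) => {
      observer.next({
        type: EventType.RUN_STARTED,
        threadId: input.threadId,
        runId: input.runId,
      } as BaseEvent)

      this.client.chat.completions
        .create({
          model: "gpt-4o",
          stream: true,
          // Naive mapping of AG-UI messages onto OpenAI's chat format; a production
          // adapter would also forward tool definitions, tool results, etc.
          messages: input.messages.map((message) => ({
            role: message.role,
            content: message.content ?? "",
          })) as any,
        })
        .then(async (stream) => {
          const messageId = Date.now().toString()

          for await (const chunk of stream) {
            const delta = chunk.choices[0]?.delta

            if (delta?.content) {
              // Forward plain text as TEXT_MESSAGE_CHUNK events
              observer.next({
                type: EventType.TEXT_MESSAGE_CHUNK,
                messageId,
                delta: delta.content,
              } as BaseEvent)
            } else if (delta?.tool_calls?.[0]) {
              // Forward tool-call fragments as TOOL_CALL_CHUNK events
              const toolCall = delta.tool_calls[0]
              observer.next({
                type: EventType.TOOL_CALL_CHUNK,
                toolCallId: toolCall.id,
                toolCallName: toolCall.function?.name,
                parentMessageId: messageId,
                delta: toolCall.function?.arguments,
              } as BaseEvent)
            }
          }

          observer.next({
            type: EventType.RUN_FINISHED,
            threadId: input.threadId,
            runId: input.runId,
          } as BaseEvent)
          observer.complete()
        })
        .catch((error) => {
          observer.next({
            type: EventType.RUN_ERROR,
            message: error.message,
          } as BaseEvent)
          observer.error(error)
        })
    })
  }
}
```

The optional constructor argument is only a convenience for injecting a preconfigured client, for example in tests.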
What happens under the hood?
Let’s break down what your agent is doing:
- Setup – We create an OpenAI client and emit `RUN_STARTED`
- Request – We send the user's messages to `chat.completions` with `stream: true`
- Streaming – We forward each chunk as either `TEXT_MESSAGE_CHUNK` or `TOOL_CALL_CHUNK`
- Finish – We emit `RUN_FINISHED` (or `RUN_ERROR` if something goes wrong) and complete the observable
Step 5 – Chat with your agent
Reload the dojo page and start typing. You'll see GPT-4o stream its answer in real time, word by word.
Bridging AG-UI to any protocol
The pattern you just implemented (translate inputs, forward streaming chunks, emit AG-UI events) works for virtually any backend, as the skeleton after this list suggests:
- REST or GraphQL APIs
- WebSockets
- IoT protocols such as MQTT
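As a generic skeleton (the `StreamingBackend` interface and its `send` method are placeholders, not part of any SDK), the same three-step structure carries over to any source of streamed output:

```typescript
import { AbstractAgent, BaseEvent, EventType, RunAgentInput } from "@ag-ui/client"
import { Observable } from "rxjs"

// Placeholder for whatever transport you are bridging (HTTP, WebSocket, MQTT, ...)
interface StreamingBackend {
  send(messages: unknown[], onChunk: (text: string) => void): Promise<void>
}

export class BridgeAgent extends AbstractAgent {
  constructor(private backend: StreamingBackend) {
    super()
  }

  protected run(input: RunAgentInput): Observable<BaseEvent> {
    return new Observable<BaseEvent>((observer) => {
      const messageId = Date.now().toString()

      // 1. Announce the run
      observer.next({ type: EventType.RUN_STARTED, threadId: input.threadId, runId: input.runId } as BaseEvent)

      // 2. Translate inputs and forward streamed chunks as AG-UI content events
      this.backend
        .send(input.messages, (text) =>
          observer.next({ type: EventType.TEXT_MESSAGE_CHUNK, messageId, delta: text } as BaseEvent),
        )
        // 3. Close the run, successfully or with an error
        .then(() => {
          observer.next({ type: EventType.RUN_FINISHED, threadId: input.threadId, runId: input.runId } as BaseEvent)
          observer.complete()
        })
        .catch((error) => {
          observer.next({ type: EventType.RUN_ERROR, message: error.message } as BaseEvent)
          observer.error(error)
        })
    })
  }
}
```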
Connect your agent to a frontend
Tools like CopilotKit already understand AG-UI and provide plug-and-play React components. Point them at your agent endpoint and you get a full-featured chat UI out of the box.
Share your integration
Did you build a custom adapter that others could reuse? We welcome community contributions!
- Fork the AG-UI repository
- Add your package under `typescript-sdk/integrations/` (see Contributing for more details and naming conventions)
- Open a pull request describing your use case and design decisions
If you have questions, need feedback, or want to validate an idea first, start a thread in the AG-UI GitHub Discussions board.
Your integration might ship in the next release and help the entire AG-UI ecosystem grow.
Conclusion
You now have a fully functional AG-UI adapter for OpenAI and a local playground to test it. From here you can:
- Add tool calls to enhance your agent
- Publish your integration to npm
- Bridge AG-UI to any other model or service
Happy building!