Integration · @namzu/openai
Namzu × OpenAI. GPT and o-series model API.
@namzu/openai targets the OpenAI Chat Completions and Responses APIs. Function calling maps to the kernel's tool primitive; streaming, JSON mode, and structured outputs pass through unchanged.
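For reference, this is the standard Chat Completions tool definition shape that the provider maps onto the kernel's tool primitive. The shape below is OpenAI's documented format; the `weatherTool` name and schema are illustrative, not part of @namzu/openai:

```typescript
// A standard OpenAI Chat Completions function-calling tool definition.
// Parameter schemas are plain JSON Schema; the provider translates
// entries like this into the kernel's tool primitive.
const weatherTool = {
  type: 'function' as const,
  function: {
    name: 'get_weather',
    description: 'Look up current weather for a city.',
    parameters: {
      type: 'object',
      properties: {
        city: { type: 'string', description: 'City name, e.g. "Berlin"' },
      },
      required: ['city'],
    },
  },
}
```

Because the provider keeps this shape, tool definitions written for the raw OpenAI SDK carry over without rewriting.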
01 · Install
$ pnpm add @namzu/sdk @namzu/openai
02 · Why pair the kernel with OpenAI
Most TypeScript codebases already speak the OpenAI SDK shape. The Namzu provider keeps that familiarity and adds the runtime concerns — process isolation, scheduling, checkpoints — that the OpenAI SDK alone does not solve.
- Function calling, parallel tool calls, and JSON mode
- Streaming surfaced as kernel events
- Vision and audio inputs
- Drop-in for any service that exposes an OpenAI-compatible endpoint via @namzu/http
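Surfacing streaming as kernel events amounts to folding Chat Completions delta chunks into a message. The chunk shape below mirrors OpenAI's documented stream format; the `accumulate` helper is an illustrative sketch of that fold, not Namzu's actual implementation:

```typescript
// Minimal sketch: fold OpenAI-style streaming chunks into full text.
// Each chunk carries a delta with a content fragment; the provider
// would emit one kernel event per fragment.
type StreamChunk = {
  choices: { delta: { content?: string }; finish_reason: string | null }[]
}

function accumulate(chunks: StreamChunk[]): string {
  return chunks.map((c) => c.choices[0]?.delta.content ?? '').join('')
}

const chunks: StreamChunk[] = [
  { choices: [{ delta: { content: 'Hel' }, finish_reason: null }] },
  { choices: [{ delta: { content: 'lo' }, finish_reason: 'stop' }] },
]

console.log(accumulate(chunks)) // → Hello
```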
03 · Example
import { createKernel } from '@namzu/sdk'
import { openai } from '@namzu/openai'

const kernel = createKernel({
  provider: openai({
    apiKey: process.env.OPENAI_API_KEY!,
    model: 'gpt-5.1',
  }),
})

await kernel.spawn({ tools: [...] }).run("Plan tomorrow's release.")
Ship OpenAI agents on a runtime you can own.
OpenAI owns the model. Namzu owns the runtime. The provider package is a thin adapter, not a wrapper, so you keep everything OpenAI ships and add the kernel concerns you do not get from the model API alone.