Integration · @namzu/http
Namzu × HTTP. Generic OpenAI-compatible endpoint.
@namzu/http is the escape hatch. Point it at any OpenAI-compatible HTTP endpoint — vLLM, Together, Fireworks, Groq, your own gateway, an OpenAI-compatible router — and Namzu treats it like a first-class provider.
01 · Install
$ pnpm add @namzu/sdk @namzu/http
02 · Why pair the kernel with HTTP
The kernel's LLMProvider interface was deliberately small so that one HTTP-shaped provider could cover the long tail of vendors. If your stack needs a model that does not have a dedicated package today, @namzu/http almost certainly already supports it.
- Works with vLLM, Together, Fireworks, Groq, DeepInfra, Anyscale, and any OpenAI-shaped endpoint
- Custom headers and per-request overrides
- Useful for self-hosted inference servers behind a reverse proxy
- Same kernel events and tool-call shape as every other provider
Note: There is no first-party support for endpoints that diverge from the OpenAI shape. For those, write a small provider (roughly 50 lines) against the LLMProvider interface; the SDK ships a starter template.
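To make the note above concrete, here is a hedged sketch of such a custom provider. The real LLMProvider interface ships with @namzu/sdk and may differ; the minimal interface below, and the upstream `{ prompt } → { text }` request/response shapes, are assumptions for illustration only, standing in for an endpoint that does not speak the OpenAI shape.

```typescript
// Hypothetical sketch only: the LLMProvider shape below is an assumption,
// not the interface that actually ships with @namzu/sdk.
interface ChatMessage {
  role: 'system' | 'user' | 'assistant'
  content: string
}

interface LLMProvider {
  complete(messages: ChatMessage[]): Promise<string>
}

// Flatten chat messages into one prompt for an imaginary upstream that
// accepts { prompt } and returns { text } instead of the OpenAI shape.
function flattenMessages(messages: ChatMessage[]): string {
  return messages.map((m) => `${m.role}: ${m.content}`).join('\n')
}

function customProvider(baseUrl: string, apiKey: string): LLMProvider {
  return {
    async complete(messages) {
      const res = await fetch(`${baseUrl}/generate`, {
        method: 'POST',
        headers: {
          'content-type': 'application/json',
          authorization: `Bearer ${apiKey}`,
        },
        body: JSON.stringify({ prompt: flattenMessages(messages) }),
      })
      if (!res.ok) throw new Error(`upstream error: ${res.status}`)
      const data = (await res.json()) as { text: string }
      return data.text
    },
  }
}
```

The whole adapter is the message translation plus one fetch call, which is why the note's "50 lines" estimate is plausible for most non-OpenAI endpoints.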
03 · Example
import { createKernel } from '@namzu/sdk'
import { http } from '@namzu/http'
// Point at any OpenAI-compatible endpoint — vLLM, Together, Fireworks,
// your own gateway, etc.
const kernel = createKernel({
provider: http({
baseUrl: 'https://your-gateway.example.com/v1',
apiKey: process.env.GATEWAY_KEY!,
model: 'whatever-it-serves',
}),
})

Ship HTTP agents on a runtime you can own.
HTTP owns the model. Namzu owns the runtime. The provider package is a thin adapter, not a wrapper, so you keep everything HTTP ships and add the kernel concerns you do not get from the model API alone.
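As one concrete instance of the self-hosted case from the bullet list, the same example can target a local vLLM server. This sketch reuses only the http() options shown above (baseUrl, apiKey, model); the vLLM side (default port 8000, the /v1 path, and the --api-key flag) is standard vLLM behavior, not part of @namzu/http.

```typescript
import { createKernel } from '@namzu/sdk'
import { http } from '@namzu/http'

// vLLM's OpenAI-compatible server listens on http://localhost:8000/v1
// by default, e.g. after: vllm serve meta-llama/Llama-3.1-8B-Instruct
const kernel = createKernel({
  provider: http({
    baseUrl: 'http://localhost:8000/v1',
    apiKey: 'unused-locally', // vLLM only checks keys when started with --api-key
    model: 'meta-llama/Llama-3.1-8B-Instruct', // must match the served model name
  }),
})
```

Swapping baseUrl and model is the entire migration; everything downstream of the provider (kernel events, tool calls) is unchanged.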