Integration · @namzu/ollama
Namzu × Ollama. Local model runner.
@namzu/ollama talks to a local Ollama daemon over the standard HTTP API. Nothing leaves the machine; the kernel's sandboxing and scheduling primitives still apply, just without the hop to a remote provider.
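Before wiring up the kernel, the daemon is easy to sanity-check over that same API. A minimal preflight in TypeScript (ESM, so top-level await works), using Ollama's standard /api/tags endpoint, which lists locally pulled models; nothing here touches Namzu:

// Confirm the Ollama daemon is reachable and list the models it has pulled.
const res = await fetch('http://localhost:11434/api/tags')
if (!res.ok) throw new Error(`Ollama daemon not reachable: HTTP ${res.status}`)
const { models } = (await res.json()) as { models: { name: string }[] }
console.log('local models:', models.map((m) => m.name))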
01 · Install
$ pnpm add @namzu/sdk @namzu/ollama
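The adapter assumes a daemon that is already running with the model pulled. If that is not set up yet, both take one command; the tag matches the example in 03, and ollama serve is only needed when the daemon is not already running as a service:

$ ollama pull llama3.2:3b
$ ollama serve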
02 · Why pair the kernel with Ollama
The cleanest way to build local-first AI tooling. Ollama owns model weights and inference; Namzu owns the runtime around them — process boundaries, conversation persistence, tool orchestration. The combination ships well as a desktop or on-prem product.
- Fully offline operation
- Llama, Qwen, Gemma, Mistral, DeepSeek, Phi, and any GGUF-supported model
- Same kernel API as a hosted provider — code is portable (see the sketch after this list)
- Useful for development, redaction layers, and cost-sensitive workloads
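A sketch of what that portability means in practice. The hosted adapter here is hypothetical: @namzu/openai and openai() are placeholders for whichever hosted provider package you actually use, since this page only documents the Ollama one. Only the provider line changes:

import { createKernel } from '@namzu/sdk'
import { ollama } from '@namzu/ollama'
// import { openai } from '@namzu/openai' // hypothetical hosted adapter

// Local today:
const kernel = createKernel({
  provider: ollama({ baseUrl: 'http://localhost:11434', model: 'llama3.2:3b' }),
})

// Hosted later: same kernel, a different provider line, no other changes.
// const kernel = createKernel({ provider: openai({ model: 'gpt-4o-mini' }) })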
03 · Example
import { createKernel } from '@namzu/sdk'
import { ollama } from '@namzu/ollama'

// Local-first. Nothing leaves the machine.
const kernel = createKernel({
  provider: ollama({
    baseUrl: 'http://localhost:11434', // Ollama's default address
    model: 'llama3.2:3b', // any locally pulled tag works
  }),
})

Ship Ollama agents on a runtime you can own.
Ollama owns the model. Namzu owns the runtime. The provider package is a thin adapter, not a wrapper, so you keep everything Ollama ships and add the kernel concerns you do not get from the model API alone.
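Concretely, "thin adapter" here means roughly one mapping: the kernel's completion request becomes a call to Ollama's /api/chat endpoint, and the reply comes back untouched. A hedged sketch of that round trip, not the package source; the function name and request/reply types are illustrative, but the endpoint, the stream: false flag, and the reply's message field are Ollama's documented API:

// One non-streaming chat round trip against Ollama's /api/chat.
interface ChatMessage {
  role: 'system' | 'user' | 'assistant'
  content: string
}

async function chatOnce(baseUrl: string, model: string, messages: ChatMessage[]): Promise<string> {
  const res = await fetch(`${baseUrl}/api/chat`, {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify({ model, messages, stream: false }),
  })
  if (!res.ok) throw new Error(`ollama: HTTP ${res.status}`)
  const data = (await res.json()) as { message: ChatMessage }
  return data.message.content
}

Everything the kernel adds on top (sandboxing, persistence, tool orchestration) sits above this call rather than inside it.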