Integration · @namzu/lmstudio
Namzu × LM Studio. Desktop local model server.
@namzu/lmstudio targets LM Studio's OpenAI-compatible local server. Identical surface to the OpenAI provider, pointed at localhost.
01 · Install
$ pnpm add @namzu/sdk @namzu/lmstudio
02 · Why pair the kernel with LM Studio
LM Studio gives a polished UX for downloading, loading, and serving local models; Namzu gives the runtime layer to build real applications on top. A useful pairing for desktop assistants, internal tooling, and privacy-sensitive workflows.
- →OpenAI-compatible endpoint — code parity with the OpenAI provider
- →Model swap from the LM Studio UI without touching agent code
- →Multiple loaded models served concurrently
- →GPU offload controlled at the LM Studio layer
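Code parity is the point of the first bullet: the request shape LM Studio accepts is the same OpenAI-compatible shape you would send to a hosted endpoint, so only the base URL changes. A minimal sketch of that idea, assuming nothing from @namzu — `buildChatRequest` and its types are illustrative helpers, not part of any package; only the `/chat/completions` path is LM Studio's:

```typescript
// Illustrative types for the OpenAI-compatible chat payload.
interface ChatMessage {
  role: 'system' | 'user' | 'assistant'
  content: string
}

interface ChatRequest {
  url: string
  body: { model: string; messages: ChatMessage[] }
}

// Pure helper: the same payload works against a hosted OpenAI-compatible
// endpoint or LM Studio on localhost — only baseUrl differs.
function buildChatRequest(
  baseUrl: string,
  model: string,
  messages: ChatMessage[],
): ChatRequest {
  return {
    url: `${baseUrl}/chat/completions`,
    body: { model, messages },
  }
}

const req = buildChatRequest('http://localhost:1234/v1', 'qwen2.5-coder-14b', [
  { role: 'user', content: 'Hello' },
])
```

Swapping providers means swapping `baseUrl`; the agent code that builds messages never changes.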
03 · Example
import { createKernel } from '@namzu/sdk'
import { lmstudio } from '@namzu/lmstudio'

const kernel = createKernel({
  provider: lmstudio({
    // LM Studio's local server default address and port
    baseUrl: 'http://localhost:1234/v1',
    model: 'qwen2.5-coder-14b',
  }),
})
Ship LM Studio agents on a runtime you can own.
LM Studio owns the model. Namzu owns the runtime. The provider package is a thin adapter, not a wrapper, so you keep everything LM Studio ships and add the kernel concerns you do not get from the model API alone.
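One way to keep the "swap models from the LM Studio UI without touching agent code" promise is to keep endpoint and model out of the source entirely. A sketch under stated assumptions — the `LMSTUDIO_BASE_URL` / `LMSTUDIO_MODEL` env names and `resolveConfig` helper are hypothetical, not defined by @namzu/lmstudio:

```typescript
// Illustrative config shape matching the lmstudio() options above.
interface LmStudioConfig {
  baseUrl: string
  model: string
}

// Resolve from environment with LM Studio's usual local defaults,
// so a model swap is a restart, not a code change.
function resolveConfig(env: Record<string, string | undefined>): LmStudioConfig {
  return {
    baseUrl: env.LMSTUDIO_BASE_URL ?? 'http://localhost:1234/v1',
    model: env.LMSTUDIO_MODEL ?? 'qwen2.5-coder-14b',
  }
}

const cfg = resolveConfig({ LMSTUDIO_MODEL: 'llama-3.1-8b' })
```

The provider call then becomes `lmstudio(resolveConfig(process.env))`, and the rest of the kernel setup never mentions a model name.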