# Provider Onboarding Configuration Guide
This guide explains how to add a new LLM provider in two supported ways:

- interactive CLI (`pb auth config` / `pb auth add-provider`)
- direct file editing (`llm-config.json` + `credentials.json`)
It is intended as a practical companion to:

- `docs/schemas/llm-config.schema.json`
- `docs/schemas/credentials.schema.json`
## TL;DR
- Provider metadata goes in `llm-config.json` (`providers`, optional `models`, optional `providerAliases`).
- Provider credentials go in `credentials.json` (`providers.<providerId>`).
- For OpenAI-protocol providers, you can fetch model IDs from `/v1/models` in `pb auth config` after setting `baseUrl` + `apiKey`.
## Config File Paths

```
~/.config/ponybunny/llm-config.json
~/.config/ponybunny/credentials.json
```
## Method A: Add Provider via `pb auth config`

### Step 1: Open the interactive config wizard

```
pb auth config
```
In the provider menu, choose:
```
+ Add provider (wizard)
```
### Step 2: Fill provider metadata
The wizard asks for:
- `providerId` (letters/numbers/`-`/`_`, must be unique)
- `protocol` (`openai` / `anthropic` / `gemini` / `codex`)
- `type` (`api` or `oauth`)
- `baseUrl` (optional, but typically required for custom endpoints)
- `priority` (lower = preferred)
- `enabled` (on/off)
These values are written to `llm-config.json` -> `providers.<providerId>`.
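For reference, a completed wizard run produces an entry shaped like the following (the provider ID and URL here are hypothetical placeholders; the field set matches the Method B example below):

```json
{
  "providers": {
    "my-provider": {
      "enabled": true,
      "protocol": "openai",
      "type": "api",
      "baseUrl": "https://example.com/v1",
      "priority": 3
    }
  }
}
```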
### Step 3: Set credentials
For `type=api`, the wizard also prompts for an API key (optional at creation time; it can be set later).
Credentials are written to:

`credentials.json` -> `providers.<providerId>`
### Step 4 (optional): Fetch models for an OpenAI-protocol provider
Inside provider configuration in `pb auth config`, choose:

```
Fetch models from /v1/models
```
Requirements:
- provider `protocol` must be `openai`
- `apiKey` must be set
- `baseUrl` must be set
Selected model IDs are added to `llm-config.json` `models` as:

`<providerId>.<modelId>`

with a default model metadata scaffold.
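Assuming the model metadata fields shown in Method B below, a fetched model might land as something like this (the exact defaults the scaffold fills in may differ):

```json
{
  "models": {
    "my-provider.my-model": {
      "displayName": "my-model",
      "costPer1kTokens": { "input": 0, "output": 0 },
      "capabilities": ["text"]
    }
  }
}
```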
## Method B: Add Provider by Editing Config Files Directly
Use this method when you want deterministic infra-managed changes.
### 1) Add provider metadata (`llm-config.json`)

```json
{
  "providers": {
    "openai-compatible-local": {
      "enabled": true,
      "protocol": "openai",
      "type": "api",
      "baseUrl": "http://localhost:8000/v1",
      "priority": 3
    }
  }
}
```
### 2) Add credentials (`credentials.json`)

```json
{
  "providers": {
    "openai-compatible-local": {
      "apiKey": "local-dev-token",
      "baseUrl": "http://localhost:8000/v1"
    }
  }
}
```
### 3) (Optional) Add model entries (`llm-config.json`)

```json
{
  "models": {
    "openai-compatible-local.qwen2.5-coder-32b": {
      "displayName": "Qwen2.5 Coder 32B",
      "costPer1kTokens": { "input": 0, "output": 0 },
      "capabilities": ["text", "function-calling"]
    }
  }
}
```
### 4) (Optional) Add alias and tier usage

```json
{
  "providerAliases": {
    "local-openai": {
      "protocol": "openai",
      "providers": ["openai-compatible-local"]
    }
  },
  "tiers": {
    "medium": {
      "primary": "openai-compatible-local.qwen2.5-coder-32b",
      "fallback": ["openai.gpt-5.2"]
    }
  }
}
```
## /v1 URL Rules and Runtime Handling
When onboarding OpenAI-style providers (`protocol = openai`), URL composition is:

- resolved `baseUrl` + model endpoint path (for example `/v1/responses`)
### Resolution Priority for `baseUrl`
The runtime resolves a provider's base URL in this order:

1. `credentials.json` -> `providers.<providerId>.baseUrl`
2. `credentials.json` -> `providers.<providerId>.endpoint` (mainly Azure)
3. `llm-config.json` -> `providers.<providerId>.baseUrl`

So if both files define `baseUrl`, `credentials.json` wins.
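For example, given a hypothetical provider `my-openai` with a base URL set in both files:

`llm-config.json`:

```json
{ "providers": { "my-openai": { "baseUrl": "https://a.example/v1" } } }
```

`credentials.json`:

```json
{ "providers": { "my-openai": { "baseUrl": "https://b.example/v1" } } }
```

the runtime resolves `https://b.example/v1`, because `credentials.json` is checked first.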
### /v1 Segment Rules
For OpenAI-compatible APIs, the final request path should contain exactly one version segment where one is required.

Valid patterns:

- `baseUrl = https://host` and endpoint path includes `/v1/...`
- `baseUrl = https://host/v1` and endpoint path omits `/v1` (for example `/responses`)
Avoid:
- `baseUrl` without `/v1` + endpoint path without `/v1` (can produce versionless paths like `/responses`)
### How the System Handles Duplicates

The PonyBunny runtime composes OpenAI-style URLs so that a duplicate `/v1` segment is avoided when both the base URL and the endpoint path include the version prefix.
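That dedup behavior can be sketched roughly as follows. This is a hypothetical `compose_url` helper for illustration only, not the runtime's actual code:

```python
def compose_url(base_url: str, endpoint: str) -> str:
    """Join a provider base URL and an endpoint path, avoiding a
    duplicate /v1 segment when both sides carry the version prefix.

    Illustrative sketch; the real runtime logic may differ.
    """
    base = base_url.rstrip("/")
    path = "/" + endpoint.lstrip("/")
    # If the base already ends in /v1 and the endpoint repeats it, drop one.
    if base.endswith("/v1") and path.startswith("/v1/"):
        path = path[len("/v1"):]
    return base + path
```

All three valid combinations above then resolve to the same versioned path, e.g. `compose_url("https://host/v1", "/v1/responses")` and `compose_url("https://host", "/v1/responses")` both yield `https://host/v1/responses`.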
Practical recommendation:
- pick one consistent style per provider and keep it stable across `llm-config.json` and `credentials.json`.
### Azure Note
Azure OpenAI uses deployment-style paths and `api-version`; do not force `/v1` as a global rule for Azure endpoints.
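For contrast, an Azure OpenAI request path is deployment-scoped and versioned via a query parameter rather than a `/v1` segment. A typical shape (resource, deployment, and api-version values are placeholders):

```
https://<resource>.openai.azure.com/openai/deployments/<deployment>/chat/completions?api-version=<api-version>
```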
## Validation and Verification Checklist
After adding a provider:
- Ensure JSON is schema-valid (`llm-config.schema.json`, `credentials.schema.json`).
- Ensure the provider is `enabled: true` in `llm-config.json`.
- Ensure credentials exist for that same provider ID.
- If using a custom OpenAI endpoint, verify `baseUrl` includes the expected version path for that backend.
- Run a quick runtime check:

```
pb status
```
## Test Newly Added Provider and Models
After onboarding, use the `pb models` commands to verify enablement, routing visibility, and endpoint health.
### 1) Confirm provider/model appears in runtime list

```
pb models list
```
Check that:
- your provider is listed as enabled
- your model key (`<providerId>.<modelId>`) appears in the catalog
### 2) Run a functional model invocation test

```
pb models test --model <providerId>.<modelId>
```
Use this to verify end-to-end callability (credentials + `baseUrl` + model resolution).
### 3) Run provider/model health probe

```
pb models probe
```
Use probe output to validate health/availability signals before relying on the provider in tiers/workloads.
Recommended sequence:
1. `pb models list`
2. `pb models test --model <providerId>.<modelId>`
3. `pb models probe`
## Common Pitfalls
- Provider ID mismatch between `llm-config.json` and `credentials.json`.
- `protocol=openai` provider without `baseUrl` when the endpoint is not the OpenAI default.
- Model key typo: must follow `<providerId>.<modelId>` naming.
- Setting provider metadata but never adding credentials.
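As a concrete instance of the first pitfall, the two IDs below do not match (`local-ai` vs `localai`, both hypothetical), so the provider has metadata but resolves no credentials:

`llm-config.json`:

```json
{ "providers": { "local-ai": { "enabled": true, "protocol": "openai", "type": "api" } } }
```

`credentials.json`:

```json
{ "providers": { "localai": { "apiKey": "local-dev-token" } } }
```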