
# 00 · Architecture & Patterns

This doc captures the patterns a new provider must follow. Read once; the per-provider docs assume this knowledge.

## Pattern 1: Factory + Registry (dynamic imports only)

Every provider is registered in `src/lib/factories/providerRegistry.ts` inside `ProviderRegistry._doRegister()` via:

```typescript
ProviderFactory.registerProvider(
  AIProviderName.<NAME>,
  async (modelName?, _providerName?, sdk?, _region?, credentials?) => {
    const creds = credentials as NeurolinkCredentials["<key>"];
    const { <Provider>Provider } = await import("../providers/<file>.js");
    return new <Provider>Provider(modelName, sdk as NeuroLink | undefined, undefined, creds);
  },
  <Models>.DEFAULT_OR_FROM_ENV,
  ["alias1", "alias2"],
);
```

**Critical (CLAUDE.md rule #1):** the import inside the factory must be dynamic (`await import(...)`). Static imports create circular dependencies because providers transitively import from the registry's siblings.
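Filled in for a concrete provider, the template reads like this (a sketch: the enum member, credentials key, models enum, and alias are illustrative assumptions consistent with the DeepSeek examples later in this doc):

```typescript
ProviderFactory.registerProvider(
  AIProviderName.DEEPSEEK, // assumed enum member added in enums.ts
  async (modelName?, _providerName?, sdk?, _region?, credentials?) => {
    const creds = credentials as NeurolinkCredentials["deepseek"];
    // Dynamic import, per CLAUDE.md rule #1; never a top-level static import.
    const { DeepSeekProvider } = await import("../providers/deepseek.js");
    return new DeepSeekProvider(modelName, sdk as NeuroLink | undefined, undefined, creds);
  },
  DeepSeekModels.DEFAULT_OR_FROM_ENV, // assumed <Models> enum
  ["deep-seek"],
);
```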

## Pattern 2: BaseProvider contract

`src/lib/core/baseProvider.ts:59` defines `abstract class BaseProvider implements AIProvider`. A new provider supplies the following:

### Required overrides (5 abstract methods)

| Method | Line | Signature | Purpose |
| --- | --- | --- | --- |
| `executeStream` | 1203 | `protected abstract executeStream(options: StreamOptions, analysisSchema?: ValidationSchema): Promise<StreamResult>` | The streaming hot path. Use `streamText({ model, messages, tools, ... })` from the AI SDK. |
| `getProviderName` | 1211 | `protected abstract getProviderName(): AIProviderName` | Return the enum value. |
| `getDefaultModel` | 1216 | `protected abstract getDefaultModel(): string` | Return the env-or-hardcoded default. |
| `getAISDKModel` | 1222 | `protected abstract getAISDKModel(): LanguageModel \| Promise<LanguageModel>` | Build the AI SDK model instance. |
| `formatProviderError` | 1376 | `protected abstract formatProviderError(error: unknown): Error` | Map upstream errors to user-friendly messages. Must return, never throw (CLAUDE.md rule #6); see the sketch below. |
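A minimal `formatProviderError` sketch honoring rule #6 (the status-matching heuristic and message strings are illustrative, not the codebase's actual ones):

```typescript
protected formatProviderError(error: unknown): Error {
  const message = error instanceof Error ? error.message : String(error);
  if (/401|unauthorized|invalid[_ ]?api[_ ]?key/i.test(message)) {
    // Return, never throw; the base class routes this through handleProviderError().
    return new Error(
      "DeepSeek rejected the API key. Check DEEPSEEK_API_KEY or per-call credentials.",
    );
  }
  return new Error(`DeepSeek request failed: ${message}`);
}
```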

### Constructor signature

```typescript
constructor(
  modelName?: string,
  sdk?: unknown, // NeuroLink instance (cast inside)
  _region?: string, // ignored for non-AWS providers
  credentials?: NeurolinkCredentials["<key>"],
) {
  super(
    modelName,
    "<provider-name>" as AIProviderName, // string literal matching the enum value
    sdk as NeuroLink | undefined,
  );
  // ...build SDK client using credentials || env...
}
```

### Inherited helpers (use, don't override)

| Helper | Line | Use case |
| --- | --- | --- |
| `validateStreamOptions(options)` | 1492 | Throws on invalid options |
| `buildMessagesForStream(options)` | 576 | Constructs the `ModelMessage[]` array (handles multimodal files) |
| `getAllTools()` | 1341 | Returns all merged tools (MCP + built-in + custom) |
| `getAISDKModelWithMiddleware(options)` | 1229 | Wraps `getAISDKModel()` with middleware (use this in `executeStream`) |
| `telemetryHandler.getTelemetryConfig(options, "stream")` | constructor | OTel telemetry config for `streamText` |
| `getToolCallRepairFn(options)` | 1181 | Schema-driven tool-call repair |
| `handleToolExecutionStorage(...)` | 1944 | Persists tool I/O to memory |
| `createTextStream(result)` | 1499 | Adapts the `streamText` result → Neurolink stream contract |
| `getTimeout(options)` | 1937 | Resolves per-call/instance/default timeout |
| `supportsTools()` | 157 | Default `true`; override to `false` for vision-only/embedding-only models |
| `handleProviderError(error)` | 1384 | Wraps `formatProviderError` + common-error handling |
| `setSessionContext(sessionId, userId)` | 1365 | Public; called by the NeuroLink instance |

### Optional overrides

| Method | Line | Purpose |
| --- | --- | --- |
| `supportsTools()` | 157 | Default `true`; override to `false` if your provider can't call functions |
| `getDefaultEmbeddingModel()` | 1166 | Return the embed model name; `undefined` means embeddings are unsupported |
| `validateConfiguration()` | — | Public method; usually checks env vars and returns a boolean |
| `getConfiguration()` | — | Public method; returns `{ provider, model, defaultModel }` |

### What NOT to override

- `stream()` — base class implements the full lifecycle; only override `executeStream` (see the sketch below)
- `generate()` / `gen()` — base class delegates to `executeStream` with synthetic chunking when tools are present
- `embed()`, `embedMany()` — base class throws "not supported"; override only if your SDK exposes them
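Putting the contract and helpers together, a minimal `executeStream` could look like this (a sketch only: exactly where the repair function and telemetry config plug into `streamText`, and the precise `StreamOptions`/`StreamResult` shapes, are assumptions; consult an existing provider for the authoritative wiring):

```typescript
protected async executeStream(
  options: StreamOptions,
  _analysisSchema?: ValidationSchema,
): Promise<StreamResult> {
  this.validateStreamOptions(options); // throws on invalid options
  try {
    const result = await streamText({
      model: await this.getAISDKModelWithMiddleware(options), // middleware-wrapped model
      messages: await this.buildMessagesForStream(options), // handles multimodal files
      tools: this.supportsTools() ? await this.getAllTools() : undefined,
      abortSignal: AbortSignal.timeout(this.getTimeout(options)),
      experimental_repairToolCall: this.getToolCallRepairFn(options),
      experimental_telemetry: this.telemetryHandler.getTelemetryConfig(options, "stream"),
    });
    return this.createTextStream(result); // adapt to the Neurolink stream contract
  } catch (error) {
    // handleProviderError wraps formatProviderError; its exact return/throw
    // contract is defined in baseProvider.ts
    throw this.handleProviderError(error);
  }
}
```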

## Pattern 3: providerConfig helpers

`src/lib/utils/providerConfig.ts` exports:

- `validateApiKey(config: ProviderConfigOptions): string` — throws if the env var is missing
- `getProviderModel(envVar, defaultModel)` — `process.env[envVar] || defaultModel`
- `hasProviderCredentials(envVars: string[])` — `true` if any env var is set
- `createMistralConfig()`, `createOpenAIConfig()`, … — return `ProviderConfigOptions`:

```typescript
type ProviderConfigOptions = {
  providerName: string;
  envVarName: string;
  setupUrl: string;
  description: string;
  instructions: string[];
  fallbackEnvVars?: string[];
  // Set true when `envVarName` is a base URL with a working default
  // (LM Studio, llama.cpp). Marks the env value as not-required so
  // validateApiKey()/validateApiKeyEnhanced() return "" instead of
  // throwing when it's unset.
  optional?: boolean;
};
```
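A quick usage sketch for the non-factory helpers above (the env-var and model names are illustrative):

```typescript
// Env override first, hardcoded default second.
const model = getProviderModel("DEEPSEEK_MODEL", "deepseek-chat");

// Boolean probe, no throw: useful for skip-if-unconfigured logic.
if (!hasProviderCredentials(["DEEPSEEK_API_KEY"])) {
  // skip or report the provider as unconfigured
}
```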

You must add one helper per new provider here. Mirror `createMistralConfig()` at line 322:

```typescript
export function createDeepSeekConfig(): ProviderConfigOptions {
  return {
    providerName: "DeepSeek",
    envVarName: "DEEPSEEK_API_KEY",
    setupUrl: "https://platform.deepseek.com/api_keys",
    description: "API key",
    instructions: [
      "1. Visit: https://platform.deepseek.com/api_keys",
      "2. Create or sign in",
      "3. Generate a new API key",
    ],
  };
}
```

For local providers (LM Studio, llama.cpp), the helper still exists but the env var is the base URL, not the API key:

```typescript
export function createLmStudioConfig(): ProviderConfigOptions {
  return {
    providerName: "LM Studio",
    envVarName: "LM_STUDIO_BASE_URL",
    setupUrl: "https://lmstudio.ai/",
    description: "LM Studio server URL",
    instructions: [
      "1. Install LM Studio: https://lmstudio.ai/",
      "2. Load a model in the LM Studio app",
      "3. Start the local server (default: http://localhost:1234/v1)",
      "4. Set LM_STUDIO_BASE_URL if you use a non-default port",
    ],
    // Base URL is optional — defaults to http://localhost:1234/v1.
    optional: true,
  };
}
```

(See 01-shared-changes.md §3 for all four helper bodies.)

## Pattern 4: AI SDK wrapping

Every cloud OpenAI-compatible provider ultimately calls `streamText({ model, ... })`, where `model` is built with the AI SDK:

```typescript
import { createOpenAI } from "@ai-sdk/openai";

const client = createOpenAI({
  baseURL: this.config.baseURL,
  apiKey: this.config.apiKey,
  fetch: createProxyFetch(), // Neurolink-specific corp-proxy support
});
// .chat() targets /v1/chat/completions. Calling client(modelId) directly
// targets the Responses API, which OpenAI-compatible providers don't expose.
this.model = client.chat(this.modelName);
```

This works because `@ai-sdk/openai`'s `createOpenAI({ baseURL })` accepts ANY OpenAI-compatible endpoint (LM Studio, llama.cpp, NVIDIA NIM, DeepSeek, OpenRouter, vLLM, etc.).
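The same wrapper covers local servers. A minimal sketch for LM Studio (the placeholder API key is an assumption: local OpenAI-compatible servers typically don't validate it; the default URL comes from Pattern 3):

```typescript
const client = createOpenAI({
  baseURL: process.env.LM_STUDIO_BASE_URL || "http://localhost:1234/v1",
  apiKey: "lm-studio", // placeholder; not validated by local servers
  fetch: createProxyFetch(),
});
this.model = client.chat(this.modelName); // still /v1/chat/completions
```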

For provider-specific extra body params (NVIDIA NIM has many), use:

```typescript
const result = await streamText({
  model,
  messages,
  tools,
  // ... standard options ...
  providerOptions: {
    openai: {
      // arbitrary extra body fields go here, e.g.:
      reasoning_effort: "high", // for o1-style models
      // For NVIDIA NIM, see 03-nvidia-nim.md for the full extra-body strategy
    },
  },
});
```

## Pattern 5: Per-call vs instance vs env credentials precedence

The factory threads credentials through, highest precedence first:

```
NeuroLink.generate({ credentials: ... })   // per-call wins
NeuroLink constructor credentials          // instance default
process.env.<PROVIDER>_API_KEY             // fallback
```

In the provider constructor:

```typescript
const apiKey = credentials?.apiKey ?? validateApiKey(createDeepSeekConfig());
```

The `??` does the precedence; `validateApiKey` throws a friendly error when the env var is also missing.
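End to end, the three levels look like this (a sketch; the exact `NeuroLink` constructor and `generate()` option shapes are assumptions):

```typescript
// 3) env fallback: used only when nothing else supplies a key
process.env.DEEPSEEK_API_KEY = "sk-from-env";

// 2) instance default: overrides env for every call on this instance
const neurolink = new NeuroLink({
  credentials: { deepseek: { apiKey: "sk-instance" } },
});

// 1) per-call: wins over both, for this call only
await neurolink.generate({
  provider: "deepseek",
  credentials: { deepseek: { apiKey: "sk-per-call" } },
  // ...prompt/options...
});
```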

## Pattern 6: Type-engineering rules (CLAUDE.md 7-13, ESLint-enforced)

| Rule | Meaning for us |
| --- | --- |
| 7 — no `interface` | Use `type X = { ... }`. Use `&` for extension, never `extends`. |
| 8 — no "Types" suffix in `src/lib/types/` filenames | We add to `providers.ts`, not `deepseekTypes.ts`. |
| 9 — globally-unique type names | Prefix exported types: `DeepSeekModelInfo`, not `ModelInfo`. |
| 10 — types-barrel `export *` only | Don't selectively re-export from `src/lib/types/index.ts`. |
| 11 — no local `types/` dirs | Don't create `src/lib/providers/<provider>/types/`. |
| 12 — no type re-exports from non-type files | Provider class files must not `export type { X } from`. |
| 13 — barrel-only imports for internal types | Inside `providers/X.ts`: `import type { ... } from "../types/index.js"` — never from `"../types/providers.js"`. |

These are all enforced by ESLint rules under `eslint-rules/`. Run `pnpm run lint` after edits.
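A compliant sketch for rules 7, 8, 9, and 13 together (the type and field names are illustrative):

```typescript
// src/lib/types/providers.ts (rule 8: extend this file, no deepseekTypes.ts)
export type DeepSeekModelInfo = {
  // rule 9: globally-unique, provider-prefixed name
  id: string;
  contextWindow: number;
};

// rule 7: extend via intersection (&), never `extends`
export type DeepSeekChatModelInfo = DeepSeekModelInfo & {
  supportsTools: boolean;
};

// src/lib/providers/deepseek.ts (rule 13: barrel import only)
import type { DeepSeekChatModelInfo } from "../types/index.js";
```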

## Pattern 7: Multimodal vision capability map

`src/lib/adapters/providerImageAdapter.ts:70` defines `VISION_CAPABILITIES`. Add an entry for each new provider, vision-capable or not:

"deepseek": { supportsImages: false, supportedFormats: [], maxImagesPerRequest: 0 },
"nvidia-nim": { supportsImages: true, supportedFormats: ["png","jpeg","webp","gif"], maxImagesPerRequest: 8 },
"lm-studio": { supportsImages: true, supportedFormats: ["png","jpeg","webp"], maxImagesPerRequest: 4 },
"llamacpp": { supportsImages: true, supportedFormats: ["png","jpeg"], maxImagesPerRequest: 4 },

Vision availability for local providers depends on the loaded model (LLaVA, Llama 3.2 Vision, etc.) — we mark the provider capable; runtime errors surface if the loaded model isn't vision-capable.

## Pattern 8: CLI integration

`src/cli/factories/commandFactory.ts` requires three edits per new provider:

1. **Line 60** — `provider.choices` array (the `--provider` flag's allowed values); see the sketch after this list.
2. **Line ~1794** — secondary choices array (used by another command).
3. **Line ~3870** — bash-completion `compgen -W` string.
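Assuming a yargs-style option definition (the actual surrounding code may differ), edit #1 is just an append to the choices array:

```typescript
.option("provider", {
  type: "string",
  // the --provider flag's allowed values
  choices: [
    // ...existing providers...
    "deepseek",
    "nvidia-nim",
    "lm-studio",
    "llamacpp",
  ],
})
```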

For complex providers with their own subcommands (Ollama has `OllamaCommandFactory`, SageMaker has `SagemakerCommandFactory`), create `src/cli/factories/<name>CommandFactory.ts`. None of our four need subcommand factories — they're plain `--provider <name>` providers.

## Pattern 9: Test integration

`test/continuous-test-suite-providers.ts:73` defines `ALL_PROVIDERS = [...] as const`. Add the four new names; the all-provider loop in this file iterates over the array and skips providers whose env vars are absent.
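The edit itself is an append (a sketch; existing entries elided):

```typescript
const ALL_PROVIDERS = [
  // ...existing providers...
  "deepseek",
  "nvidia-nim",
  "lm-studio",
  "llamacpp",
] as const;
```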

`test/continuous-test-suite-credentials.ts` should get 4 new test blocks for per-call credential overrides.

The canonical coverage for the four new providers is the dedicated suite `test/continuous-test-suite-new-providers.ts`, run via the `pnpm run test:new-providers` script. It exercises the full feature surface per provider: generate, stream, tools, structured output, reasoning, vision where supported, abort, timeout, per-call creds, telemetry, and error formatting. `test:providers` (ALL_PROVIDERS) and `test:credentials` remain available for cross-provider checks, but `test:new-providers` is the canonical entrypoint.

## File-touch checklist (per provider)

| File | Cardinality |
| --- | --- |
| `src/lib/providers/<name>.ts` | NEW — 1 per provider |
| `src/lib/constants/enums.ts` | EDIT — add to `AIProviderName` + add `<Name>Models` enum |
| `src/lib/types/providers.ts` | EDIT — extend `NeurolinkCredentials` (sketch below) |
| `src/lib/factories/providerRegistry.ts` | EDIT — add `registerProvider` block |
| `src/lib/providers/index.ts` | EDIT — add barrel export (1 line) |
| `src/lib/utils/providerConfig.ts` | EDIT — add `create<Name>Config()` helper |
| `src/lib/constants/contextWindows.ts` | EDIT — add `<provider-name>: { ... }` section to `MODEL_CONTEXT_WINDOWS` |
| `src/lib/adapters/providerImageAdapter.ts` | EDIT — add to `VISION_CAPABILITIES` (line 70) |
| `src/cli/factories/commandFactory.ts` | EDIT — 3 spots (provider choices, secondary, bash completion) |
| `.env.example` | EDIT — append env var section |
| `test/continuous-test-suite-providers.ts` | EDIT — extend `ALL_PROVIDERS` |
| `test/continuous-test-suite-credentials.ts` | EDIT — add per-call credential test |

Total: 1 new file, 11 edited files per provider. Edits are concentrated; many providers can share a single PR.
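
For the `NeurolinkCredentials` edit flagged above, the shape implied by Patterns 1 and 5 is roughly the following (a sketch: the existing keys and exact field names are assumptions):

```typescript
export type NeurolinkCredentials = {
  // ...existing provider keys...
  deepseek?: { apiKey?: string };
  "nvidia-nim"?: { apiKey?: string; baseURL?: string };
  "lm-studio"?: { baseURL?: string };
  llamacpp?: { baseURL?: string };
};
```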