NeuroLink Provider Configuration
NeuroLink supports 13 AI providers through a unified API. Configure providers via environment variables.
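All snippets in this guide call generate() on a shared neurolink instance. A minimal setup sketch (hedged: it assumes the package exports a NeuroLink class that picks up the environment variables described in each section below):
import { NeuroLink } from "@juspay/neurolink";
// Credentials are read from the environment, so no keys appear in code
const neurolink = new NeuroLink();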
Supported Providers
| Provider | Enum Name | Aliases | Default Model |
|---|---|---|---|
| OpenAI | openai | gpt, chatgpt | gpt-4o |
| Anthropic | anthropic | claude | claude-3-5-sonnet-20241022 |
| Google AI Studio | google-ai | gemini, google | gemini-2.5-flash |
| Google Vertex AI | vertex | google-vertex | gemini-2.5-flash |
| AWS Bedrock | bedrock | aws-bedrock | anthropic.claude-3-sonnet-20240229-v1:0 |
| Azure OpenAI | azure-openai | azure | gpt-4o |
| Mistral AI | mistral | - | mistral-large |
| Ollama | ollama | - | llama3 |
| LiteLLM | litellm | - | varies |
| AWS SageMaker | sagemaker | - | custom |
| Hugging Face | hugging-face | hf | varies |
| OpenRouter | openrouter | - | varies |
| Gateway | gateway | - | varies |
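The aliases in the table can be used wherever a provider name is accepted; when model is omitted, the provider's default model applies. A short sketch (the prompt is illustrative):
const result = await neurolink.generate({
  input: { text: "Hello" },
  provider: "claude", // alias for "anthropic", so its default model is used
});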
OpenAI
# Environment
OPENAI_API_KEY=sk-...
OPENAI_ORG_ID=org-... # Optional
OPENAI_BASE_URL=... # Optional, for proxies
const result = await neurolink.generate({
input: { text: "Hello" },
provider: "openai",
model: "gpt-4o", // or gpt-4o-mini, gpt-4-turbo, o1, o1-mini
});
Available Models:
- gpt-4o - Latest GPT-4 Omni
- gpt-4o-mini - Faster, cheaper
- gpt-4-turbo - GPT-4 Turbo
- o1 - Reasoning model
- o1-mini - Smaller reasoning model
Anthropic
# Environment
ANTHROPIC_API_KEY=sk-ant-...
const result = await neurolink.generate({
input: { text: "Hello" },
provider: "anthropic",
model: "claude-3-5-sonnet-20241022",
});
Available Models:
- claude-3-5-sonnet-20241022 - Latest Sonnet
- claude-3-7-sonnet-20250219 - Claude 3.7 Sonnet
- claude-3-opus-20240229 - Most capable
- claude-3-haiku-20240307 - Fastest
Extended Thinking:
const result = await neurolink.generate({
input: { text: "Complex reasoning task" },
provider: "anthropic",
thinkingLevel: "high", // minimal, low, medium, high
});
Google AI Studio
# Environment
GOOGLE_API_KEY=...
const result = await neurolink.generate({
input: { text: "Hello" },
provider: "google-ai",
model: "gemini-2.5-flash",
});
Available Models:
- gemini-2.5-flash - Fast and capable
- gemini-2.5-pro - Most capable
- gemini-2.0-flash - Previous generation
- gemini-3-flash-preview - Preview of Gemini 3
Google Vertex AI
# Environment
VERTEX_PROJECT_ID=your-project-id
VERTEX_LOCATION=us-central1 # Optional
GOOGLE_APPLICATION_CREDENTIALS=/path/to/key.json
const result = await neurolink.generate({
input: { text: "Hello" },
provider: "vertex",
model: "gemini-3-flash",
});
Available Models:
- gemini-3-flash - Latest Gemini 3
- gemini-3-pro - Most capable Gemini 3
- gemini-2.5-flash - Fast
- gemini-2.5-pro - Previous generation, capable
Extended Thinking (Gemini 3):
const result = await neurolink.generate({
input: { text: "Complex reasoning task" },
provider: "vertex",
model: "gemini-3-flash",
thinkingLevel: "high", // minimal, low, medium, high
});
AWS Bedrock
# Environment
AWS_ACCESS_KEY_ID=...
AWS_SECRET_ACCESS_KEY=...
AWS_REGION=us-east-1
# Or use AWS profiles
AWS_PROFILE=your-profile
const result = await neurolink.generate({
input: { text: "Hello" },
provider: "bedrock",
model: "anthropic.claude-3-sonnet-20240229-v1:0",
});
Available Models:
- anthropic.claude-3-sonnet-20240229-v1:0
- anthropic.claude-3-haiku-20240307-v1:0
- anthropic.claude-3-opus-20240229-v1:0
- amazon.titan-text-express-v1
- amazon.nova-pro-v1:0
- meta.llama3-70b-instruct-v1:0
Azure OpenAI
# Environment
AZURE_OPENAI_API_KEY=...
AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com
AZURE_OPENAI_API_VERSION=2024-02-15-preview # Optional
const result = await neurolink.generate({
input: { text: "Hello" },
provider: "azure-openai",
model: "gpt-4o", // Your deployment name
});
Mistral AI
# Environment
MISTRAL_API_KEY=...
const result = await neurolink.generate({
input: { text: "Hello" },
provider: "mistral",
model: "mistral-large-latest",
});
Available Models:
- mistral-large-latest - Most capable
- mistral-small-latest - Fast
- codestral-latest - Code specialized
- ministral-8b-latest - Small
Ollama (Local)
# No environment variables needed
# Ensure Ollama is running: ollama serve
const result = await neurolink.generate({
input: { text: "Hello" },
provider: "ollama",
model: "llama3",
});
Setup:
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh
# Pull a model
ollama pull llama3
# Start server
ollama serve
Available Models:
- llama3 - Meta Llama 3
- llama3:70b - Larger Llama 3
- mistral - Mistral 7B
- codellama - Code specialized
- phi3 - Microsoft Phi-3
LiteLLM
# Environment
LITELLM_API_KEY=...
LITELLM_API_BASE=https://your-litellm-proxy.com
const result = await neurolink.generate({
input: { text: "Hello" },
provider: "litellm",
model: "gpt-4", // LiteLLM model format
});
AWS SageMaker
# Environment
AWS_ACCESS_KEY_ID=...
AWS_SECRET_ACCESS_KEY=...
AWS_REGION=us-west-2
SAGEMAKER_ENDPOINT_NAME=your-endpoint
const result = await neurolink.generate({
input: { text: "Hello" },
provider: "sagemaker",
model: "your-endpoint-name",
});
Hugging Face
# Environment
HF_TOKEN=hf_...
const result = await neurolink.generate({
input: { text: "Hello" },
provider: "hugging-face",
model: "meta-llama/Meta-Llama-3-8B-Instruct",
});
OpenRouter
# Environment
OPENROUTER_API_KEY=sk-or-...
const result = await neurolink.generate({
input: { text: "Hello" },
provider: "openrouter",
model: "anthropic/claude-3-opus",
});
Provider Fallback
Configure automatic fallback to another provider:
import { createAIProviderWithFallback } from "@juspay/neurolink";
const { primary, fallback } = await createAIProviderWithFallback(
"openai", // Primary
"anthropic", // Fallback
"gpt-4o",
);
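What you do with the returned pair is up to the caller. One hedged pattern (assuming both returned objects expose a generate() method that accepts the same options as neurolink.generate) is to try the primary and fall back on error:
let result;
try {
  // Try the primary provider first
  result = await primary.generate({ input: { text: "Hello" } });
} catch (error) {
  // On any failure (rate limit, outage, bad credentials), retry with the fallback
  result = await fallback.generate({ input: { text: "Hello" } });
}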
Check Provider Status
// Check all configured providers
const status = await neurolink.getProviderStatus();
console.log(status);
// { provider: 'openai', status: 'working', configured: true, authenticated: true }
// Get list of available providers
const available = await neurolink.getAvailableProviders();
console.log(available);
// ['openai', 'anthropic', 'vertex']
// Get health summary
const health = await neurolink.getProviderHealthSummary();
console.log(health);
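These checks compose into a simple guard before generating; a sketch that uses only the calls shown above (the preferred provider here is illustrative):
const available = await neurolink.getAvailableProviders();
// Prefer Anthropic when it is configured, otherwise take whatever is available
const provider = available.includes("anthropic") ? "anthropic" : available[0];
const result = await neurolink.generate({
  input: { text: "Hello" },
  provider,
});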
Provider-Specific Options
Temperature and Sampling
const result = await neurolink.generate({
input: { text: "Hello" },
temperature: 0.7, // 0.0 - 2.0
topP: 0.9, // Nucleus sampling
topK: 40, // Top-K sampling
maxTokens: 1000,
presencePenalty: 0.1,
frequencyPenalty: 0.1,
});
System Prompts
const result = await neurolink.generate({
input: { text: "Write code for me" },
systemPrompt:
"You are an expert TypeScript developer. Always use proper types.",
});
Vision-Capable Models
Not all models support image inputs:
| Provider | Vision Models |
|---|---|
| OpenAI | gpt-4o, gpt-4-turbo |
| Anthropic | All Claude 3 models |
| Vertex | Gemini 2.5+, Gemini 3 |
| Google AI | Gemini 2.5+, Gemini 3 |
| Bedrock | Claude 3 models |
const result = await neurolink.generate({
input: {
text: "Describe this image",
images: ["./photo.jpg"],
},
provider: "openai",
model: "gpt-4o", // Must be vision-capable
});
Next Steps
- Multimodal inputs - Work with images and documents
- MCP tools - Add external tools
- RAG integration - Document-grounded generation