AI Provider Guides

Complete setup guides for all supported AI providers.


🆓 Free Tier Providers

Start with zero cost using these free-tier options:

Hugging Face

100,000+ open-source models

  • ✅ Free inference API
  • 🌍 Largest model collection
  • 🔓 Fully open source
  • 📊 Models by task: chat, classification, NER, summarization

Setup Guide →

Google AI Studio

Gemini models with generous free tier

  • ✅ 1,500 requests/day free
  • ⚡ Fast Gemini 2.0 Flash
  • 🎯 15 requests/minute
  • 💰 Pay-as-you-go option

Setup Guide →
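The free-tier limits above map directly onto a provider entry. A minimal sketch, assuming the field names used in this page's "Setup Strategies" examples (they are illustrative, not the authoritative NeuroLink schema — confirm against the setup guide):

```javascript
// Hedged sketch: a google-ai provider entry capped at the free tier's
// 1,500 requests/day. Field names mirror the "Setup Strategies" examples
// on this page and are assumptions.
const googleAI = {
  name: "google-ai",
  priority: 1,
  config: { apiKey: process.env.GOOGLE_AI_KEY },
  quotas: { daily: 1500 }, // matches the free tier's daily request limit
};
```

Setting `quotas.daily` to the free tier's ceiling lets a failover-enabled setup switch providers before Google starts rejecting requests.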


🤖 Direct AI Providers

Access leading AI models directly from their creators:

Anthropic

Claude models with API key or OAuth authentication

  • 🧠 Claude 4.5 Opus/Sonnet/Haiku, Claude 4.0 Opus/Sonnet
  • 🔐 API key or OAuth (Pro/Max subscription)
  • 💭 Extended thinking for deep reasoning
  • 📄 200K context window, multimodal support

Setup Guide →
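For API-key authentication, the Anthropic entry looks like any other provider on this page. A hedged sketch (field names assumed from the "Setup Strategies" examples below; OAuth setup for Pro/Max subscriptions is covered in the linked guide):

```javascript
// Hedged sketch: an Anthropic provider entry using API-key auth.
// The shape mirrors this page's "Setup Strategies" examples and is an
// assumption, not the exact NeuroLink schema.
const anthropic = {
  name: "anthropic",
  priority: 1,
  config: { apiKey: process.env.ANTHROPIC_API_KEY },
};
```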


🏢 Enterprise Providers

Production-grade providers for enterprise deployments:

Azure OpenAI

Enterprise AI with Microsoft Azure

  • 🔒 SOC2, HIPAA, ISO 27001 compliant
  • 🌍 Multi-region deployment (30+ regions)
  • 🛡️ Private endpoints with VNet
  • 💼 Enterprise SLAs

Setup Guide →

Google Vertex AI

Google Cloud ML platform

  • ☁️ GCP integration
  • 🔐 IAM, VPC, service accounts
  • 🌏 Global deployment
  • 🎯 Gemini, PaLM, Codey models

Setup Guide →

AWS Bedrock

Serverless AI on AWS

  • 📦 13 foundation models (Claude, Llama, Mistral)
  • 🔐 IAM, VPC integration
  • 🌍 Multi-region (us-east-1, eu-west-1, ap-southeast-1)
  • 💰 Pay-per-use pricing

Setup Guide →


🌍 Compliance-Focused

Providers with specific compliance certifications:

Mistral AI

European AI with GDPR compliance

  • 🇪🇺 EU data residency
  • ✅ GDPR compliant by default
  • 🔓 Open source models
  • 💰 Cost-effective

Setup Guide →


🔌 Aggregators & Proxies

Access multiple providers through unified interfaces:

OpenRouter

300+ models from 60+ providers

  • 🌐 Single API for all major providers (Anthropic, OpenAI, Google, Meta, etc.)
  • ⚡ Automatic failover and routing
  • 💰 Competitive pricing with cost optimization
  • 🎯 Zero lock-in: switch models instantly
  • 📊 Usage tracking dashboard
  • 🆓 Free models available

Setup Guide →
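The "zero lock-in" point works because OpenRouter addresses every model with a single "vendor/model" string, so changing vendors is a one-line edit. A sketch of that idea (the request shape below is illustrative, not the exact NeuroLink call signature):

```javascript
// Hedged sketch: swapping vendors through OpenRouter is just a model-string
// change. The request object shape is an illustrative assumption.
const makeRequest = (model) => ({
  provider: "openrouter",
  model, // "vendor/model" format, e.g. "openai/gpt-4o"
  input: { text: "Hello world" },
});

const viaOpenAI = makeRequest("openai/gpt-4o");
const viaMeta = makeRequest("meta-llama/llama-3.3-70b-instruct"); // one-line swap
```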

OpenAI Compatible

OpenRouter, vLLM, LocalAI, and more

  • 🌐 300+ models through OpenRouter
  • 💻 Local deployment with vLLM
  • 🔓 Self-hosted with LocalAI
  • 🔄 Drop-in OpenAI replacement

Setup Guide →

LiteLLM

100+ providers through proxy

  • 🔄 Unified API for 100+ providers
  • 📊 Load balancing and fallbacks
  • 💰 Cost tracking
  • 🎯 Model routing

Setup Guide →


Quick Comparison

| Provider          | Free Tier | Enterprise | GDPR   | Latency | Best For                        |
| ----------------- | --------- | ---------- | ------ | ------- | ------------------------------- |
| Anthropic         | Limited   | —          | —      | Low     | Reasoning, coding, Claude       |
| Hugging Face      | ✅        | —          | —      | Medium  | Open source, experimentation    |
| Google AI         | ✅        | —          | —      | Low     | Free tier, Gemini               |
| Mistral AI        | —         | —          | ✅     | Low     | EU compliance, cost             |
| OpenRouter        | Varies    | —          | —      | Low     | Multi-model, automatic failover |
| OpenAI Compatible | Varies    | Varies     | Varies | Varies  | Flexibility, local deployment   |
| LiteLLM           | Varies    | —          | —      | Low     | Multi-provider, unified API     |
| Azure OpenAI      | —         | ✅         | —      | Low     | Enterprise, Microsoft ecosystem |
| Vertex AI         | —         | ✅         | —      | Low     | Enterprise, GCP ecosystem       |
| AWS Bedrock       | —         | ✅         | —      | Low     | Enterprise, AWS ecosystem       |

Setup Strategies

Strategy 1: Free Tier with Failover

const ai = new NeuroLink({
  providers: [
    {
      name: "google-ai",
      priority: 1,
      config: { apiKey: process.env.GOOGLE_AI_KEY },
      quotas: { daily: 1500 },
    },
    {
      name: "openai",
      priority: 2,
      config: { apiKey: process.env.OPENAI_API_KEY },
    },
  ],
  failoverConfig: { enabled: true, fallbackOnQuota: true },
});

const result = await ai.generate({
  input: { text: "Hello world" },
});
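Priority-based failover simply walks the provider list in ascending priority order and takes the first provider that is still available. A minimal standalone sketch of that selection logic (illustrative only; NeuroLink handles this internally when failoverConfig.enabled is true):

```javascript
// Illustrative sketch of priority-based failover selection. Not NeuroLink's
// actual implementation; function and field names are assumptions.
function pickProvider(providers, unavailable = new Set()) {
  const ranked = [...providers].sort((a, b) => a.priority - b.priority);
  return ranked.find((p) => !unavailable.has(p.name)) ?? null;
}

const providers = [
  { name: "google-ai", priority: 1 },
  { name: "openai", priority: 2 },
];

pickProvider(providers); // → { name: "google-ai", priority: 1 }
pickProvider(providers, new Set(["google-ai"])); // falls back to "openai"
```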

Strategy 2: Multi-Region Enterprise

const ai = new NeuroLink({
  providers: [
    {
      name: "azure-us",
      region: "us-east",
      config: {
        /* Azure US */
      },
    },
    {
      name: "azure-eu",
      region: "eu-west",
      config: {
        /* Azure EU */
      },
    },
    {
      name: "bedrock-us",
      region: "us-east",
      config: {
        /* Bedrock US */
      },
    },
  ],
  loadBalancing: "latency-based",
});

Strategy 3: GDPR Compliance

const ai = new NeuroLink({
  providers: [
    {
      name: "mistral",
      priority: 1,
      config: { apiKey: process.env.MISTRAL_API_KEY },
    },
    {
      name: "azure-eu",
      priority: 2,
      config: {
        /* Azure EU region */
      },
    },
  ],
  compliance: {
    framework: "GDPR",
    dataResidency: "EU",
  },
});

Next Steps

  1. Choose a provider based on your requirements (free tier, compliance, region)
  2. Follow the setup guide to get your API key
  3. Configure NeuroLink with the provider
  4. Test the integration with a simple request
  5. Add failover for production reliability
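Before wiring a provider into NeuroLink (steps 2–3), it is worth sanity-checking that the API key actually made it into the environment. A tiny illustrative helper (the env var names vary by provider; `GOOGLE_AI_KEY` here follows the Strategy 1 example):

```javascript
// Illustrative env-var check for step 2: confirm a provider key is set
// before constructing NeuroLink. Env var names are per-provider.
function hasKey(name, env = process.env) {
  return typeof env[name] === "string" && env[name].length > 0;
}

hasKey("GOOGLE_AI_KEY", { GOOGLE_AI_KEY: "abc123" }); // → true
hasKey("GOOGLE_AI_KEY", {}); // → false
```

Failing fast on a missing key gives a clearer error than a 401 from the provider mid-request.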