# AI Provider Guides

Complete setup guides for all supported AI providers.
## 🆓 Free Tier Providers

Start with zero cost using these free-tier options:
### Hugging Face

100,000+ open-source models
- ✅ Free inference API
- 🌍 Largest model collection
- 🔓 Fully open source
- 📊 Models by task: chat, classification, NER, summarization
### Google AI Studio

Gemini models with a generous free tier
- ✅ 1,500 requests/day free
- ⚡ Fast Gemini 2.0 Flash
- 🎯 15 requests/minute
- 💰 Pay-as-you-go option
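The per-minute and per-day limits above can also be tracked client-side so your application fails fast instead of burning requests that the server will reject anyway. A minimal sketch under that assumption (`QuotaGuard` is an illustrative helper, not part of the Google AI SDK or NeuroLink):

```typescript
// Client-side guard for the free-tier limits above
// (15 requests/minute, 1,500 requests/day). The limits are
// enforced server-side too; this just avoids wasted calls.
class QuotaGuard {
  private minuteWindow: number[] = []; // timestamps (ms) of recent requests
  private dayCount = 0;
  private dayStart = Date.now();

  constructor(
    private perMinute = 15,
    private perDay = 1500,
  ) {}

  tryAcquire(now = Date.now()): boolean {
    // Reset the daily counter every 24 hours.
    if (now - this.dayStart >= 24 * 60 * 60 * 1000) {
      this.dayStart = now;
      this.dayCount = 0;
    }
    // Drop timestamps older than one minute.
    this.minuteWindow = this.minuteWindow.filter((t) => now - t < 60_000);
    if (this.minuteWindow.length >= this.perMinute) return false;
    if (this.dayCount >= this.perDay) return false;
    this.minuteWindow.push(now);
    this.dayCount++;
    return true;
  }
}
```

Call `tryAcquire()` before each request; a `false` return means the request would exceed a quota and can be queued or routed to a fallback provider instead.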
## 🤖 Direct AI Providers

Access leading AI models directly from their creators:
### Anthropic

Claude models with API key or OAuth authentication
- 🧠 Claude 4.5 Opus/Sonnet/Haiku, Claude 4.0 Opus/Sonnet
- 🔐 API key or OAuth (Pro/Max subscription)
- 💭 Extended thinking for deep reasoning
- 📄 200K context window, multimodal support
## 🏢 Enterprise Providers

Production-grade providers for enterprise deployments:
### Azure OpenAI

Enterprise AI with Microsoft Azure
- 🔒 SOC2, HIPAA, ISO 27001 compliant
- 🌍 Multi-region deployment (30+ regions)
- 🛡️ Private endpoints with VNet
- 💼 Enterprise SLAs
### Google Vertex AI

Google Cloud ML platform
- ☁️ GCP integration
- 🔐 IAM, VPC, service accounts
- 🌏 Global deployment
- 🎯 Gemini, PaLM, Codey models
### AWS Bedrock

Serverless AI on AWS
- 📦 13 foundation models (Claude, Llama, Mistral)
- 🔐 IAM, VPC integration
- 🌍 Multi-region (us-east-1, eu-west-1, ap-southeast-1)
- 💰 Pay-per-use pricing
## 🌍 Compliance-Focused

Providers with specific compliance certifications:
### Mistral AI

European AI with GDPR compliance
- 🇪🇺 EU data residency
- ✅ GDPR compliant by default
- 🔓 Open source models
- 💰 Cost-effective
## 🔌 Aggregators & Proxies

Access multiple providers through unified interfaces:
### OpenRouter

300+ models from 60+ providers
- 🌐 Single API for all major providers (Anthropic, OpenAI, Google, Meta, etc.)
- ⚡ Automatic failover and routing
- 💰 Competitive pricing with cost optimization
- 🎯 Zero lock-in: switch models instantly
- 📊 Usage tracking dashboard
- 🆓 Free models available
### OpenAI Compatible

OpenRouter, vLLM, LocalAI, and more
- 🌐 100+ models through OpenRouter
- 💻 Local deployment with vLLM
- 🔓 Self-hosted with LocalAI
- 🔄 Drop-in OpenAI replacement
### LiteLLM

100+ providers through a single proxy
- 🔄 Unified API for 100+ providers
- 📊 Load balancing and fallbacks
- 💰 Cost tracking
- 🎯 Model routing
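Model routing of this kind typically maps a model-name prefix to the provider that serves it. A sketch of the idea; the prefix table below is a made-up illustration, not LiteLLM's actual routing configuration:

```typescript
// Illustrative prefix-based model routing, similar in spirit to
// what a multi-provider proxy does internally. The prefix table
// is an example, not any proxy's real configuration.
const ROUTES: Array<[prefix: string, provider: string]> = [
  ["gpt-", "openai"],
  ["claude-", "anthropic"],
  ["gemini-", "google-ai"],
  ["mistral-", "mistral"],
];

function routeModel(model: string): string {
  for (const [prefix, provider] of ROUTES) {
    if (model.startsWith(prefix)) return provider;
  }
  throw new Error(`no route for model: ${model}`);
}
```

The first matching prefix wins, so more specific prefixes should be listed before more general ones.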
## Quick Comparison
| Provider | Free Tier | Enterprise | GDPR | Latency | Best For |
|---|---|---|---|---|---|
| Anthropic | Limited | ✅ | ✅ | Low | Reasoning, coding, Claude |
| Hugging Face | ✅ | ❌ | ✅ | Medium | Open source, experimentation |
| Google AI | ✅ | ✅ | ✅ | Low | Free tier, Gemini |
| Mistral AI | ❌ | ✅ | ✅ | Low | EU compliance, cost |
| OpenRouter | ✅ | ✅ | Varies | Low | Multi-model, automatic failover |
| OpenAI Compatible | Varies | ✅ | Varies | Varies | Flexibility, local deployment |
| LiteLLM | ❌ | ✅ | Varies | Low | Multi-provider, unified API |
| Azure OpenAI | ❌ | ✅ | ✅ | Low | Enterprise, Microsoft ecosystem |
| Vertex AI | ❌ | ✅ | ✅ | Low | Enterprise, GCP ecosystem |
| AWS Bedrock | ❌ | ✅ | ✅ | Low | Enterprise, AWS ecosystem |
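One way to use the comparison table is as data for a shortlisting helper. The sketch below models a subset of the rows as objects ("Varies" becomes `null`); the shape and names are illustrative, not a NeuroLink API:

```typescript
// A subset of the comparison table above as data. "Varies" entries
// are modeled as null (unknown without more context).
interface ProviderInfo {
  name: string;
  freeTier: boolean;
  enterprise: boolean;
  gdpr: boolean | null;
}

const PROVIDERS: ProviderInfo[] = [
  { name: "Hugging Face", freeTier: true, enterprise: false, gdpr: true },
  { name: "Google AI", freeTier: true, enterprise: true, gdpr: true },
  { name: "Mistral AI", freeTier: false, enterprise: true, gdpr: true },
  { name: "OpenRouter", freeTier: true, enterprise: true, gdpr: null },
  { name: "Azure OpenAI", freeTier: false, enterprise: true, gdpr: true },
];

// Return providers matching every stated requirement; a null GDPR
// status does not satisfy a hard GDPR requirement.
function shortlist(req: { freeTier?: boolean; gdpr?: boolean }): string[] {
  return PROVIDERS.filter(
    (p) => (!req.freeTier || p.freeTier) && (!req.gdpr || p.gdpr === true),
  ).map((p) => p.name);
}
```

For example, requiring both a free tier and GDPR compliance shortlists Hugging Face and Google AI from this subset.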
## Setup Strategies

### Strategy 1: Free Tier First (Recommended for Development)
**SDK Usage**

```typescript
import { NeuroLink } from "@juspay/neurolink";

const ai = new NeuroLink({
  providers: [
    {
      name: "google-ai",
      priority: 1,
      config: { apiKey: process.env.GOOGLE_AI_KEY },
      quotas: { daily: 1500 },
    },
    {
      name: "openai",
      priority: 2,
      config: { apiKey: process.env.OPENAI_API_KEY },
    },
  ],
  failoverConfig: { enabled: true, fallbackOnQuota: true },
});

const result = await ai.generate({
  input: { text: "Hello world" },
});
```

**CLI Usage**

```bash
# Set up environment variables
export GOOGLE_AI_KEY="your-key"
export OPENAI_API_KEY="your-key"

# Use with automatic failover
npx @juspay/neurolink generate "Hello world" \
  --provider google-ai
```
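The `fallbackOnQuota` behaviour amounts to trying providers in priority order and moving to the next one only when the current provider reports quota exhaustion. A standalone sketch under that assumption (`QuotaError` and the provider call signature are hypothetical stand-ins for SDK internals):

```typescript
// Sketch of quota-based failover: try providers in priority order,
// fall back only on quota errors, and rethrow anything else.
class QuotaError extends Error {}

async function generateWithFailover(
  providers: Array<{ name: string; call: (text: string) => Promise<string> }>,
  text: string,
): Promise<string> {
  let lastError: unknown;
  for (const p of providers) {
    try {
      return await p.call(text);
    } catch (err) {
      // Non-quota errors (auth failures, bad requests) are not
      // retried elsewhere; they indicate a real problem.
      if (!(err instanceof QuotaError)) throw err;
      lastError = err;
    }
  }
  throw lastError;
}
```

Distinguishing quota errors from other failures matters: retrying an authentication error on a second provider just wastes a request.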
### Strategy 2: Multi-Region Enterprise
```typescript
import { NeuroLink } from "@juspay/neurolink";

const ai = new NeuroLink({
  providers: [
    {
      name: "azure-us",
      region: "us-east",
      config: {
        /* Azure US */
      },
    },
    {
      name: "azure-eu",
      region: "eu-west",
      config: {
        /* Azure EU */
      },
    },
    {
      name: "bedrock-us",
      region: "us-east",
      config: {
        /* Bedrock US */
      },
    },
  ],
  loadBalancing: "latency-based",
});
```
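Latency-based balancing can be approximated with an exponentially weighted moving average of observed latency per provider, routing each request to the current fastest one. A sketch of that idea, not NeuroLink's actual implementation:

```typescript
// Latency-based provider selection via an exponentially weighted
// moving average (EWMA) of observed request latencies.
class LatencyBalancer {
  private avg = new Map<string, number>();

  // Record an observed latency; alpha controls how quickly the
  // average adapts to new measurements.
  report(name: string, latencyMs: number, alpha = 0.3): void {
    const prev = this.avg.get(name);
    this.avg.set(
      name,
      prev === undefined ? latencyMs : prev + alpha * (latencyMs - prev),
    );
  }

  // Pick the provider with the lowest average latency. Providers
  // with no measurements yet default to 0 so they get tried first.
  pick(names: string[]): string {
    let best = names[0];
    for (const n of names) {
      if ((this.avg.get(n) ?? 0) < (this.avg.get(best) ?? 0)) best = n;
    }
    return best;
  }
}
```

Calling `report` after every request keeps the routing decision current as regional latencies drift during the day.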
### Strategy 3: GDPR Compliance
```typescript
import { NeuroLink } from "@juspay/neurolink";

const ai = new NeuroLink({
  providers: [
    {
      name: "mistral",
      priority: 1,
      config: { apiKey: process.env.MISTRAL_API_KEY },
    },
    {
      name: "azure-eu",
      priority: 2,
      config: {
        /* Azure EU region */
      },
    },
  ],
  compliance: {
    framework: "GDPR",
    dataResidency: "EU",
  },
});
```
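At the application layer, an EU data residency constraint reduces to filtering the provider pool by region tag before any request is made, then falling back in priority order within the filtered set. A sketch with illustrative region names:

```typescript
// Filter a provider pool down to entries whose region tag matches
// the required data residency, ordered by priority. Region tags
// here are illustrative examples.
interface ProviderEntry {
  name: string;
  region: string;
  priority: number;
}

function eligibleForResidency(
  providers: ProviderEntry[],
  residency: string,
): ProviderEntry[] {
  return providers
    .filter((p) => p.region.startsWith(residency.toLowerCase()))
    .sort((a, b) => a.priority - b.priority);
}
```

Filtering before dispatch (rather than after a failure) guarantees no request ever leaves the required region, even transiently.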
## Next Steps

1. Choose a provider based on your requirements (free tier, compliance, region)
2. Follow its setup guide to get an API key
3. Configure NeuroLink with the provider
4. Test the integration with a simple request
5. Add failover for production reliability
## Related Documentation
- Multi-Provider Failover - High availability patterns
- Cost Optimization - Reduce costs by 80-95%
- Compliance & Security - GDPR, SOC2, HIPAA
- Load Balancing - Distribution strategies