# Reference
Complete reference documentation for NeuroLink configuration, troubleshooting, and technical details.
## 🎯 Reference Hub
This section provides comprehensive reference materials for advanced usage, configuration, and problem-solving.
- Troubleshooting — Common issues, error messages, and solutions for NeuroLink CLI and SDK usage.
- Configuration — Complete configuration reference including environment variables, provider settings, and optimization.
- Provider Capabilities Audit — Comprehensive audit of all 13 provider implementations with capability matrices and configuration examples.
- Provider Comparison — Detailed comparison of all 13 supported AI providers with features, costs, and recommendations.
- FAQ — Frequently asked questions about NeuroLink features, limitations, and best practices.
- Error Codes — Complete error code reference with categorized codes, severity levels, and resolution guidance.
- Analytics — Comprehensive guide to NeuroLink analytics, metrics, token tracking, cost monitoring, and observability integration.
- Server Configuration — Configuration reference for server adapters including Hono, Express, Fastify, and Koa framework integration.
- MCP Enhancements API — API reference for MCP enhancements including ToolRouter, ToolCache, RequestBatcher, tool annotations, and elicitation protocol.
## 🔧 Quick Reference

### Environment Variables

```bash
# Core Provider API Keys
OPENAI_API_KEY="sk-your-openai-key"
GOOGLE_AI_API_KEY="AIza-your-google-ai-key"
ANTHROPIC_API_KEY="sk-ant-your-key"

# AWS Bedrock (requires AWS credentials)
AWS_ACCESS_KEY_ID="your-access-key"
AWS_SECRET_ACCESS_KEY="your-secret-key"
AWS_REGION="us-east-1"

# Azure OpenAI
AZURE_OPENAI_API_KEY="your-azure-key"
AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com"

# Google Vertex AI
GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json"

# Hugging Face
HUGGINGFACE_API_KEY="hf_your-key"

# Mistral AI
MISTRAL_API_KEY="your-mistral-key"
```
### CLI Quick Commands

```bash
# Status and diagnostics
neurolink status                  # Check all providers
neurolink status --verbose        # Detailed diagnostics
neurolink provider status         # Provider-specific status

# Text generation
neurolink generate "prompt"       # Basic generation
neurolink gen "prompt" -p openai  # Specific provider
neurolink stream "prompt"         # Real-time streaming

# Configuration
neurolink config show             # Show current config
neurolink config validate         # Validate setup
neurolink config init             # Interactive setup

# MCP tools
neurolink mcp discover            # Find available servers
neurolink mcp list                # List installed servers
neurolink mcp install <server>    # Install MCP server
```
### SDK Quick Reference

```typescript
import { NeuroLink, createBestAIProvider } from "@juspay/neurolink";

// Basic usage
const neurolink = new NeuroLink();
const result = await neurolink.generate({
  input: { text: "Your prompt" },
  provider: "auto", // or a specific provider
});

// Auto-select best provider
const provider = createBestAIProvider();
const bestResult = await provider.generate({
  input: { text: "Your prompt" },
});

// With advanced options
const advancedResult = await neurolink.generate({
  input: { text: "Your prompt" },
  provider: "google-ai",
  model: "gemini-2.5-pro",
  temperature: 0.7,
  maxTokens: 1000,
  enableAnalytics: true,
  enableEvaluation: true,
  timeout: "30s",
});
```
## 📊 Provider Comparison Matrix
Quick Overview (see Provider Capabilities Audit for complete details):
| Feature | OpenAI | Google AI | Anthropic | Bedrock | Azure | Vertex | HuggingFace | Ollama | Mistral | LiteLLM | SageMaker | OpenRouter | OpenAI Compat |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Free Tier | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | Varies | ❌ | Varies | Varies |
| Tool Support | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ⚠️ | ⚠️ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Streaming | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Vision | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ⚠️ | ❌ | ✅ | Varies | ✅ | Varies |
| Local | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | Varies |
| Enterprise | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ⚠️ | ✅ | ✅ | ✅ | ✅ | ✅ | Varies |
For detailed capability matrices, authentication requirements, and configuration examples, see:
- Provider Capabilities Audit - Technical implementation details
- Provider Comparison - Feature comparison and selection guide
## 🔍 Error Code Reference

### Common Error Codes

| Code | Description | Solution |
| --- | --- | --- |
| `AUTH_ERROR` | Invalid API key or credentials | Check environment variables |
| `RATE_LIMIT` | API rate limit exceeded | Implement delays or upgrade plan |
| `TIMEOUT` | Request timeout | Increase timeout or check network |
| `MODEL_NOT_FOUND` | Invalid model name | Check available models |
| `TOOL_ERROR` | MCP tool execution failed | Check tool configuration |
| `PROVIDER_UNAVAILABLE` | Provider service down | Try a different provider |
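Codes like `RATE_LIMIT` and `TIMEOUT` are usually transient and worth retrying with backoff. The sketch below assumes errors expose a `code` field matching the table above; the exact error shape thrown by your NeuroLink version may differ, so treat this as a pattern rather than the library's API:

```typescript
// Hypothetical error shape: a standard Error carrying an optional `code` field.
type CodedError = Error & { code?: string };

// Retry transient failures with exponential backoff; rethrow everything else.
async function withRetry<T>(
  fn: () => Promise<T>,
  retries = 3,
  baseDelayMs = 500,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      const code = (err as CodedError).code;
      const retryable = code === "RATE_LIMIT" || code === "TIMEOUT";
      if (!retryable || attempt >= retries) throw err;
      // Backoff: 500ms, 1s, 2s, ... (with the default baseDelayMs)
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
}
```

Wrap any `generate()` call in `withRetry(() => neurolink.generate({ ... }))` to make it resilient to rate limiting.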
### Debugging Tips

```bash
# Enable debug mode
neurolink generate "test" --debug

# Verbose logging
neurolink status --verbose

# Check configuration
neurolink config validate
```

```typescript
// SDK debugging
const neurolink = new NeuroLink({
  debug: true,
  logLevel: "verbose",
});
```
## 📈 Performance Optimization

### Response Time Optimization
- Provider selection: Use fastest providers for your region
- Model selection: Choose appropriate model size for task
- Concurrency: Limit parallel requests to avoid rate limits
- Caching: Implement response caching for repeated queries
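As a sketch of the caching bullet above, a small TTL cache can sit in front of any `generate()` call. The class below is plain TypeScript and assumes nothing about NeuroLink itself; key entries by prompt plus provider so different providers don't share answers:

```typescript
// Minimal in-memory response cache with time-based expiry.
interface CacheEntry<T> {
  value: T;
  expiresAt: number;
}

class ResponseCache<T> {
  private store = new Map<string, CacheEntry<T>>();

  constructor(private ttlMs: number) {}

  // Return the cached value if fresh; otherwise compute, store, and return it.
  async getOrCompute(key: string, compute: () => Promise<T>): Promise<T> {
    const hit = this.store.get(key);
    if (hit && hit.expiresAt > Date.now()) return hit.value;
    const value = await compute();
    this.store.set(key, { value, expiresAt: Date.now() + this.ttlMs });
    return value;
  }
}
```

For production use, a shared store (e.g. Redis) with the same interface avoids re-paying for identical prompts across processes.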
### Cost Optimization
- Model selection: Use cost-effective models when possible
- Token management: Optimize prompt length and max tokens
- Provider comparison: Compare costs across providers
- Monitoring: Track usage with analytics
### Memory Management
- Streaming: Use streaming for large responses
- Batch processing: Process multiple requests efficiently
- Cleanup: Proper resource cleanup in long-running applications
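The batch-processing and concurrency-limiting points above can be combined in one small helper that keeps a bounded number of requests in flight while preserving result order. This is generic TypeScript, independent of the SDK:

```typescript
// Run `fn` over `items` with at most `limit` tasks in flight at once.
// Results come back in the same order as the inputs.
async function mapWithConcurrency<T, R>(
  items: T[],
  limit: number,
  fn: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;
  // Each worker repeatedly claims the next index; the claim (`next++`) is
  // synchronous, so workers never process the same item twice.
  async function worker(): Promise<void> {
    while (next < items.length) {
      const i = next++;
      results[i] = await fn(items[i]);
    }
  }
  const workers = Array.from(
    { length: Math.min(limit, items.length) },
    worker,
  );
  await Promise.all(workers);
  return results;
}
```

For example, `mapWithConcurrency(prompts, 3, (p) => neurolink.generate({ input: { text: p } }))` processes a prompt list without exceeding three concurrent requests.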
## 🔐 Security Best Practices

### API Key Management

- Environment variables: Store keys in `.env` files
- Never commit: Keep keys out of version control
- Rotation: Regularly rotate API keys
- Scope limitation: Use least-privilege access
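A fail-fast startup check makes the environment-variable advice concrete. The provider-to-variable mapping below follows the environment reference earlier on this page; the function names are illustrative, not part of the SDK:

```typescript
// Illustrative startup check (not a NeuroLink API): detect which providers
// have keys configured, and refuse to start if none do.
type Env = Record<string, string | undefined>;

const PROVIDER_ENV_VARS: Record<string, string> = {
  openai: "OPENAI_API_KEY",
  "google-ai": "GOOGLE_AI_API_KEY",
  anthropic: "ANTHROPIC_API_KEY",
  mistral: "MISTRAL_API_KEY",
};

function configuredProviders(env: Env = process.env): string[] {
  return Object.entries(PROVIDER_ENV_VARS)
    .filter(([, envVar]) => Boolean(env[envVar]))
    .map(([provider]) => provider);
}

function assertProviderConfigured(env: Env = process.env): void {
  if (configuredProviders(env).length === 0) {
    // Fail at startup, not on the first user request.
    throw new Error("No AI provider API key found in the environment");
  }
}
```

Call `assertProviderConfigured()` once during application boot; extend the mapping with whichever providers you actually use.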
### Production Deployment
- Secret management: Use secure secret management systems
- Network security: Implement proper network controls
- Monitoring: Log and monitor API usage
- Error handling: Don't expose sensitive errors
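One way to apply the "don't expose sensitive errors" rule is to map internal errors to fixed client-safe messages while logging the details server-side. The codes below mirror the error reference on this page; the helper itself and the `INTERNAL_ERROR` fallback are illustrative:

```typescript
// Illustrative sketch: translate internal errors into generic client responses.
// Provider error bodies can contain prompts or key fragments — never forward them.
function toClientError(err: unknown): { message: string; code: string } {
  const code = (err as { code?: string })?.code ?? "INTERNAL_ERROR";
  const safeMessages: Record<string, string> = {
    AUTH_ERROR: "Upstream authentication failed",
    RATE_LIMIT: "Service busy, please retry",
    PROVIDER_UNAVAILABLE: "Service temporarily unavailable",
  };
  return { message: safeMessages[code] ?? "Internal error", code };
}
```

The raw error still belongs in your server logs; only the sanitized object should reach API consumers.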
## 🆘 Getting Help

### Support Channels
- GitHub Issues - Bug reports and feature requests
- GitHub Discussions - Community questions
- Documentation - Comprehensive guides and references
- Examples - Practical implementation patterns
### Before Asking for Help
- Check the Troubleshooting Guide
- Review the FAQ
- Search existing GitHub Issues
- Try the `--debug` flag for more information
### Reporting Issues
When reporting issues, include:
- NeuroLink version: `npm list @juspay/neurolink`
- Node.js version: `node --version`
- Operating system: OS and version
- Error message: Complete error output
- Reproduction steps: Minimal example to reproduce
- Configuration: Relevant environment variables (without keys)
## 🔗 External Resources

### AI Provider Documentation
- OpenAI API - OpenAI official documentation
- Google AI Studio - Google AI platform docs
- Anthropic Claude - Anthropic API reference
- AWS Bedrock - Amazon Bedrock guide
### Related Projects
- Vercel AI SDK - Underlying provider implementations
- Model Context Protocol - Tool integration standard
- TypeScript - Type safety and development