# Guides

Comprehensive guides for building production-ready AI applications with NeuroLink.
## 🎯 Essential Guides

Core guides for getting the most out of NeuroLink.

| Guide | Description |
|---|---|
| Provider Selection Guide | Interactive wizard to choose the best provider for your use case |
| GitHub Action Guide | Run AI-powered workflows in GitHub Actions with 13 providers |
| Troubleshooting | Common issues, debugging tips, and solutions for NeuroLink CLI and SDK |
## 🗄️ Redis & Persistence

Guides for setting up and managing Redis-backed conversation memory.

| Guide | Description |
|---|---|
| Redis Configuration | Production-ready Redis setup with cluster, security, and cloud providers |
| Redis Migration | Migration patterns for upgrading Redis and moving between environments |
See also: Redis Quick Start in Getting Started
## Migration Guides

Migrate from other AI frameworks to NeuroLink.

| Guide | Description |
|---|---|
| From LangChain | Complete migration guide from LangChain with concept mapping and examples |
| From Vercel AI SDK | Migrate from Vercel AI SDK with Next.js-focused patterns and streaming examples |
| Migration Guide (Legacy) | General migration guide for older versions |
## 🏢 Enterprise Guides

Production-ready patterns for enterprise AI deployments.

| Guide | Description |
|---|---|
| Multi-Provider Failover | High availability with automatic failover between providers |
| Load Balancing | Distribute traffic across providers with 6 strategies |
| Cost Optimization | Reduce AI costs by 80-95% with smart routing |
| Compliance & Security | GDPR, SOC2, HIPAA compliance patterns |
| Multi-Region Deployment | Global deployment with geographic routing |
| Monitoring & Observability | Prometheus, Grafana, CloudWatch integration |
| Audit Trails | Comprehensive logging for compliance |
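The failover pattern above can be sketched in a few lines. This is a minimal illustration of the idea, not the NeuroLink SDK API: the `Provider` shape, `withFailover`, and the `generate` signature here are stand-ins invented for this example — see the Multi-Provider Failover guide for the real interface.

```typescript
// Minimal multi-provider failover sketch (illustrative types, not the SDK API).
type Generate = (prompt: string) => Promise<string>;

interface Provider {
  name: string;
  generate: Generate;
}

// Try each provider in priority order; on failure, fall through to the next.
async function withFailover(providers: Provider[], prompt: string): Promise<string> {
  let lastError: unknown = new Error("no providers configured");
  for (const provider of providers) {
    try {
      return await provider.generate(prompt);
    } catch (err) {
      lastError = err; // remember the failure and try the next provider
    }
  }
  throw lastError;
}

// Example: the primary provider is rate limited, so the backup answers.
const providers: Provider[] = [
  {
    name: "primary",
    generate: async () => {
      throw new Error("rate limited");
    },
  },
  { name: "backup", generate: async (p) => `backup says: ${p}` },
];
```

A production setup layers health checks, retry budgets, and backoff on top of this loop — the guides above cover those pieces.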
## 🔧 MCP Integration

Model Context Protocol server catalog and integration patterns.

| Guide | Description |
|---|---|
| Server Catalog | 58+ MCP servers for file systems, databases, APIs, and more |
See also: MCP Tools Showcase for detailed tool documentation
## Server Adapters

Expose NeuroLink agents as production-ready HTTP APIs.

| Guide | Description |
|---|---|
| Server Adapters Overview | Quick start guide for exposing AI agents as HTTP APIs |
| Hono Adapter | Recommended lightweight adapter for serverless and edge deployments |
| Express Adapter | Integration with existing Express applications |
| Fastify Adapter | High-performance adapter with built-in schema validation |
| Koa Adapter | Modern, minimalist adapter with clean middleware composition |
| Security Guide | Authentication, authorization, and security best practices |
| Deployment Guide | Production deployment patterns with Docker and Kubernetes |
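All of these adapters wrap the same basic pattern: accept an HTTP request, run the agent, return JSON. The sketch below shows that shape using only Node's built-in `http` module so it stays self-contained; `runAgent` is a hypothetical placeholder for the NeuroLink call, not the SDK API, and the route name is made up for illustration.

```typescript
import { createServer } from "node:http";

// Placeholder agent — a real deployment would invoke a provider here.
async function runAgent(prompt: string): Promise<string> {
  return `You said: ${prompt}`;
}

const server = createServer(async (req, res) => {
  if (req.method !== "POST" || req.url !== "/generate") {
    res.writeHead(404).end();
    return;
  }
  let body = "";
  for await (const chunk of req) body += chunk; // collect the request body
  const { prompt } = JSON.parse(body);
  const text = await runAgent(prompt);
  res.writeHead(200, { "content-type": "application/json" });
  res.end(JSON.stringify({ text }));
});

server.listen(0); // 0 = bind any free port
```

The framework adapters listed above add routing, middleware, validation, and streaming on top of this core loop, which is why the Hono adapter is the recommended starting point for serverless and edge targets.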
## 🎨 Framework Integration

Framework-specific integration guides.

| Framework | Description |
|---|---|
| Next.js | App Router, Server Components, Server Actions, Streaming |
| Express.js | RESTful APIs, middleware, authentication, rate limiting |
| SvelteKit | SSR, load functions, form actions, streaming |
## 💡 Examples

Real-world use cases and production code patterns.

| Guide | Description |
|---|---|
| Use Cases | 12+ production-ready use cases with complete code |
| Code Patterns | Best practices, design patterns, and anti-patterns |
## Next Steps

- New to NeuroLink? Start with Quick Start
- Need to choose a provider? Use the Provider Selection Guide
- Building a chat app? Try our Chat Application Tutorial
- Need knowledge base Q&A? Build a RAG System
- Want practical code examples? Check the Cookbook
- Migrating from another framework? See our Migration Guides