
🧠 NeuroLink

The Enterprise AI SDK for Production Applications

13 Providers | 58+ MCP Tools | HITL Security | Redis Persistence


Enterprise AI development platform with unified provider access, production-ready tooling, and an opinionated factory architecture. NeuroLink ships as both a TypeScript SDK and a professional CLI so teams can build, operate, and iterate on AI features quickly.

NeuroLink is the universal AI integration platform that unifies 13 major AI providers and 100+ models under one consistent API.

Extracted from production systems at Juspay and battle-tested at enterprise scale, NeuroLink provides a production-ready solution for integrating AI into any application. Whether you're building with OpenAI, Anthropic, Google, AWS Bedrock, Azure, or any of our 13 supported providers, NeuroLink gives you a single, consistent interface that works everywhere.

Why NeuroLink? Switch providers with a single parameter change, leverage 64+ built-in tools and MCP servers, deploy with confidence using enterprise features like Redis memory and multi-provider failover, and optimize costs automatically with intelligent routing. Use it via our professional CLI or TypeScript SDK—whichever fits your workflow.
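
For example, switching providers really is a one-parameter change. A minimal sketch reusing the generate() call shape shown later on this page (the provider identifiers follow the provider table below):

import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink();

// Identical request, two different providers - only the provider value changes
const fromOpenAI = await neurolink.generate({
  input: { text: "Summarize our Q1 roadmap" },
  provider: "openai",
});
const fromAnthropic = await neurolink.generate({
  input: { text: "Summarize our Q1 roadmap" },
  provider: "anthropic",
});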

Where we're headed: We're building for the future of AI—edge-first execution and continuous streaming architectures that make AI practically free and universally available. Read our vision →

Get Started in <5 Minutes →


What's New (Q1 2026)

| Feature | Version | Description | Guide |
| --- | --- | --- | --- |
| MCP Enhancements | v9.16.0 | Advanced MCP features: intelligent tool routing, result caching, request batching, tool annotations, elicitation protocol, custom server creation, multi-server management | MCP Enhancements Guide |
| Context Compaction | v9.2.0 | 4-stage compaction pipeline with auto-detection, budget gate at 80% usage, per-provider token estimation | Context Compaction Guide |
| File Processor System | v9.1.0 | 17+ file type processors with ProcessorRegistry, security sanitization, SVG text injection | File Processors Guide |
| Workflow Engine | v8.42.0 | Multi-model orchestration with consensus, multi-judge, fallback, and adaptive workflows. Ensemble execution with intelligent scoring and evaluation. | Workflow HLD \| Workflow LLD |
| Docusaurus Documentation | v8.41.0 | Migrated from MkDocs to Docusaurus v3 with enhanced search, versioning, and modern UI. Automated doc syncing and LLM-friendly documentation. | Documentation Site |
| Image Generation with Gemini | v8.31.0 | Native image generation using Gemini 2.0 Flash Experimental (imagen-3.0-generate-002). High-quality image synthesis directly from Google AI. | Image Generation Guide |
| HTTP/Streamable HTTP Transport | v8.29.0 | Connect to remote MCP servers via HTTP with authentication headers, automatic retry with exponential backoff, and configurable rate limiting. | HTTP Transport Guide |
  • External TracerProvider Support – Integrate NeuroLink with applications that already have OpenTelemetry instrumentation. Supports auto-detection and manual configuration. → Observability Guide
  • Server Adapters – Deploy NeuroLink as an HTTP API server with your framework of choice (Hono, Express, Fastify, Koa). Full CLI support with serve and server commands for foreground/background modes, route management, and OpenAPI generation. → Server Adapters Guide
  • Title Generation Events – Emit real-time events when conversation titles are auto-generated. Listen to conversation:titleGenerated for session tracking (see the first sketch after this list). → Conversation Memory Guide
  • Custom Title Prompts – Customize conversation title generation with the NEUROLINK_TITLE_PROMPT environment variable. Use the ${userMessage} placeholder for dynamic prompts. → Conversation Memory Guide
  • Video Generation – Transform images into 8-second videos with synchronized audio using Google Veo 3.1 via Vertex AI. Supports 720p/1080p resolutions and portrait/landscape aspect ratios. → Video Generation Guide
  • Image Generation – Generate images from text prompts using Gemini models via Vertex AI or Google AI Studio. Supports streaming mode with automatic file saving. → Image Generation Guide
  • HTTP/Streamable HTTP Transport for MCP – Connect to remote MCP servers via HTTP with authentication headers, retry logic, and rate limiting. → HTTP Transport Guide
  • Claude Subscription (OAuth) Support – Use your Claude Pro/Max/Team subscription with NeuroLink via OAuth authentication; no API key required. → Subscription Guide
  • Gemini 3 Preview Support – Full support for gemini-3-flash-preview and gemini-3-pro-preview with extended thinking capabilities
  • Structured Output with Zod Schemas – Type-safe JSON generation with automatic validation using schema + output.format: "json" in generate() (see the second sketch after this list). → Structured Output Guide
  • CSV File Support – Attach CSV files to prompts for AI-powered data analysis with auto-detection. → CSV Guide
  • PDF File Support – Process PDF documents with native visual analysis for Vertex AI, Anthropic, Bedrock, and AI Studio. → PDF Guide
  • 50+ File Types – Process Excel, Word, RTF, JSON, YAML, XML, HTML, SVG, Markdown, and 50+ code languages with intelligent content extraction. → File Processors Guide
  • LiteLLM Integration – Access 100+ AI models from all major providers through a unified interface. → Setup Guide
  • SageMaker Integration – Deploy and use custom trained models on AWS infrastructure. → Setup Guide
  • OpenRouter Integration – Access 300+ models from OpenAI, Anthropic, Google, Meta, and more through a single unified API. → Setup Guide
  • Human-in-the-loop workflows – Pause generation for user approval/input before tool execution. → HITL Guide
  • Guardrails middleware – Block PII, profanity, and unsafe content with built-in filtering. → Guardrails Guide
  • Context summarization – Automatic conversation compression for long-running sessions. → Summarization Guide
  • Redis conversation export – Export full session history as JSON for analytics and debugging. → History Guide
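
A minimal sketch of listening for title events. Only the conversation:titleGenerated event name comes from the list above; the on() subscription method and the payload fields shown here are assumptions for illustration:

import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink({
  conversationMemory: { enabled: true },
});

// Assumed EventEmitter-style subscription; event name is documented above
neurolink.on("conversation:titleGenerated", (event) => {
  // Hypothetical payload shape for illustration
  console.log("New title:", event.title, "for session:", event.sessionId);
});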
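
And a minimal sketch of structured output, assuming schema and output.format are accepted as top-level generate() options exactly as the bullet describes (the Zod schema itself is illustrative):

import { z } from "zod";
import { NeuroLink } from "@juspay/neurolink";

// Illustrative schema; field names are hypothetical
const ReleasePlan = z.object({
  title: z.string(),
  steps: z.array(z.string()),
});

const neurolink = new NeuroLink();
const result = await neurolink.generate({
  input: { text: "Draft a three-step launch plan" },
  schema: ReleasePlan,
  output: { format: "json" },
});
// The result should carry JSON validated against ReleasePlan
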
// Multi-Model Workflow Engine (v8.42.0)
import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink();

// Run a consensus workflow with multiple models
const result = await neurolink.runConsensusWorkflow({
  prompt: "Explain quantum computing",
  models: [
    { provider: "anthropic", modelId: "claude-3-5-sonnet-20241022" },
    { provider: "openai", modelId: "gpt-4" },
    { provider: "google-ai", modelId: "gemini-2.0-flash-exp" },
  ],
  judgeModel: { provider: "anthropic", modelId: "claude-3-5-sonnet-20241022" },
  options: { temperature: 0.7 },
});

console.log(result.response); // Best response selected by judge
console.log(result.score); // Quality score (0-100)
console.log(result.metrics); // Detailed performance metrics

// Image Generation with Gemini (v8.31.0)
const image = await neurolink.generateImage({
  prompt: "A futuristic cityscape",
  provider: "google-ai",
  model: "imagen-3.0-generate-002",
});

// HTTP Transport for Remote MCP (v8.29.0)
await neurolink.addExternalMCPServer("remote-tools", {
  transport: "http",
  url: "https://mcp.example.com/v1",
  headers: { Authorization: "Bearer token" },
  retries: 3,
  timeout: 15000,
});

Previous Updates (Q4 2025)
  • Image Generation – Generate images from text prompts using Gemini models via Vertex AI or Google AI Studio. → Guide
  • Gemini 3 Preview Support – Full support for gemini-3-flash-preview and gemini-3-pro-preview with extended thinking
  • Structured Output with Zod Schemas – Type-safe JSON generation with automatic validation. → Guide
  • CSV & PDF File Support – Attach CSV/PDF files to prompts with auto-detection. → CSV | PDF
  • LiteLLM & SageMaker – Access 100+ models via LiteLLM, deploy custom models on SageMaker. → LiteLLM | SageMaker
  • OpenRouter Integration – Access 300+ models through a single unified API. → Guide
  • HITL & Guardrails – Human-in-the-loop approval workflows and content filtering middleware. → HITL | Guardrails
  • Redis & Context Management – Session export, conversation history, and automatic summarization. → History

Enterprise Security: Human-in-the-Loop (HITL)

NeuroLink includes a production-ready HITL system for regulated industries and high-stakes AI operations:

| Capability | Description | Use Case |
| --- | --- | --- |
| Tool Approval Workflows | Require human approval before AI executes sensitive tools | Financial transactions, data modifications |
| Output Validation | Route AI outputs through human review pipelines | Medical diagnosis, legal documents |
| Confidence Thresholds | Automatically trigger human review below confidence level | Critical business decisions |
| Complete Audit Trail | Full audit logging for compliance (HIPAA, SOC2, GDPR) | Regulated industries |

import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink({
  hitl: {
    enabled: true,
    requireApproval: ["writeFile", "executeCode", "sendEmail"],
    confidenceThreshold: 0.85,
    reviewCallback: async (action, context) => {
      // Custom review logic - integrate with your approval system
      return await yourApprovalSystem.requestReview(action);
    },
  },
});

// AI pauses for human approval before executing sensitive tools
const result = await neurolink.generate({
  input: { text: "Send quarterly report to stakeholders" },
});

Enterprise HITL Guide | Quick Start

Get Started in Two Steps

# 1. Run the interactive setup wizard (select providers, validate keys)
pnpm dlx @juspay/neurolink setup

# 2. Start generating with automatic provider selection
npx @juspay/neurolink generate "Write a launch plan for multimodal chat"

Need a persistent workspace? Launch loop mode with npx @juspay/neurolink loop - Learn more →

🌟 Complete Feature Set

NeuroLink is a comprehensive AI development platform. Every feature below is production-ready and fully documented.

🤖 AI Provider Integration

13 providers unified under one API - Switch providers with a single parameter change.

| Provider | Models | Free Tier | Tool Support | Status | Documentation |
| --- | --- | --- | --- | --- | --- |
| OpenAI | GPT-4o, GPT-4o-mini, o1 | – | ✅ Full | ✅ Production | Setup Guide |
| Anthropic | Claude 4.5 Opus/Sonnet/Haiku, Claude 4 Opus/Sonnet | – | ✅ Full | ✅ Production | Setup Guide \| Subscription Guide |
| Google AI Studio | Gemini 3 Flash/Pro, Gemini 2.5 Flash/Pro | ✅ Free Tier | ✅ Full | ✅ Production | Setup Guide |
| AWS Bedrock | Claude, Titan, Llama, Nova | – | ✅ Full | ✅ Production | Setup Guide |
| Google Vertex | Gemini 3/2.5 (gemini-3-*-preview) | – | ✅ Full | ✅ Production | Setup Guide |
| Azure OpenAI | GPT-4, GPT-4o, o1 | – | ✅ Full | ✅ Production | Setup Guide |
| LiteLLM | 100+ models unified | Varies | ✅ Full | ✅ Production | Setup Guide |
| AWS SageMaker | Custom deployed models | – | ✅ Full | ✅ Production | Setup Guide |
| Mistral AI | Mistral Large, Small | ✅ Free Tier | ✅ Full | ✅ Production | Setup Guide |
| Hugging Face | 100,000+ models | ✅ Free | ⚠️ Partial | ✅ Production | Setup Guide |
| Ollama | Local models (Llama, Mistral) | ✅ Free (Local) | ⚠️ Partial | ✅ Production | Setup Guide |
| OpenAI Compatible | Any OpenAI-compatible endpoint | Varies | ✅ Full | ✅ Production | Setup Guide |
| OpenRouter | 200+ Models via OpenRouter | Varies | ✅ Full | ✅ Production | Setup Guide |

📖 Provider Comparison Guide – Detailed feature matrix and selection criteria
🔬 Provider Feature Compatibility – Test-based compatibility reference for all 19 features across 13 providers


🔧 Built-in Tools & MCP Integration

6 Core Tools (work across all providers, zero configuration):

| Tool | Purpose | Auto-Available | Documentation |
| --- | --- | --- | --- |
| getCurrentTime | Real-time clock access | ✅ | Tool Reference |
| readFile | File system reading | ✅ | Tool Reference |
| writeFile | File system writing | ✅ | Tool Reference |
| listDirectory | Directory listing | ✅ | Tool Reference |
| calculateMath | Mathematical operations | ✅ | Tool Reference |
| websearchGrounding | Google Vertex web search | ⚠️ Requires credentials | Tool Reference |
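
Because core tools are auto-available, nothing needs to be registered before the model can call them. A minimal sketch (the prompt is illustrative; the tool call is made by the model, not by your code):

import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink();

// getCurrentTime is built in, so a time-sensitive prompt can trigger it
// without any tool configuration
const result = await neurolink.generate({
  input: { text: "What is the current date and time?" },
});
console.log(result.content);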

58+ External MCP Servers supported (GitHub, PostgreSQL, Google Drive, Slack, and more):

// stdio transport - local MCP servers via command execution
await neurolink.addExternalMCPServer("github", {
  command: "npx",
  args: ["-y", "@modelcontextprotocol/server-github"],
  transport: "stdio",
  env: { GITHUB_TOKEN: process.env.GITHUB_TOKEN },
});

// HTTP transport - remote MCP servers via URL
await neurolink.addExternalMCPServer("github-copilot", {
  transport: "http",
  url: "https://api.githubcopilot.com/mcp",
  headers: { Authorization: "Bearer YOUR_COPILOT_TOKEN" },
  timeout: 15000,
  retries: 5,
});

// Tools automatically available to AI
const result = await neurolink.generate({
  input: { text: 'Create a GitHub issue titled "Bug in auth flow"' },
});

MCP Transport Options:

| Transport | Use Case | Key Features |
| --- | --- | --- |
| stdio | Local servers | Command execution, environment variables |
| http | Remote servers | URL-based, auth headers, retries, rate limiting |
| sse | Event streams | Server-Sent Events, real-time updates |
| websocket | Bi-directional | Full-duplex communication |
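
The stdio and http examples above come from this page. For the sse and websocket transports, a configuration sketch under the assumption that they mirror the http shape (a transport value plus a url); the endpoints are placeholders:

// Assumed to mirror the http transport options shown above
await neurolink.addExternalMCPServer("event-stream-tools", {
  transport: "sse",
  url: "https://mcp.example.com/sse",
});

await neurolink.addExternalMCPServer("duplex-tools", {
  transport: "websocket",
  url: "wss://mcp.example.com/ws",
});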

📖 MCP Integration Guide – Setup external servers
📖 HTTP Transport Guide – Remote MCP server configuration


💻 Developer Experience Features

SDK-First Design with TypeScript, IntelliSense, and type safety:

| Feature | Description | Documentation |
| --- | --- | --- |
| Auto Provider Selection | Intelligent provider fallback | SDK Guide |
| Streaming Responses | Real-time token streaming | Streaming Guide |
| Conversation Memory | Automatic context management | Memory Guide |
| Full Type Safety | Complete TypeScript types | Type Reference |
| Error Handling | Graceful provider fallback | Error Guide |
| Analytics & Evaluation | Usage tracking, quality scores | Analytics Guide |
| Middleware System | Request/response hooks | Middleware Guide |
| Framework Integration | Next.js, SvelteKit, Express | Framework Guides |
| Extended Thinking | Native thinking/reasoning mode for Gemini 3 and Claude models | Thinking Guide |
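
For streaming specifically, a minimal SDK sketch. The stream() method and its async-iterable chunk shape are assumptions based on the CLI's stream command and the Streaming Guide, not verbatim API:

import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink();

// Assumed stream() API yielding incremental content chunks
const stream = await neurolink.stream({
  input: { text: "Tell me a short story" },
});
for await (const chunk of stream) {
  process.stdout.write(chunk.content ?? "");
}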

📁 Multimodal & File Processing

17+ file categories supported (50+ total file types including code languages) with intelligent content extraction and provider-agnostic processing:

| Category | Supported Types | Processing |
| --- | --- | --- |
| Documents | Excel (.xlsx, .xls), Word (.docx), RTF, OpenDocument | Sheet extraction, text extraction |
| Data | JSON, YAML, XML | Validation, syntax highlighting |
| Markup | HTML, SVG, Markdown, Text | OWASP-compliant sanitization |
| Code | 50+ languages (TypeScript, Python, Java, Go, etc.) | Language detection, syntax metadata |
| Config | .env, .ini, .toml, .cfg | Secure parsing |
| Media | Images (PNG, JPEG, WebP, GIF), PDFs, CSV | Provider-specific formatting |

// Process any supported file type
const result = await neurolink.generate({
  input: {
    text: "Analyze this data and code",
    files: [
      "./data.xlsx", // Excel spreadsheet
      "./config.yaml", // YAML configuration
      "./diagram.svg", // SVG (injected as sanitized text)
      "./main.py", // Python source code
    ],
  },
});

// CLI: Use --file for any supported type
// neurolink generate "Analyze this" --file ./report.xlsx --file ./config.json

Key Features:

  • ProcessorRegistry - Priority-based processor selection with fallback
  • OWASP Security - HTML/SVG sanitization prevents XSS attacks
  • Auto-detection - FileDetector identifies file types by extension and content
  • Provider-agnostic - All processors work across all 13 AI providers

📖 File Processors Guide - Complete reference for all file types


🏢 Enterprise & Production Features

Production-ready capabilities for regulated industries:

| Feature | Description | Use Case | Documentation |
| --- | --- | --- | --- |
| Enterprise Proxy | Corporate proxy support | Behind firewalls | Proxy Setup |
| Redis Memory | Distributed conversation state | Multi-instance deployment | Redis Guide |
| Cost Optimization | Automatic cheapest model selection | Budget control | Cost Guide |
| Multi-Provider Failover | Automatic provider switching | High availability | Failover Guide |
| Telemetry & Monitoring | OpenTelemetry integration | Observability | Telemetry Guide |
| Security Hardening | Credential management, auditing | Compliance | Security Guide |
| Custom Model Hosting | SageMaker integration | Private models | SageMaker Guide |
| Load Balancing | LiteLLM proxy integration | Scale & routing | Load Balancing |

Security & Compliance:

  • ✅ SOC2 Type II compliant deployments
  • ✅ ISO 27001 certified infrastructure compatible
  • ✅ GDPR-compliant data handling (EU providers available)
  • ✅ HIPAA compatible (with proper configuration)
  • ✅ Hardened OS verified (SELinux, AppArmor)
  • ✅ Zero credential logging
  • ✅ Encrypted configuration storage
  • ✅ Automatic context window management with 4-stage compaction pipeline and 80% budget gate

📖 Enterprise Deployment Guide - Complete production checklist


Enterprise Persistence: Redis Memory

Production-ready distributed conversation state for multi-instance deployments:

Capabilities

| Feature | Description | Benefit |
| --- | --- | --- |
| Distributed Memory | Share conversation context across instances | Horizontal scaling |
| Session Export | Export full history as JSON | Analytics, debugging, audit |
| Auto-Detection | Automatic Redis discovery from environment | Zero-config in containers |
| Graceful Failover | Falls back to in-memory if Redis unavailable | High availability |
| TTL Management | Configurable session expiration | Memory management |

Quick Setup

import { NeuroLink } from "@juspay/neurolink";

// Auto-detect Redis from REDIS_URL environment variable
const neurolink = new NeuroLink({
  conversationMemory: {
    enabled: true,
    store: "redis", // Automatically uses REDIS_URL
    ttl: 86400, // 24-hour session expiration
  },
});

// Or explicit configuration
const neurolinkExplicit = new NeuroLink({
  conversationMemory: {
    enabled: true,
    store: "redis",
    redis: {
      host: "redis.example.com",
      port: 6379,
      password: process.env.REDIS_PASSWORD,
      tls: true, // Enable for production
    },
  },
});

// Export conversation for analytics
const history = await neurolink.exportConversation({ format: "json" });
await saveToDataWarehouse(history);

Docker Quick Start

# Start Redis
docker run -d --name neurolink-redis -p 6379:6379 redis:7-alpine

# Configure NeuroLink
export REDIS_URL=redis://localhost:6379

# Start your application
node your-app.js

Redis Setup Guide | Production Configuration | Migration Patterns


🎨 Professional CLI

15+ commands for every workflow:

| Command | Purpose | Example | Documentation |
| --- | --- | --- | --- |
| setup | Interactive provider configuration | neurolink setup | Setup Guide |
| generate | Text generation | neurolink gen "Hello" | Generate |
| stream | Streaming generation | neurolink stream "Story" | Stream |
| status | Provider health check | neurolink status | Status |
| loop | Interactive session | neurolink loop | Loop |
| mcp | MCP server management | neurolink mcp discover | MCP CLI |
| models | Model listing | neurolink models | Models |
| eval | Model evaluation | neurolink eval | Eval |
| serve | Start HTTP server in foreground mode | neurolink serve | Serve |
| server start | Start HTTP server in background mode | neurolink server start | Server |
| server stop | Stop running background server | neurolink server stop | Server |
| server status | Show server status information | neurolink server status | Server |
| server routes | List all registered API routes | neurolink server routes | Server |
| server config | View or modify server configuration | neurolink server config | Server |
| server openapi | Generate OpenAPI specification | neurolink server openapi | Server |

📖 Complete CLI Reference - All commands and options


🤖 GitHub Action

Run AI-powered workflows directly in GitHub Actions with support for all 13 providers and automatic PR/issue commenting.

- uses: juspay/neurolink@v1
  with:
    anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
    prompt: "Review this PR for security issues and code quality"
    post_comment: true

| Feature | Description |
| --- | --- |
| Multi-Provider | 13 providers with unified interface |
| PR/Issue Comments | Auto-post AI responses with intelligent updates |
| Multimodal Support | Attach images, PDFs, CSVs, Excel, Word, JSON, YAML, XML, HTML, SVG, code files to prompts |
| Cost Tracking | Built-in analytics and quality evaluation |
| Extended Thinking | Deep reasoning with thinking tokens |

📖 GitHub Action Guide - Complete setup and examples


💰 Smart Model Selection

NeuroLink features intelligent model selection and cost optimization:

Cost Optimization Features

  • 💰 Automatic Cost Optimization: Selects cheapest models for simple tasks
  • 🔄 LiteLLM Model Routing: Access 100+ models with automatic load balancing
  • 🔍 Capability-Based Selection: Find models with specific features (vision, function calling)
  • ⚡ Intelligent Fallback: Seamless switching when providers fail

# Cost optimization - automatically use cheapest model
npx @juspay/neurolink generate "Hello" --optimize-cost

# LiteLLM specific model selection
npx @juspay/neurolink generate "Complex analysis" --provider litellm --model "anthropic/claude-3-5-sonnet"

# Auto-select best available provider
npx @juspay/neurolink generate "Write code" # Automatically chooses optimal provider

Revolutionary Interactive CLI

NeuroLink's CLI goes beyond simple commands - it's a full AI development environment:

Why Interactive Mode Changes Everything

| Feature | Traditional CLI | NeuroLink Interactive |
| --- | --- | --- |
| Session State | None | Full persistence |
| Memory | Per-command | Conversation-aware |
| Configuration | Flags per command | /set persists across session |
| Tool Testing | Manual per tool | Live discovery & testing |
| Streaming | Optional | Real-time default |

Live Demo: Development Session

$ npx @juspay/neurolink loop --enable-conversation-memory

neurolink > /set provider vertex
✓ provider set to vertex (Gemini 3 support enabled)

neurolink > /set model gemini-3-flash-preview
✓ model set to gemini-3-flash-preview

neurolink > Analyze my project architecture and suggest improvements

✓ Analyzing your project structure...
[AI provides detailed analysis, remembering context]

neurolink > Now implement the first suggestion
[AI remembers previous context and implements suggestion]

neurolink > /mcp discover
✓ Discovered 58 MCP tools:
GitHub: create_issue, list_repos, create_pr...
PostgreSQL: query, insert, update...
[full list]

neurolink > Use the GitHub tool to create an issue for this improvement
✓ Creating issue... (requires HITL approval if configured)

neurolink > /export json > session-2026-01-01.json
✓ Exported 15 messages to session-2026-01-01.json

neurolink > exit
Session saved. Resume with: neurolink loop --session session-2026-01-01.json

Session Commands Reference

| Command | Purpose |
| --- | --- |
| /set <key> <value> | Persist configuration (provider, model, temperature) |
| /mcp discover | List all available MCP tools |
| /export json | Export conversation to JSON |
| /history | View conversation history |
| /clear | Clear context while keeping settings |

Interactive CLI Guide | CLI Reference

Prefer to skip the wizard and configure manually? See docs/getting-started/provider-setup.md.

CLI & SDK Essentials

The neurolink CLI mirrors the SDK, so teams can script experiments and later codify them in application code.

# Discover available providers and models
npx @juspay/neurolink status
npx @juspay/neurolink models list --provider google-ai

# Route to a specific provider/model
npx @juspay/neurolink generate "Summarize customer feedback" \
--provider azure --model gpt-4o-mini

# Turn on analytics + evaluation for observability
npx @juspay/neurolink generate "Draft release notes" \
--enable-analytics --enable-evaluation --format json
import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink({
conversationMemory: {
enabled: true,
store: "redis",
},
enableOrchestration: true,
});

const result = await neurolink.generate({
input: {
text: "Create a comprehensive analysis",
files: [
"./sales_data.csv", // Auto-detected as CSV
"examples/data/invoice.pdf", // Auto-detected as PDF
"./diagrams/architecture.png", // Auto-detected as image
"./report.xlsx", // Auto-detected as Excel
"./config.json", // Auto-detected as JSON
"./diagram.svg", // Auto-detected as SVG (injected as text)
"./app.ts", // Auto-detected as TypeScript code
],
},
provider: "vertex", // PDF-capable provider (see docs/features/pdf-support.md)
enableEvaluation: true,
region: "us-east-1",
});

console.log(result.content);
console.log(result.evaluation?.overallScore);

Gemini 3 with Extended Thinking

import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink();

// Use Gemini 3 with extended thinking for complex reasoning
const result = await neurolink.generate({
  input: {
    text: "Solve this step by step: What is the optimal strategy for...",
  },
  provider: "vertex",
  model: "gemini-3-flash-preview",
  thinkingLevel: "medium", // Options: "minimal", "low", "medium", "high"
});

console.log(result.content);

The full command and API breakdown lives in docs/cli/commands.md and docs/sdk/api-reference.md.

Platform Capabilities at a Glance

| Capability | Highlights |
| --- | --- |
| Provider unification | 13+ providers with automatic fallback, cost-aware routing, provider orchestration (Q3). |
| Multimodal pipeline | Stream images + CSV data + PDF documents across providers with local/remote assets. Auto-detection for mixed file types. |
| Quality & governance | Auto-evaluation engine (Q3), guardrails middleware (Q4), HITL workflows (Q4), audit logging. |
| Memory & context | Conversation memory, Redis history export (Q4), context summarization (Q4). |
| CLI tooling | Loop sessions (Q3), setup wizard, config validation, Redis auto-detect, JSON output. |
| Enterprise ops | Proxy support, regional routing (Q3), telemetry hooks, configuration management. |
| Tool ecosystem | MCP auto discovery, HTTP/stdio/SSE/WebSocket transports, LiteLLM hub access, SageMaker custom deployment, web search. |

Documentation Map

| Area | When to Use | Link |
| --- | --- | --- |
| Getting started | Install, configure, run first prompt | docs/getting-started/index.md |
| Feature guides | Understand new functionality front-to-back | docs/features/index.md |
| CLI reference | Command syntax, flags, loop sessions | docs/cli/index.md |
| SDK reference | Classes, methods, options | docs/sdk/index.md |
| Integrations | LiteLLM, SageMaker, MCP | docs/litellm-integration.md |
| Advanced | Middleware, architecture, streaming patterns | docs/advanced/index.md |
| Cookbook | Practical recipes for common patterns | docs/cookbook/index.md |
| Guides | Migration, Redis, troubleshooting, provider selection | docs/guides/index.md |
| Operations | Configuration, troubleshooting, provider matrix | docs/reference/index.md |


Contributing & Support

  • Bug reports and feature requests → GitHub Issues
  • Development workflow, testing, and pull request guidelines → docs/development/contributing.md
  • Documentation improvements → open a PR referencing the documentation matrix.

NeuroLink is built with ❤️ by Juspay. Contributions, questions, and production feedback are always welcome.