# NeuroLink Documentation (Summary)

> Enterprise AI Development Platform - Unified provider access, MCP integration, professional CLI

Generated: 2026-02-27T07:54:12.405Z
Full documentation: https://docs.neurolink.ink/llms-full.txt

---

## Project Overview

NeuroLink is an enterprise AI development platform that provides:

- Unified access to 13+ AI providers through a single consistent API
- 58+ MCP (Model Context Protocol) tools and integrations
- TypeScript SDK and professional CLI
- Production-ready features: Redis memory, failover, telemetry
- Multimodal support: text, images, PDFs, CSV, audio, video

## Supported Providers

- **OpenAI** (`openai`)
- **Anthropic Claude** (`anthropic`)
- **Google AI Studio (Gemini)** (`google-ai`)
- **Google Vertex AI** (`vertex`)
- **AWS Bedrock** (`bedrock`)
- **Azure OpenAI** (`azure`)
- **Mistral AI** (`mistral`)
- **LiteLLM (100+ models)** (`litellm`)
- **OpenRouter** (`openrouter`)
- **Ollama (Local)** (`ollama`)
- **Hugging Face** (`huggingface`)
- **AWS SageMaker** (`sagemaker`)
- **OpenAI-Compatible** (`openai-compatible`)

## API Signatures Summary

### Core Methods

- `neurolink.generate(options)` - Generate text response
- `neurolink.stream(options)` - Stream text response
- `neurolink.generateImage(options)` - Generate images (Gemini/Vertex)

### Configuration

- `new NeuroLink(config)` - Initialize SDK
- `neurolink.addExternalMCPServer(name, config)` - Add MCP server
- `neurolink.registerTool(tool)` - Register custom tool

### Common Options

- `provider` - AI provider slug (`openai`, `anthropic`, `google-ai`, etc.)
- `model` - Model name
- `input.text` - Prompt text
- `input.images` - Image attachments
- `maxTokens` - Response length limit
- `temperature` - Creativity (0-1)
- `thinkingLevel` - Extended thinking (minimal, low, medium, high)
- `structuredOutput` - Zod schema for typed responses

### CLI Commands

- `neurolink generate <prompt>` - Generate text
- `neurolink stream <prompt>` - Stream text
- `neurolink loop` - Interactive session
- `neurolink setup` - Configure providers
- `neurolink status` - Check provider health
- `neurolink mcp list` - List MCP tools

---

## Table of Contents

### Introduction

- NeuroLink

### Getting Started

- Getting Started
- AI Provider Guides
- Quick Start
- Installation
- Environment Variables Configuration Guide
- AWS Bedrock Provider Guide
- Azure OpenAI Provider Guide
- Google AI Studio Provider Guide
- ⚙️ Provider Configuration Guide
- Google Vertex AI Provider Guide
- Hugging Face Provider Guide
- Redis Quick Start (5 Minutes)
- LiteLLM Provider Guide
- Mistral AI Provider Guide
- Ollama Setup Guide
- OpenAI-Compatible Providers Guide
- OpenRouter Provider Guide
- SageMaker Integration - Deploy Your Custom AI Models

### SDK Reference

- SDK Reference
- API Reference

### CLI

- CLI Command Reference

### Features

- Feature Guides
- Audio Input & Transcription Guide
- Auto Evaluation Engine
- CLI Loop Sessions
- Context Compaction
- Redis Conversation History Export
- CSV File Support
- Enterprise Human-in-the-Loop System
- File Processors Guide
- Guardrails AI Integration with Middleware
- Guardrails Implementation Guide
- Guardrails Middleware
- Human-in-the-Loop (HITL) Workflows
- Image Generation Streaming Guide
- Interactive CLI - Your AI Development Environment
- MCP Tools Ecosystem - 58+ Integrations
- Memory Guide
- Multimodal Chat Experiences
- Multimodal Capabilities Guide
- Observability Guide
- Office Documents Support
- PDF File Support
- Provider Orchestration Brain
- RAG Document Processing Guide
- Real-time Services Guide
- Regional Streaming Controls
- Speech-to-Speech Agents: Architecture and Gemini Live Integration Plan
- Structured Output with Zod Schemas
- Extended Thinking Configuration
- Text-to-Speech (TTS) Integration Guide
- Video Analysis
- Video Generation with Veo 3.1

### Examples

- Examples & Tutorials
- Advanced Examples
- Basic Usage Examples
- Business Applications
- Tool Blocking Feature Example
- Use Cases & Applications

---

# Introduction

## NeuroLink

NeuroLink - The Enterprise AI SDK for Production Applications

13 Providers | 58+ MCP Tools | HITL Security | Redis Persistence

[npm version](https://www.npmjs.com/package/@juspay/neurolink)
[npm downloads](https://www.npmjs.com/package/@juspay/neurolink)
[Build Status](https://github.com/juspay/neurolink/actions/workflows/ci.yml)
[Coverage Status](https://coveralls.io/github/juspay/neurolink?branch=main)
[License: MIT](https://opensource.

[Content truncated - see llms-full.txt for complete documentation]

---

# Getting Started

Welcome to NeuroLink! This section will help you get up and running quickly with the Enterprise AI Development Platform.

## What You'll Learn

- ⏱️ **[Quick Start](/docs/getting-started/quick-start)** - Get NeuroLink working in under 2 minutes with basic examples for both CLI and SDK usage.
- **[Installation](/docs/getting-started/installation)** - Detailed installation instructions for different environments and package managers.

[Content truncated - see llms-full.txt for complete documentation]

---

## AI Provider Guides

Complete setup guides for all supported AI providers.
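Every provider guide below ultimately feeds the same unified call shape listed in the API Signatures Summary. As a quick orientation, here is a small sketch of that option shape; the `buildGenerateOptions` helper and the `GenerateOptions` type are illustrative constructs of this summary, not part of the NeuroLink SDK — only the field names (`provider`, `model`, `input.text`, `maxTokens`, `temperature`) come from the Common Options list above.

```typescript
// Hypothetical helper illustrating the unified option shape from the summary.
// Field names mirror the Common Options list; the helper itself is not SDK API.
type GenerateOptions = {
  provider: string;
  model?: string;
  input: { text: string };
  maxTokens?: number;
  temperature?: number;
};

function buildGenerateOptions(
  prompt: string,
  provider: string,
  overrides: Partial<Omit<GenerateOptions, "input" | "provider">> = {},
): GenerateOptions {
  // Defaults first, caller overrides last
  return { provider, input: { text: prompt }, temperature: 0.7, ...overrides };
}

const opts = buildGenerateOptions("Summarize this doc", "google-ai", {
  maxTokens: 256,
});
```

Switching providers is then a one-word change to the `provider` slug, which is the core promise of the unified API.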
## Enterprise Providers

Production-grade providers for enterprise deployments:

### [Azure OpenAI](/docs/getting-started/providers/azure-openai)

**Enterprise AI with Microsoft Azure**

- SOC2, HIPAA, ISO 27001 compliant
- Multi-region deployment (30+ regions)
- Private endpoints with VNet
- Enterprise SLAs

[Setup Guide →](/docs/getting-started/providers/azure-openai)

[Content truncated - see llms-full.txt for complete documentation]

---

## Quick Start

Get NeuroLink running in under 2 minutes with this quick start guide.

### Prerequisites

- **Node.js 18+**
- **npm/pnpm/yarn** package manager
- **API key** for at least one AI provider (we recommend starting with Google AI Studio - it has a free tier)

### ⚡ 1-Minute Setup

#### Option 1: CLI Usage (No Installation)

```bash
# Set up your API key (Google AI Studio has a free tier)
export GOOGLE_AI_API_KEY="AIza-your-google-ai-api-key"

# Generate text instantly
```

[Content truncated - see llms-full.txt for complete documentation]

---

## Installation

Complete installation guide for NeuroLink CLI and SDK across different environments.

### Choose Your Installation Method

```bash
# Direct usage (recommended)
npx @juspay/neurolink generate "Hello, AI"

# Global installation (optional)
npm install -g @juspay/neurolink
neurolink generate "Hello, AI"
```

```bash
# npm
npm install @juspay/neurolink

# pnpm
pnpm add @juspay/neurolink

# yarn
yarn add @juspay/neurolink
```

```bash
git clone https://github.com/juspay/neurolink
```

[Content truncated - see llms-full.txt for complete documentation]

---

## Environment Variables Configuration Guide

This guide provides comprehensive setup instructions for all AI providers supported by NeuroLink. The CLI automatically loads environment variables from `.env` files, making configuration seamless.

### Quick Setup

#### Automatic .env Loading ✨ NEW!
NeuroLink CLI automatically loads environment variables from `.env` files in your project directory:

```bash
# Create .env file (automatically loaded)
echo 'OPENAI_API_KEY="sk-your-key"' > .env
```

[Content truncated - see llms-full.txt for complete documentation]

---

## AWS Bedrock Provider Guide

**Enterprise AI with Claude, Llama, Mistral, and more on AWS infrastructure**

| Provider       | Models                                 | Best For                    |
| -------------- | -------------------------------------- | --------------------------- |
| **Anthropic**  | Claude 3.5 Sonnet, Claude 3 Opus/Haiku | Complex reasoning, coding   |
| **Meta**       | Llama 3.1 (8B, 70B, 405B)              | Open source, cost-effective |
| **Mistral AI** | Mistral Large, Mixtral 8x7B            | European compliance, coding |

[Content truncated - see llms-full.txt for complete documentation]

---

## Azure OpenAI Provider Guide

**Enterprise-grade OpenAI models with Microsoft Azure infrastructure and compliance**

### Quick Start

#### 1. Create Azure OpenAI Resource

```bash
# Via Azure CLI
az cognitiveservices account create \
  --name my-openai-resource \
  --resource-group my-resource-group \
  --location eastus \
  --kind OpenAI \
  --sku S0
```

Or use [Azure Portal](https://portal.azure.com/#create/Microsoft.CognitiveServicesOpenAI):

1. Search for "Azure OpenAI"
2. Click "Create"
3. [Content truncated - see llms-full.txt for complete documentation]

---

## Google AI Studio Provider Guide

**Direct access to Google's Gemini models with a generous free tier and simple API key authentication**

### Quick Start

#### 1. Get Your API Key

1. Visit [Google AI Studio](https://aistudio.google.com/)
2. Sign in with your Google account (no GCP project needed)
3. Click **Get API Key** in the top navigation
4. Click **Create API Key**
5. Copy the generated key (starts with `AIza`)

#### 2. Configure NeuroLink

Add to your `.env` file:

[Content truncated - see llms-full.txt for complete documentation]

---

## ⚙️ Provider Configuration Guide

NeuroLink supports multiple AI providers with flexible authentication methods. This guide covers complete setup for all supported providers.

### Supported Providers

- **OpenAI** - GPT-4o, GPT-4o-mini, GPT-4-turbo
- **Amazon Bedrock** - Claude 3.7 Sonnet, Claude 3.5 Sonnet, Claude 3 Haiku
- **Amazon SageMaker** - Custom models deployed on SageMaker endpoints
- **Google Vertex AI** - Gemini 3 Flash/Pro (preview), Gemini 2.5 Flash, Claude 4.0 Sonnet

[Content truncated - see llms-full.txt for complete documentation]

---

## Google Vertex AI Provider Guide

**Enterprise AI on Google Cloud with Claude, Gemini, and custom models**

### Quick Start

#### 1. Create GCP Project

```bash
# Create project
gcloud projects create my-ai-project --name="My AI Project"

# Set project
gcloud config set project my-ai-project

# Enable Vertex AI API
gcloud services enable aiplatform.googleapis.com
```

#### 2. Setup Authentication

**Option A: Service Account (Production)**

```bash
# Create service account
```

[Content truncated - see llms-full.txt for complete documentation]

---

## Hugging Face Provider Guide

**Access 100,000+ open-source AI models through Hugging Face's free inference API**

### Quick Start

#### 1. Get Your API Token

1. Visit [Hugging Face](https://huggingface.co/)
2. Create a free account (no credit card required)
3. Go to [Settings → Access Tokens](https://huggingface.co/settings/tokens)
4. Click "New token"
5. Give it a name (e.g., "NeuroLink")
6. Select "Read" permissions
7. Copy the token (starts with `hf_...`)

#### 2. Configure NeuroLink

[Content truncated - see llms-full.txt for complete documentation]

---

## Redis Quick Start (5 Minutes)

Get Redis storage up and running with NeuroLink in under 5 minutes.

### Prerequisites

- Docker installed **OR** Redis installed locally
- NeuroLink SDK installed (`pnpm add @juspay/neurolink`)

### Option 1: Docker (Recommended)

The fastest way to get Redis running for development and testing.

#### Start Redis Container

```bash
# Start Redis with persistence
docker run -d \
  --name neurolink-redis \
  -p 6379:6379 \
  -v redis-data:/data \
  redis:7-alpine
```

[Content truncated - see llms-full.txt for complete documentation]

---

## LiteLLM Provider Guide

**Access 100+ AI providers through a unified OpenAI-compatible proxy with advanced features**

### Quick Start

#### Option 1: Direct Integration (SDK Only)

Use LiteLLM directly in your code without running a proxy server.

##### 1. Install LiteLLM

```bash
pip install litellm
```

##### 2. Configure NeuroLink

```bash
# Add provider API keys to .env
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_AI_API_KEY=AIza...
```

##### 3. Use via LiteLLM Python Client

[Content truncated - see llms-full.txt for complete documentation]

---

## Mistral AI Provider Guide

**European AI excellence with GDPR compliance and a competitive free tier**

### Quick Start

#### 1. Get Your API Key

1. Visit [Mistral AI Console](https://console.mistral.ai/)
2. Create a free account
3. Go to the "API Keys" section
4. Click "Create new key"
5. Copy the key (format: `xxx...`)

#### 2. Configure NeuroLink

Add to your `.env` file:

```bash
MISTRAL_API_KEY=your_api_key_here
```

#### 3. Test the Setup

```bash
# CLI - Test with default model
```

[Content truncated - see llms-full.txt for complete documentation]

---

## Ollama Setup Guide

Complete guide for setting up Ollama with NeuroLink for local AI capabilities.
### macOS Installation

#### Method 1: Homebrew (Recommended)

```bash
# Install Ollama
brew install ollama

# Start Ollama service (auto-starts on install)
ollama serve
```

#### Method 2: Direct Download

1. Download from [ollama.ai](https://ollama.ai)
2. Open the .dmg file
3. Drag Ollama to Applications
4. Launch from Applications

#### Verify Installation

```bash
ollama --version
ollama list
```

[Content truncated - see llms-full.txt for complete documentation]

---

## OpenAI-Compatible Providers Guide

**Connect to any OpenAI-compatible API: OpenRouter, vLLM, LocalAI, and more**

| Provider       | Description                          | Best For               |
| -------------- | ------------------------------------ | ---------------------- |
| **OpenRouter** | AI provider aggregator (100+ models) | Multi-provider access  |
| **vLLM**       | High-performance inference server    | Self-hosted models     |
| **LocalAI**    | Local OpenAI alternative             | Privacy, offline usage |

[Content truncated - see llms-full.txt for complete documentation]

---

## OpenRouter Provider Guide

**Access 300+ AI models from 60+ providers through a single unified API**

### Quick Start

#### 1. Get Your API Key

Sign up at [https://openrouter.ai](https://openrouter.ai) and get your API key from [https://openrouter.ai/keys](https://openrouter.ai/keys).

#### 2. Configure Environment

Add your API key to `.env`:

```bash
# Required
OPENROUTER_API_KEY=sk-or-v1-...

# Optional: Attribution (shows in OpenRouter dashboard)
OPENROUTER_REFERER=https://yourapp.com
```

[Content truncated - see llms-full.txt for complete documentation]

---

## SageMaker Integration - Deploy Your Custom AI Models

> **FULLY IMPLEMENTED**: NeuroLink now supports Amazon SageMaker, enabling you to deploy and use your own custom trained models through NeuroLink's unified interface. All features documented below are complete and production-ready.

### What is SageMaker Integration?
SageMaker integration transforms NeuroLink into a platform for custom AI model deployment, offering:

[Content truncated - see llms-full.txt for complete documentation]

---

# SDK Reference

The NeuroLink SDK provides a TypeScript-first programmatic interface for integrating AI capabilities into your applications.

## Overview

The SDK is designed for:

- **Web applications** (React, Vue, Svelte, Angular)
- **Backend services** (Node.js, Express, Fastify)
- **Serverless functions** (Vercel, Netlify, AWS Lambda)
- **Desktop applications** (Electron, Tauri)

## Quick Start

```typescript
import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink();

// Generate text
```

[Content truncated - see llms-full.txt for complete documentation]

---

## API Reference

Complete API reference for NeuroLink.

### Core API

#### Generate Text

```http
POST /api/generate
```

#### Stream Text

```http
POST /api/stream
```

#### Provider Status

```http
GET /api/status
```

### MCP Integration

#### List MCP Tools

```http
GET /api/mcp/tools
```

#### Execute MCP Tool

```http
POST /api/mcp/execute
```

#### MCP Server Status

```http
GET /api/mcp/status
```

For complete API documentation, see [API Reference](/docs/sdk/api-reference).

---

# CLI

## CLI Command Reference

The NeuroLink CLI mirrors the SDK. Every command shares consistent options and outputs, so you can prototype in the terminal and port the workflow to code later.

### Install or Run Ad-hoc

```bash
# Run without installation
npx @juspay/neurolink --help

# Install globally
npm install -g @juspay/neurolink

# Local project dependency
npm install @juspay/neurolink
```

### Command Map

[Content truncated - see llms-full.txt for complete documentation]

---

# Features

## Feature Guides

Comprehensive guides for all NeuroLink features organized by category. Each guide includes setup, usage patterns, configuration, and troubleshooting.
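Several of the features below (fallback chains, the orchestration brain) build on one underlying idea: try providers in a preferred order and fall through on failure. The following is a standalone sketch of that failover pattern with stubbed providers — it is not NeuroLink's internal code, and the `withFailover` helper and `Generate` type are constructs of this summary.

```typescript
// Standalone sketch of provider failover (not NeuroLink internals):
// try each provider in order, returning the first successful response.
type Generate = (prompt: string) => Promise<string>;

async function withFailover(
  providers: Array<{ name: string; generate: Generate }>,
  prompt: string,
): Promise<{ provider: string; text: string }> {
  let lastError: unknown;
  for (const p of providers) {
    try {
      return { provider: p.name, text: await p.generate(prompt) };
    } catch (err) {
      lastError = err; // remember the failure and fall through to the next provider
    }
  }
  throw lastError ?? new Error("no providers configured");
}

// Demo with stubbed providers: the first fails, the second succeeds.
const result = await withFailover(
  [
    {
      name: "openai",
      generate: async () => {
        throw new Error("rate limited");
      },
    },
    { name: "google-ai", generate: async () => "ok from gemini" },
  ],
  "hello",
);
```

In the real SDK the fallback chain is configured rather than hand-rolled; the sketch only shows the control flow the feature guides refer to.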
[Content truncated - see llms-full.txt for complete documentation]

---

## Audio Input & Transcription Guide

# Audio Input & Voice Conversations Guide

NeuroLink provides comprehensive audio input capabilities, enabling real-time voice conversations with AI models. This guide covers currently available features, audio specifications, and upcoming enhancements.

### Overview

#### Currently Available

NeuroLink supports the following audio capabilities today:

- **Real-time voice conversations** via Gemini Live (Google AI Studio)
- **Text-to-Speech (TTS) output** via Google Cloud TTS integration

[Content truncated - see llms-full.txt for complete documentation]

---

## Auto Evaluation Engine

NeuroLink 7.46.0 adds an automated quality gate that scores every response using an LLM-as-judge pipeline. Scores, rationales, and severity flags are surfaced in both CLI and SDK workflows so you can monitor drift and enforce minimum quality thresholds.

### What It Does

- Generates a structured evaluation payload (`result.evaluation`) for every call with `enableEvaluation: true`.

[Content truncated - see llms-full.txt for complete documentation]

---

## CLI Loop Sessions

`neurolink loop` delivers a persistent CLI workspace so you can explore prompts, tweak parameters, and inspect state without restarting the CLI. Session variables, Redis-backed history, and built-in help turn the CLI into a playground for prompt engineering and operator runbooks.

### Why Loop Mode

- **Stateful sessions** – keep provider/model/temperature context between commands.
- **Memory on demand** – enable in-memory or Redis-backed conversation history per session.
[Content truncated - see llms-full.txt for complete documentation]

---

## Context Compaction

### Overview

NeuroLink's Context Compaction system automatically manages conversation context windows, preventing overflow errors and maintaining conversation quality as sessions grow longer. It runs transparently before every `generate()` and `stream()` call.

[Content truncated - see llms-full.txt for complete documentation]

---

## Redis Conversation History Export

> **Since**: v7.38.0 | **Status**: Stable | **Availability**: SDK + CLI

### Overview

**What it does**: Export complete conversation session history from Redis storage as JSON for analytics, debugging, and compliance auditing.

**Why use it**: Access structured conversation data for analysis, user behavior insights, quality assurance, and debugging failed sessions. Essential for production observability.

**Common use cases**:

[Content truncated - see llms-full.txt for complete documentation]

---

## CSV File Support

NeuroLink provides seamless CSV file support as a **multimodal input type** - attach CSV files directly to your AI prompts for data analysis, insights, and processing.

### Overview

CSV support in NeuroLink works just like image support - it's a multimodal input that gets automatically processed and injected into your prompts. The system:

1. **Auto-detects** CSV files using FileDetector (magic bytes, MIME types, extensions, content heuristics)
2. [Content truncated - see llms-full.txt for complete documentation]

---

## Enterprise Human-in-the-Loop System

> **Since**: v7.39.0 | **Status**: Production Ready | **Availability**: SDK & CLI

:::note[Feature Status - Enterprise HITL]
This document describes enterprise HITL features. Some advanced features (marked as "Planned") are not yet implemented and represent the target API design for future releases.
:::

**Currently Available:** Basic HITL with `dangerousActions`, `timeout`, `autoApproveOnTimeout`, `allowArgumentModification`, and `auditLogging`.

[Content truncated - see llms-full.txt for complete documentation]

---

## File Processors Guide

NeuroLink includes a comprehensive file processing system that supports 20+ file types with intelligent content extraction, security sanitization, and provider-agnostic formatting. This system enables seamless multimodal AI interactions across all 13 supported providers.

### Overview

The file processor system is organized into a modular architecture:

```
src/lib/processors/
├── base/   # BaseFileProcessor abstract class and types
```

[Content truncated - see llms-full.txt for complete documentation]

---

## Guardrails AI Integration with Middleware

This document outlines the modern, simplified approach to integrating Guardrails AI with the NeuroLink platform using the new `MiddlewareFactory`. This enhances the safety, reliability, and security of your AI applications in a modular and maintainable way.

### Overview

Guardrails AI is an open-source library that provides a framework for creating and managing guardrails for large language models (LLMs).

[Content truncated - see llms-full.txt for complete documentation]

---

## Guardrails Implementation Guide

This document provides comprehensive documentation for the NeuroLink guardrails implementation, including pre-call filtering, content sanitization, and AI-powered evaluation.

### Overview

The guardrails implementation provides advanced content filtering and safety mechanisms for AI interactions.
It includes:

- **Pre-call Evaluation**: AI-powered safety assessment before processing
- **Content Filtering**: Bad words and regex pattern filtering

[Content truncated - see llms-full.txt for complete documentation]

---

## Guardrails Middleware

> **Since**: v7.42.0 | **Status**: Stable | **Availability**: CLI + SDK

### Overview

**What it does**: Guardrails middleware provides real-time content filtering and policy enforcement for AI model outputs, blocking profanity, PII, unsafe content, and custom-defined terms.

**Why use it**: Protect your application from generating harmful, inappropriate, or non-compliant content. Ensures AI responses meet safety standards and regulatory requirements.

[Content truncated - see llms-full.txt for complete documentation]

---

## Human-in-the-Loop (HITL) Workflows

> **Since**: v7.39.0 | **Status**: Stable | **Availability**: SDK

### Overview

**What it does**: HITL pauses AI tool execution to request explicit user approval before performing risky operations like deleting files, modifying databases, or making expensive API calls.

**Why use it**: Prevent costly mistakes and give users control over potentially dangerous AI actions. Think of it as an "Are you sure?" dialog for AI assistant operations.

[Content truncated - see llms-full.txt for complete documentation]

---

## Image Generation Streaming Guide

### Overview

NeuroLink supports image generation through AI models like Google Vertex AI's `gemini-3-pro-image-preview` and `gemini-2.5-flash-image`. This guide explains how image generation works in both `generate()` and `stream()` modes, including CLI usage with automatic file saving, technical architecture, and usage examples.

### Table of Contents

1. [Architecture Overview](#architecture-overview)
2. [Streaming Modes](#streaming-modes)
3.
[Content truncated - see llms-full.txt for complete documentation]

---

## Interactive CLI: Your AI Development Environment

> **Since**: v7.0.0 | **Status**: Production Ready | **Availability**: CLI

### Why Interactive Mode?

NeuroLink's Interactive CLI transforms traditional command-line usage into a persistent development environment optimized for AI workflow iteration.

[Content truncated - see llms-full.txt for complete documentation]

---

## MCP Tools Ecosystem: 58+ Integrations

> **Since**: v7.0.0 | **Status**: Production Ready | **MCP Version**: 2024-11-05

### Overview

NeuroLink's Model Context Protocol (MCP) integration provides a **universal plugin system** that transforms the SDK from a simple AI interface into a complete AI development platform.

[Content truncated - see llms-full.txt for complete documentation]

---

## Memory Guide

> **Since**: v9.12.0 | **Status**: Stable | **Availability**: SDK

### Overview

NeuroLink includes a **memory engine** powered by the `@juspay/hippocampus` SDK. Unlike conversation memory (which tracks recent turns in a session), memory maintains a **condensed summary** of durable facts about each user across all conversations.

Key characteristics:

- **Per-user**: Each user gets an independent memory store keyed by `userId`

[Content truncated - see llms-full.txt for complete documentation]

---

## Multimodal Chat Experiences

NeuroLink 7.47.0 introduces full multimodal pipelines so you can mix text, URLs, and local images in a single interaction. The CLI, SDK, and loop sessions all use the same message builder, ensuring parity across workflows.

### Video Generation {#video-generation}

NeuroLink supports **video generation** from images using Google's Veo 3.1 model via Vertex AI. Transform static images into 8-second videos with synchronized audio.
```typescript
const result = await neurolink.generate({
  input: {
```

[Content truncated - see llms-full.txt for complete documentation]

---

## Multimodal Capabilities Guide

NeuroLink provides comprehensive multimodal support, allowing you to combine text with various media types in a single AI interaction. This guide covers all supported input types, provider capabilities, and best practices.

### Overview

**Supported Input Types:**

- **Images** - JPEG, PNG, GIF, WebP, HEIC (vision-capable models)
- **PDFs** - Document analysis and content extraction
- **CSV/Spreadsheets** - Data analysis and tabular content processing

[Content truncated - see llms-full.txt for complete documentation]

---

## Observability Guide

Enterprise-grade observability for AI operations with Langfuse and OpenTelemetry integration.

### Overview

NeuroLink provides comprehensive observability features for monitoring AI operations in production:

- **Langfuse Integration**: LLM-specific observability with token tracking, cost analysis, and trace visualization
- **OpenTelemetry Support**: Standard distributed tracing compatible with Jaeger, Zipkin, and other backends

[Content truncated - see llms-full.txt for complete documentation]

---

## Office Documents Support

NeuroLink provides seamless Office document support as a **multimodal input type** - attach DOCX, PPTX, and XLSX documents directly to your AI prompts for document analysis, data extraction, and content processing.

### Overview

Office document support in NeuroLink works as a native multimodal input - the system automatically processes Office files and passes them to the AI provider's document understanding capabilities. The system:

1.
[Content truncated - see llms-full.txt for complete documentation]

---

## PDF File Support

NeuroLink provides seamless PDF file support as a **multimodal input type** - attach PDF documents directly to your AI prompts for document analysis, information extraction, and content processing.

### Overview

PDF support in NeuroLink works as a native multimodal input - the system automatically processes PDF files and passes them directly to the AI provider's vision/document understanding capabilities. The system:

1. [Content truncated - see llms-full.txt for complete documentation]

---

## Provider Orchestration Brain

The orchestration engine introduced in 7.42.0 pairs a task classifier with a provider/model router. When enabled, NeuroLink inspects each prompt, chooses the most suitable provider/model based on capabilities and availability, and carries that preference through the fallback chain.

### Highlights

- **Binary task classifier** – categorises prompts (analysis vs. creative, etc.) before routing.

[Content truncated - see llms-full.txt for complete documentation]

---

## RAG Document Processing Guide

> **Since**: v8.44.0 | **Status**: Stable | **Availability**: SDK + CLI

> **Provider Defaults:** When `--provider` (CLI) or `provider` (SDK) is not specified, NeuroLink defaults to **Vertex AI** with **gemini-2.5-flash**. Set the `NEUROLINK_PROVIDER` or `AI_PROVIDER` environment variable to change the default provider.

### Overview

NeuroLink provides enterprise-grade RAG (Retrieval-Augmented Generation) capabilities for building production AI applications:

[Content truncated - see llms-full.txt for complete documentation]

---

## Real-time Services Guide

**Enterprise WebSocket Infrastructure for NeuroLink**

### Overview

NeuroLink provides enterprise-grade real-time services with WebSocket infrastructure, enhanced chat capabilities, and streaming optimization.
These features enable building professional AI applications with real-time bidirectional communication.

### Key Features

- **WebSocket Infrastructure** - Professional-grade server with connection management

[Content truncated - see llms-full.txt for complete documentation]

---

## Regional Streaming Controls

Latency, compliance, and model availability often depend on which region you call. NeuroLink 7.45.0 threads the `region` parameter through the generate/stream stack so you can target specific data centres when working with providers that expose regional endpoints.

### Supported Providers

| Provider | How to Set Region | Defaults |

[Content truncated - see llms-full.txt for complete documentation]

---

## Speech-to-Speech Agents: Architecture and Gemini Live Integration Plan

Status: Proposal (Docs only)
Owner: NeuroLink Platform
Last updated: 2025-09-01

### Goals

- Use `NeuroLink.stream` as the single, unified API for both text and voice streaming (no separate engine entrypoint).
- Start with Google Gemini Live API (Studio) as the first realtime provider.
- Server-level only: users attach their own WebSocket(s) and forward events; we do not host WS in the SDK.

[Content truncated - see llms-full.txt for complete documentation]

---

## Structured Output with Zod Schemas

Generate type-safe, validated JSON responses using Zod schemas. Available in the `generate()` function only (not `stream()`).
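Conceptually, structured output means the model's raw JSON text is parsed and checked against the schema you declared before your code ever sees it. The following sketch shows that idea with a hand-rolled check; it is illustrative only (the `parseUser` helper is not SDK API — in NeuroLink you pass a real Zod schema via `structuredOutput` instead):

```typescript
// Conceptual sketch of schema-validated output (not SDK internals):
// parse the model's raw JSON, then verify the declared fields.
type User = { name: string; age: number };

function parseUser(raw: string): User {
  const data = JSON.parse(raw);
  // A Zod schema would perform this validation (and more) for you
  if (typeof data.name !== "string" || typeof data.age !== "number") {
    throw new Error("response does not match schema");
  }
  return data as User;
}

const user = parseUser('{"name": "Ada", "age": 36}');
```

With `structuredOutput`, the SDK handles this parse-and-validate step, so a successful `generate()` call returns data already matching your schema's inferred type.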
### Quick Example

```typescript
import { z } from "zod";
import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink();

// Define your schema
const UserSchema = z.object({
  name: z.string(),
  age: z.number(),
  email: z.string(),
  occupation: z.string(),
});

// Generate with schema
const result = await neurolink.generate({
  input: {
```

[Content truncated - see llms-full.txt for complete documentation]

---

## Extended Thinking Configuration

Enable extended thinking/reasoning modes for AI models that support deeper reasoning capabilities. This feature allows models to "think through" complex problems before providing a response.

### Overview

NeuroLink supports extended thinking/reasoning configuration for models that provide this capability.

[Content truncated - see llms-full.txt for complete documentation]

---

## Text-to-Speech (TTS) Integration Guide

NeuroLink provides integrated Text-to-Speech (TTS) capabilities, allowing you to generate high-quality audio from text prompts or AI-generated responses. This feature is perfect for voice assistants, accessibility features, narration, podcasts, and more.

### Overview

**Key Features:**

- **High-quality voices** - Neural, Wavenet, and Standard voice types
- **Multiple languages** - 50+ voices across 10+ languages

[Content truncated - see llms-full.txt for complete documentation]

---

## Video Analysis

Comprehensive video analysis for NeuroLink, powered by Gemini 2.0 Flash. This feature goes beyond basic visual description: it provides a deep logical audit of video sequences to understand "why" and "how" events occur.

### Key Capabilities

- **Logical Analysis**: Dissect any video to extract the underlying intent, cause-and-effect, and logical progression.
- **Action-Reaction Chain**: A step-by-step audit of user or system actions and their immediate visual results.
[Content truncated - see llms-full.txt for complete documentation]

---

## Video Generation with Veo 3.1

NeuroLink integrates Google's Veo 3.1 model to enable AI-powered video generation with audio from image and text prompt inputs. Transform static images into dynamic, professional-quality video content with synchronized audio.

## Overview

Video generation in NeuroLink leverages Google's state-of-the-art Veo 3.1 model through Vertex AI. The system uses the existing `generate()` function with video-specific options:

1. **Accepts** an input image via `input.`

[Content truncated - see llms-full.txt for complete documentation]

---

# Examples

## Examples & Tutorials

Learn NeuroLink through practical examples and step-by-step tutorials for real-world applications.

## What You'll Find Here

This section contains practical implementations, use cases, and tutorials to help you integrate NeuroLink into your projects effectively.

- **[Basic Usage](/docs/examples/basic-usage)**
  Fundamental examples for both CLI and SDK usage, covering core functionality and common patterns.
- ⭐ **[Advanced Examples](/docs/advanced)**

[Content truncated - see llms-full.txt for complete documentation]

---

## Advanced Examples

Complex integration patterns, enterprise workflows, and sophisticated use cases for NeuroLink.

## Enterprise Architecture

### Multi-Provider Load Balancing

```typescript
class LoadBalancedNeuroLink {
  private instances: Map<string, NeuroLink>;
  private usage: Map<string, number>; // per-provider request counters (assumed)
  private limits: Map<string, number>;

  constructor() {
    this.instances = new Map([
      ["openai", new NeuroLink({ defaultProvider: "openai" })],
      ["google-ai", new NeuroLink({ defaultProvider: "google-ai" })],
```

[Content truncated - see llms-full.txt for complete documentation]

---

## Basic Usage Examples

Simple examples to get started with NeuroLink in different scenarios and programming languages.
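The examples in this section assume a configured provider. To experiment with the call shape offline, you can stub the client; everything in this sketch is illustrative (`FakeClient` and `GenerateOptions` are not SDK exports), with option names taken from the API summary above.

```typescript
// Minimal shape of the options used throughout these examples
interface GenerateOptions {
  provider?: string;
  input: { text: string };
  maxTokens?: number;
  temperature?: number;
}

// Illustrative stub mirroring the neurolink.generate() call shape,
// so the surrounding code can be exercised without credentials
class FakeClient {
  async generate(options: GenerateOptions): Promise<{ content: string }> {
    return { content: `[stub reply to: ${options.input.text}]` };
  }
}

const client = new FakeClient();
const result = await client.generate({
  provider: "openai",
  input: { text: "Hello, NeuroLink!" },
  temperature: 0.7,
});
console.log(result.content); // "[stub reply to: Hello, NeuroLink!]"
```

Swapping `FakeClient` for a real `NeuroLink` instance keeps the same call sites, which is a convenient pattern for unit-testing application code around the SDK.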
**Prerequisites**: Before running these examples, ensure you have configured at least one AI provider. See the [Provider Configuration Guide](/docs/getting-started/provider-setup) for setup instructions.

## Quick Start Examples

### Simple Text Generation

```typescript
const neurolink = new NeuroLink();

// Basic text generation
const result = await neurolink.generate({
```

[Content truncated - see llms-full.txt for complete documentation]

---

## Business Applications

Enterprise-focused examples demonstrating NeuroLink's value in business environments, ROI optimization, and organizational workflows.

## Executive Decision Support

### Strategic Planning Assistant

**Scenario**: C-level executives need AI-powered insights for strategic decisions.

```typescript
class StrategyAssistant {
  private neurolink: NeuroLink;

  constructor() {
    this.neurolink = new NeuroLink({
      analytics: { enabled: true },
    });
  }
```

[Content truncated - see llms-full.txt for complete documentation]

---

## Tool Blocking Feature Example

This example demonstrates how to use the `blockedTools` feature to prevent specific tools from being executed on external MCP servers.

## Example Configuration

Create or update your `.mcp-config.json` file:

```json
{
  "mcpServers": {
    "filesystem": {
      "name": "filesystem",
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "."],
      "transport": "stdio",
```

[Content truncated - see llms-full.txt for complete documentation]

---

## Use Cases & Applications

Real-world scenarios and practical applications where NeuroLink adds value across different industries and roles.

## Software Development

### Code Generation & Review

**Scenario**: Development team needs to accelerate coding and improve quality.
```typescript
class DeveloperAssistant {
  private neurolink: NeuroLink;

  constructor() {
    this.neurolink = new NeuroLink();
  }

  async generateCode(
    requirement: string,
    language: string,
```

[Content truncated - see llms-full.txt for complete documentation]

---

# Additional Documentation

This is a summary version. For complete documentation, see:

- Full text: https://docs.neurolink.ink/llms-full.txt
- Web docs: https://docs.neurolink.ink
- GitHub: https://github.com/juspay/neurolink