# 📊 Enterprise Telemetry Guide

**Advanced OpenTelemetry Integration for NeuroLink**

## 📋 Overview

NeuroLink includes optional OpenTelemetry integration for enterprise monitoring and observability. The telemetry system provides comprehensive insights into AI operations, performance metrics, and system health, with zero overhead when disabled.
## 🚀 Key Features

- ✅ **Zero Overhead by Default** - Telemetry stays disabled unless explicitly configured
- 🤖 **AI Operation Tracking** - Monitor text generation, token usage, costs, and response times
- 🔧 **MCP Tool Monitoring** - Track tool calls, execution time, and success rates
- 📈 **Performance Metrics** - Response times, error rates, and throughput monitoring
- 🔍 **Distributed Tracing** - Full request tracing across AI providers and services
- 📊 **Custom Dashboards** - Grafana, Jaeger, and Prometheus integration
- 🎯 **Production Ready** - Enterprise-grade monitoring for production deployments
## 🎯 Langfuse Integration

NeuroLink provides native integration with Langfuse for LLM-specific observability.

### Quick Setup

```typescript
import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink({
  observability: {
    langfuse: {
      enabled: true,
      publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
      secretKey: process.env.LANGFUSE_SECRET_KEY!,
      baseUrl: "https://cloud.langfuse.com", // or your self-hosted URL
      environment: "production",
      release: "1.0.0",
    },
  },
});
```
### Context Enrichment

Add user, session, and custom metadata to your traces:

```typescript
import { setLangfuseContext, getLangfuseContext } from "@juspay/neurolink";

// Set context with all available fields
await setLangfuseContext({
  userId: "user-123",
  sessionId: "session-456",
  conversationId: "conv-789",
  requestId: "req-abc",
  traceName: "customer-support-chat",
  metadata: {
    feature: "support",
    tier: "premium",
    region: "us-east-1",
  },
});

// Read the current context
const context = getLangfuseContext();
console.log(context?.conversationId);
```
### Custom Spans

Create your own spans for detailed tracing:

```typescript
import { getTracer, setLangfuseContext } from "@juspay/neurolink";

const tracer = getTracer("my-app");

await setLangfuseContext({ userId: "user-123" }, async () => {
  const span = tracer.startSpan("process-request");
  try {
    const result = await neurolink.generate("Hello");
    // Guard against missing usage data; setAttribute does not accept undefined
    span.setAttribute("tokens.total", result.usage?.totalTokens ?? 0);
    return result;
  } finally {
    span.end();
  }
});
```
### External TracerProvider Mode

If your application already has its own OpenTelemetry instrumentation, use external provider mode:

```typescript
import { NeuroLink, getSpanProcessors } from "@juspay/neurolink";
import { NodeSDK } from "@opentelemetry/sdk-node";
import { BatchSpanProcessor } from "@opentelemetry/sdk-trace-base";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";

// Initialize NeuroLink without creating its own TracerProvider
const neurolink = new NeuroLink({
  observability: {
    langfuse: {
      enabled: true,
      publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
      secretKey: process.env.LANGFUSE_SECRET_KEY!,
      useExternalTracerProvider: true, // don't create a TracerProvider
    },
  },
});

// Wrap your existing exporter in a BatchSpanProcessor
const exporter = new OTLPTraceExporter({
  url: "http://localhost:4318/v1/traces",
});

// Add NeuroLink's processors to your existing OTEL setup
const sdk = new NodeSDK({
  spanProcessors: [
    new BatchSpanProcessor(exporter),
    ...getSpanProcessors(), // [ContextEnricher, LangfuseSpanProcessor]
  ],
});

sdk.start();
```
### Vercel AI SDK Integration

NeuroLink automatically captures GenAI semantic-convention attributes from the Vercel AI SDK:

```typescript
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
import { setLangfuseContext } from "@juspay/neurolink";

await setLangfuseContext({ userId: "user-123" }, async () => {
  const result = await generateText({
    model: openai("gpt-4"),
    prompt: "Hello",
    experimental_telemetry: { isEnabled: true },
  });
  // Token usage and model info are captured automatically
  console.log(result.text);
});
```
## 🔧 Basic Setup

### Environment Configuration

```bash
# Enable telemetry
NEUROLINK_TELEMETRY_ENABLED=true

# OpenTelemetry endpoint (Jaeger, OTLP collector, etc.)
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318

# Service identification
OTEL_SERVICE_NAME=my-ai-application
OTEL_SERVICE_VERSION=1.0.0

# Optional: Resource attributes
OTEL_RESOURCE_ATTRIBUTES="service.name=my-ai-app,service.version=1.0.0,deployment.environment=production"

# Optional: Sampling configuration
OTEL_TRACES_SAMPLER=traceidratio
OTEL_TRACES_SAMPLER_ARG=0.1 # Sample 10% of traces
```
### Programmatic Initialization

```typescript
import { initializeTelemetry, getTelemetryStatus } from "@juspay/neurolink";

// Configuration is read from environment variables:
// NEUROLINK_TELEMETRY_ENABLED=true
// OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
// OTEL_SERVICE_NAME=my-ai-application
// OTEL_SERVICE_VERSION=1.0.0

// Initialize telemetry (returns Promise<boolean>)
const success = await initializeTelemetry();
if (success) {
  console.log("Telemetry initialized successfully");
}

// Check telemetry status
// Returns: { enabled: boolean, initialized: boolean, endpoint?: string, service?: string, version?: string }
const status = await getTelemetryStatus();
console.log("Telemetry enabled:", status.enabled);
console.log("Endpoint:", status.endpoint);
```
### Environment Variables

| Variable                      | Description              | Default        |
| ----------------------------- | ------------------------ | -------------- |
| `NEUROLINK_TELEMETRY_ENABLED` | Enable/disable telemetry | `false`        |
| `OTEL_EXPORTER_OTLP_ENDPOINT` | OTLP endpoint URL        | -              |
| `OTEL_SERVICE_NAME`           | Service name             | `neurolink-ai` |
| `OTEL_SERVICE_VERSION`        | Service version          | `3.0.1`        |
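The defaults above can be illustrated with a small resolver. This is a sketch, not NeuroLink's actual internals; the function name and config shape are illustrative:

```typescript
// Illustrative sketch: resolve telemetry settings from the environment,
// applying the defaults from the table above. Not NeuroLink's real code.
interface TelemetryConfig {
  enabled: boolean;
  endpoint?: string;
  service: string;
  version: string;
}

function resolveTelemetryConfig(
  env: Record<string, string | undefined>,
): TelemetryConfig {
  return {
    enabled: env.NEUROLINK_TELEMETRY_ENABLED === "true", // default: false
    endpoint: env.OTEL_EXPORTER_OTLP_ENDPOINT, // no default
    service: env.OTEL_SERVICE_NAME ?? "neurolink-ai",
    version: env.OTEL_SERVICE_VERSION ?? "3.0.1",
  };
}
```

In a Node.js application you would call it as `resolveTelemetryConfig(process.env)`.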
## 🐳 Production Deployment

### Docker Compose with Jaeger

```yaml
# docker-compose.yml
version: "3.8"

services:
  my-ai-app:
    build: .
    environment:
      - NEUROLINK_TELEMETRY_ENABLED=true
      - OTEL_EXPORTER_OTLP_ENDPOINT=http://jaeger:4318
      - OTEL_SERVICE_NAME=my-ai-application
      - OPENAI_API_KEY=${OPENAI_API_KEY}
    depends_on:
      - jaeger
    ports:
      - "3000:3000"

  jaeger:
    image: jaegertracing/all-in-one:latest
    ports:
      - "16686:16686" # Jaeger UI
      - "4318:4318" # OTLP HTTP
      - "4317:4317" # OTLP gRPC
    environment:
      - COLLECTOR_OTLP_ENABLED=true
      - LOG_LEVEL=debug

  # Optional: Prometheus for metrics
  prometheus:
    image: prom/prometheus:latest
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml

  # Optional: Grafana for dashboards
  grafana:
    image: grafana/grafana:latest
    ports:
      - "3001:3000"
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=admin
    volumes:
      - grafana-storage:/var/lib/grafana

volumes:
  grafana-storage:
```
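The Compose file mounts a `prometheus.yml`. A minimal scrape config might look like the sketch below; it assumes your app exposes a Prometheus endpoint at `/metrics` on port 3000, which depends on how you export metrics:

```yaml
# prometheus.yml (minimal sketch; adjust target and path to your exporter)
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: "my-ai-app"
    # Assumes the app serves Prometheus metrics at :3000/metrics
    static_configs:
      - targets: ["my-ai-app:3000"]
```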
## 📊 Key Metrics to Track

### AI Operation Metrics

- **Response Time**: Time to generate AI responses
- **Token Usage**: Input/output tokens by provider and model
- **Cost Tracking**: Estimated costs per operation
- **Error Rates**: Failed AI requests by provider
- **Provider Performance**: Success rates and latency by provider
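The cost-tracking metric boils down to multiplying token counts by per-token rates. A minimal sketch, with a made-up rate table (these are placeholders, not real provider pricing):

```typescript
// Illustrative cost estimator for the "Cost Tracking" metric above.
// The per-1K-token rates are placeholders, not real provider pricing.
const RATES_PER_1K_TOKENS: Record<string, { input: number; output: number }> = {
  "openai:gpt-4": { input: 0.03, output: 0.06 },
  "anthropic:claude-3": { input: 0.015, output: 0.075 },
};

function estimateCostUSD(
  model: string,
  inputTokens: number,
  outputTokens: number,
): number {
  const rate = RATES_PER_1K_TOKENS[model];
  if (!rate) return 0; // unknown model: report zero rather than guess
  return (inputTokens / 1000) * rate.input + (outputTokens / 1000) * rate.output;
}
```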
### Sample Prometheus Queries

```promql
# Average AI response time over 5 minutes
rate(neurolink_ai_duration_sum[5m]) / rate(neurolink_ai_duration_count[5m])

# Token usage by provider
sum by (provider) (rate(neurolink_tokens_total[5m]))

# Error rate percentage
rate(neurolink_errors_total[5m]) / rate(neurolink_requests_total[5m]) * 100

# Cost per hour by provider
sum by (provider) (rate(neurolink_cost_total[1h]))

# Active WebSocket connections
neurolink_websocket_connections_active
```
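The error-rate query can also drive alerting. A sketch of a Prometheus alerting rule, assuming the same `neurolink_*` metric names and a 5% threshold chosen for illustration:

```yaml
# alerts.yml (sketch; assumes the neurolink_* metrics shown above)
groups:
  - name: neurolink
    rules:
      - alert: NeuroLinkHighErrorRate
        expr: |
          rate(neurolink_errors_total[5m])
            / rate(neurolink_requests_total[5m]) * 100 > 5
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "AI error rate above 5% for 10 minutes"
```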
## 🚀 Getting Started Checklist

### ✅ Quick Setup (5 minutes)

1. **Enable Telemetry**

   ```bash
   export NEUROLINK_TELEMETRY_ENABLED=true
   export OTEL_SERVICE_NAME=my-ai-app
   ```

2. **Start Jaeger (Local Development)**

   ```bash
   docker run -d \
     -e COLLECTOR_OTLP_ENABLED=true \
     -p 16686:16686 \
     -p 4318:4318 \
     jaegertracing/all-in-one:latest
   ```

3. **Configure Endpoint**

   ```bash
   export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
   ```

4. **Initialize in Code**

   ```typescript
   import { initializeTelemetry } from "@juspay/neurolink";

   await initializeTelemetry();
   ```

5. **View Traces**

   - Open http://localhost:16686
   - Generate some AI requests
   - Search for traces in the Jaeger UI
## 📚 Additional Resources

- **API Reference** - Complete telemetry API documentation
- **Real-time Services** - WebSocket infrastructure guide
- **Performance Optimization** - Optimization strategies

Ready for enterprise-grade AI monitoring with NeuroLink! 📊