Image Generation Streaming Guide
Overview
NeuroLink supports image generation through AI models such as Google Vertex AI's `gemini-3-pro-image-preview` and `gemini-2.5-flash-image`. This guide explains how image generation works in both `generate()` and `stream()` modes, covering CLI usage with automatic file saving, the technical architecture, and usage examples.
Table of Contents
- Architecture Overview
- Streaming Modes
- Image Generation Flow
- Usage Examples
- Implementation Details
- Troubleshooting
Architecture Overview
Key Components
┌─────────────────────────────────────────────────────────────┐
│ NeuroLink Client │
│ (neurolink.generate() or neurolink.stream()) │
└────────────────────────┬────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────┐
│ BaseProvider.stream() │
│ • Detects if model is image generation model │
│ • Routes to fake streaming for image models │
│ • Routes to real streaming for text models │
└────────────────────────┬────────────────────────────────────┘
│
┌───────────────┴───────────────┐
│ │
▼ ▼
┌──────────────────┐ ┌──────────────────────┐
│ Fake Streaming │ │ Real Streaming │
│ (Image Models) │ │ (Text Models) │
└────────┬─────────┘ └──────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────┐
│ Provider.executeImageGeneration() │
│ • Calls AI provider API with image generation config │
│ • Returns complete image as base64 │
└────────────────────────┬────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────┐
│ Yield Image Chunk │
│ { type: "image", imageOutput: { base64: "..." } } │
└─────────────────────────────────────────────────────────────┘
Image Generation Models
The following models are configured for image generation:
// Supported image generation models (Vertex AI and Google AI Studio)
const IMAGE_MODELS = [
"gemini-2.5-flash-image", // GA - Fast image generation (Nano Banana)
"gemini-3-pro-image-preview", // Preview - High-quality with 4K support (Nano Banana Pro)
];
Important Notes:
- Image generation is supported on Google Vertex AI and Google AI Studio providers
- The `gemini-3-pro-image-preview` model requires `location: "global"` configuration on Vertex AI
- Other models can use regional endpoints such as `us-east5` on Vertex AI
- Images are returned as base64-encoded PNG data
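For illustration, here is a minimal sketch of the substring check used to route these models to the image-generation path (the actual detection lives in `BaseProvider.stream()`, shown later in this guide):

```typescript
// Minimal sketch: route a model name to the image-generation path.
// IMAGE_MODELS mirrors the list above; the real check lives in BaseProvider.
const IMAGE_MODELS = ["gemini-2.5-flash-image", "gemini-3-pro-image-preview"];

function isImageGenerationModel(modelName: string): boolean {
  return IMAGE_MODELS.some((m) => modelName.includes(m));
}

isImageGenerationModel("gemini-2.5-flash-image"); // true  → fake streaming
isImageGenerationModel("gemini-2.0-flash"); // false → real streaming
```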
Streaming Modes
Real Streaming vs Fake Streaming
NeuroLink uses two different streaming approaches depending on the model capabilities:
Real Streaming (Text Models)
- Uses Vercel AI SDK's native `streamText()` function
- Streams tokens as they are generated by the AI model
- Provides true real-time streaming experience
- Used for: GPT-4, Claude, Gemini (text), etc.
Fake Streaming (Image Models)
- Calls `generate()` internally to get the complete result
- Yields the result progressively to simulate streaming
- Required because image generation models don't support token-by-token streaming
- Used for: `gemini-2.5-flash-image`, `gemini-3-pro-image-preview`, etc.
Why Fake Streaming?
Image generation models produce complete images, not incremental tokens. The fake streaming approach:
- Maintains API Consistency: Same `stream()` interface for all models
- Preserves User Experience: Clients can use the same code pattern
- Enables Progressive Enhancement: Can yield text chunks before final image
- Supports Analytics: Tracks generation time and token usage
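Because both paths expose the same interface, a single consumer loop handles text and image models alike. A minimal sketch (chunk shapes follow the type definitions later in this guide):

```typescript
import { NeuroLink } from "@juspay/neurolink";

// One loop serves both real (text) and fake (image) streaming:
// every chunk is either { content } or a typed chunk such as
// { type: "image", imageOutput }.
async function consumeStream(model: string, prompt: string) {
  const neurolink = new NeuroLink();
  const result = await neurolink.stream({
    input: { text: prompt },
    provider: "vertex",
    model,
  });

  for await (const chunk of result.stream) {
    if ("content" in chunk) {
      process.stdout.write(chunk.content); // text tokens (or simulated words)
    } else if (chunk.type === "image") {
      console.log(`\nImage received: ${chunk.imageOutput.base64.length} base64 chars`);
    }
  }
}
```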
Image Generation Flow
Step-by-Step Process
1. Client calls neurolink.stream()
↓
2. BaseProvider.stream() detects image model
↓
3. Routes to executeFakeStreaming()
↓
4. Calls this.generate() internally
↓
5. Provider.executeImageGeneration() is invoked
↓
6. AI API generates complete image
↓
7. Image returned as base64 string
↓
8. enhanceResult() preserves imageOutput field
↓
9. executeFakeStreaming() yields text chunks (if any)
↓
10. executeFakeStreaming() yields image chunk
{ type: "image", imageOutput: { base64: "..." } }
↓
11. Client receives and processes image chunk
Code Flow in BaseProvider
// src/lib/core/baseProvider.ts
async stream(options: StreamOptions): Promise<StreamResult> {
// Step 1: Detect if this is an image generation model
const isImageModel = IMAGE_GENERATION_MODELS.some((m) =>
this.modelName.includes(m),
);
// Step 2: Route to fake streaming for image models
// (analysisSchema is derived from options; derivation elided in this excerpt)
if (isImageModel) {
return await this.executeFakeStreaming(options, analysisSchema);
}
// Step 3: Use real streaming for text models
return await this.executeRealStreaming(options, analysisSchema);
}
private async executeFakeStreaming(
options: StreamOptions,
analysisSchema?: z.ZodSchema,
): Promise<StreamResult> {
// Step 4: Call generate() to get complete result
const result = await this.generate({
prompt: options.prompt,
// ... other options
});
// Step 5: Create async generator to yield chunks
const stream = async function* () {
// Yield text chunks if present
if (result.content) {
const words = result.content.split(" ");
for (const word of words) {
yield { content: word + " " };
await new Promise((resolve) => setTimeout(resolve, 50));
}
}
// Step 6: Yield image chunk if present
if (result?.imageOutput) {
yield {
type: "image" as const,
imageOutput: result.imageOutput,
};
}
};
return {
stream: stream(),
analytics: result.analytics,
evaluation: result.evaluation,
};
}
Usage Examples
Example 1: Basic Image Generation with generate()
import fs from "node:fs";
import { NeuroLink } from "@juspay/neurolink";
const neurolink = new NeuroLink();
// Generate an image using Vertex AI
const result = await neurolink.generate({
input: { text: "A serene mountain landscape at sunset" },
provider: "vertex",
model: "gemini-3-pro-image-preview",
});
// Access the generated image
if (result.imageOutput) {
const base64Image = result.imageOutput.base64;
console.log(`Image generated: ${base64Image.length} characters`);
// Save to file
const imageBuffer = Buffer.from(base64Image, "base64");
fs.writeFileSync("mountain.png", imageBuffer);
console.log("✅ Image saved to mountain.png");
}
// Result also contains descriptive text
console.log("Content:", result.content);
// Output: "Generated image using gemini-3-pro-image-preview (image/png)"
// Access analytics (if enabled)
if (result.analytics) {
console.log(`Generation time: ${result.analytics.responseTime}ms`);
console.log(`Tokens used: ${result.analytics.usage.total}`);
}
Example 2: Image Generation with Streaming
import fs from "node:fs";
import { NeuroLink } from "@juspay/neurolink";
const neurolink = new NeuroLink();
// Stream image generation (uses fake streaming for image models)
const result = await neurolink.stream({
input: { text: "A futuristic city with flying cars" },
provider: "vertex",
model: "gemini-2.5-flash-image",
});
// Process stream chunks
for await (const chunk of result.stream) {
if ("content" in chunk) {
// Text chunk (description or metadata)
process.stdout.write(chunk.content);
} else if (chunk.type === "image") {
// Image chunk - yielded after text chunks complete
console.log("\n✅ Image received!");
const base64Image = chunk.imageOutput.base64;
// Save the image
const imageBuffer = Buffer.from(base64Image, "base64");
fs.writeFileSync("futuristic-city.png", imageBuffer);
console.log(`Image size: ${imageBuffer.length} bytes`);
console.log(`Saved to: futuristic-city.png`);
}
}
// Access analytics after streaming completes
if (result.analytics) {
console.log(`\nTotal generation time: ${result.analytics.responseTime}ms`);
}
Note: Image generation uses "fake streaming" - the complete image is generated first, then yielded as a single chunk. This maintains API consistency with text streaming.
Example 3: CLI Usage
# Basic image generation (saves to default path: generated-images/image-<timestamp>.png)
npx neurolink generate "A beautiful sunset over the ocean" \
--provider vertex \
--model gemini-3-pro-image-preview
# Output:
# 📸 Generated image saved to: generated-images/image-2025-12-16T11-50-42-209Z.png
# Image size: 1856.34 KB
# Generated image using gemini-3-pro-image-preview (image/png)
# Generate with custom output path
npx neurolink generate "Mountain landscape at sunset" \
--provider vertex \
--model gemini-2.5-flash-image \
--imageOutput ./my-images/mountain.png
# Output:
# 📸 Generated image saved to: ./my-images/mountain.png
# Image size: 2048.67 KB
# Generated image using gemini-2.5-flash-image (image/png)
# Generate with analytics
npx neurolink generate "Futuristic city with flying cars" \
--provider vertex \
--model gemini-2.5-flash-image \
--imageOutput ./images/city.png \
--enable-analytics
# Use different models
npx neurolink generate "Serene forest scene" \
--provider vertex \
--model gemini-3-pro-image-preview # Best quality, requires 'global' location
npx neurolink generate "Quick sketch of a cat" \
--provider vertex \
--model gemini-2.5-flash-image # Faster generation
CLI Options:
- `--imageOutput <path>`: Custom path for the generated image (default: `generated-images/image-<timestamp>.png`)
- `--provider vertex` or `--provider google-ai`: Both Vertex AI and Google AI Studio support image generation
- `--model <model-name>`: Image generation model to use
- `--enable-analytics`: Include generation metrics
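The default save behavior can be sketched roughly as follows. `saveGeneratedImage` is a hypothetical helper, not a NeuroLink export; the timestamped filename format matches the CLI output shown above:

```typescript
import fs from "node:fs";
import path from "node:path";

// Hypothetical helper approximating the CLI's save behavior: write base64
// image data to the --imageOutput path, or to a timestamped file under
// generated-images/ when no path is given.
function saveGeneratedImage(base64: string, outPath?: string): string {
  const timestamp = new Date().toISOString().replace(/[:.]/g, "-");
  const target = outPath ?? path.join("generated-images", `image-${timestamp}.png`);
  fs.mkdirSync(path.dirname(target), { recursive: true });
  fs.writeFileSync(target, Buffer.from(base64, "base64"));
  return target;
}
```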
Example 4: Detecting Image Chunks in Stream
import fs from "node:fs";
import { NeuroLink } from "@juspay/neurolink";
const neurolink = new NeuroLink();
const result = await neurolink.stream({
input: { text: "A magical forest with glowing mushrooms" },
provider: "vertex",
model: "gemini-2.5-flash-image",
});
let textContent = "";
let imageData: string | null = null;
for await (const chunk of result.stream) {
// Type guard for text chunks
if ("content" in chunk) {
textContent += chunk.content;
}
// Type guard for image chunks
if ("type" in chunk && chunk.type === "image") {
imageData = chunk.imageOutput.base64;
console.log("Image chunk received!");
}
}
console.log("Text description:", textContent);
console.log("Image available:", !!imageData);
if (imageData) {
// Process the image
const imageBuffer = Buffer.from(imageData, "base64");
fs.writeFileSync("magical-forest.png", imageBuffer);
console.log(`✅ Saved ${(imageBuffer.length / 1024).toFixed(2)} KB image`);
}
Example 5: Error Handling
import fs from "node:fs";
import { NeuroLink } from "@juspay/neurolink";
const neurolink = new NeuroLink();
try {
const result = await neurolink.stream({
input: { text: "A dragon flying over mountains" },
provider: "vertex",
model: "gemini-3-pro-image-preview",
});
let imageReceived = false;
for await (const chunk of result.stream) {
if ("type" in chunk && chunk.type === "image") {
imageReceived = true;
const base64Image = chunk.imageOutput.base64;
// Validate image data
if (!base64Image || base64Image.length === 0) {
throw new Error("Empty image data received");
}
// Validate base64 format (padding '=' only at end, max 2 chars)
if (!/^[A-Za-z0-9+/]*={0,2}$/.test(base64Image)) {
throw new Error("Invalid base64 format");
}
// Save image
const imageBuffer = Buffer.from(base64Image, "base64");
// Validate minimum size (1KB)
if (imageBuffer.length < 1024) {
throw new Error("Image data too small");
}
fs.writeFileSync("dragon.png", imageBuffer);
console.log(
`✅ Image saved successfully (${(imageBuffer.length / 1024).toFixed(2)} KB)`,
);
}
}
if (!imageReceived) {
console.warn("⚠️ No image was generated");
}
} catch (error) {
console.error("❌ Image generation failed:", error.message);
// Handle specific error cases
if (error.message.includes("credentials")) {
console.error("Please check your GOOGLE_APPLICATION_CREDENTIALS");
} else if (error.message.includes("quota")) {
console.error("API quota exceeded");
} else if (error.message.includes("not found")) {
console.error("Model not available in selected region");
}
}
Example 6: Web Application Integration
// Express.js endpoint for image generation
import express from "express";
import { NeuroLink } from "@juspay/neurolink";
const app = express();
app.use(express.json());
app.post("/api/generate-image", async (req, res) => {
const { prompt } = req.body;
if (!prompt) {
return res.status(400).json({ error: "Prompt is required" });
}
const neurolink = new NeuroLink();
try {
const result = await neurolink.stream({
input: { text: prompt },
provider: "vertex",
model: "gemini-2.5-flash-image",
});
// Set headers for streaming response
res.setHeader("Content-Type", "text/event-stream");
res.setHeader("Cache-Control", "no-cache");
res.setHeader("Connection", "keep-alive");
for await (const chunk of result.stream) {
if ("content" in chunk) {
// Send text chunks as SSE
res.write(
`data: ${JSON.stringify({ type: "text", content: chunk.content })}\n\n`,
);
} else if (chunk.type === "image") {
// Send image chunk as SSE
res.write(
`data: ${JSON.stringify({
type: "image",
base64: chunk.imageOutput.base64,
size: chunk.imageOutput.base64.length,
})}\n\n`,
);
}
}
res.write("data: [DONE]\n\n");
res.end();
} catch (error) {
res.status(500).json({
error: error.message,
details: "Image generation failed",
});
}
});
// REST endpoint (non-streaming)
app.post("/api/generate-image-sync", async (req, res) => {
const { prompt } = req.body;
const neurolink = new NeuroLink();
try {
const result = await neurolink.generate({
input: { text: prompt },
provider: "vertex",
model: "gemini-2.5-flash-image",
});
if (result.imageOutput) {
res.json({
success: true,
base64: result.imageOutput.base64,
content: result.content,
size: Buffer.from(result.imageOutput.base64, "base64").length,
});
} else {
res.status(500).json({ error: "No image generated" });
}
} catch (error) {
res.status(500).json({ error: error.message });
}
});
app.listen(3000, () => {
console.log("Server running on http://localhost:3000");
});
Implementation Details
Provider-Specific Implementation
Vertex AI provider implements image generation through the REST API:
// src/lib/providers/googleVertex.ts
async executeImageGeneration(
options: TextGenerationOptions,
): Promise<EnhancedGenerateResult> {
const { GoogleAuth } = await import("google-auth-library");
const startTime = Date.now(); // used by enhanceResult() for timing analytics
// Authenticate with Google Cloud
const auth = new GoogleAuth({
scopes: ["https://www.googleapis.com/auth/cloud-platform"],
});
const client = await auth.getClient();
const accessToken = await client.getAccessToken();
// Determine location based on model
const location = this.modelName.includes("gemini-3-pro-image")
? "global" // gemini-3-pro-image-preview requires global
: this.location; // Other models can use regional endpoints
// Build request with response modalities for image generation
const requestBody = {
contents: [{
role: "user",
parts: [{ text: options.prompt }],
}],
generation_config: {
response_modalities: ["TEXT", "IMAGE"], // CRITICAL for image generation
temperature: options.temperature || 0.7,
candidate_count: 1,
},
};
// Call Vertex AI API
const url = `https://${location}-aiplatform.googleapis.com/v1/projects/${this.projectId}/locations/${location}/publishers/google/models/${this.modelName}:generateContent`;
const response = await fetch(url, {
method: "POST",
headers: {
Authorization: `Bearer ${accessToken.token}`,
"Content-Type": "application/json",
},
body: JSON.stringify(requestBody),
});
const data = await response.json();
// Extract image from response
const candidate = data.candidates?.[0];
const imagePart = candidate?.content?.parts?.find(
(part) =>
(part.inlineData || part.inline_data) &&
((part.inlineData?.mimeType || part.inline_data?.mime_type)?.startsWith("image/"))
);
if (!imagePart) {
throw new Error("No image generated in response");
}
// Extract base64 data (handle both camelCase and snake_case)
const imageData = imagePart.inlineData?.data || imagePart.inline_data?.data;
const mimeType = imagePart.inlineData?.mimeType || imagePart.inline_data?.mime_type || "image/png";
// Return result with imageOutput
const result: EnhancedGenerateResult = {
content: `Generated image using ${this.modelName} (${mimeType})`,
imageOutput: {
base64: imageData,
},
provider: this.providerName,
model: this.modelName,
usage: {
input: this.estimateTokenCount(options.prompt),
output: 0,
total: this.estimateTokenCount(options.prompt),
},
};
// Enhance with analytics/evaluation if enabled
return await this.enhanceResult(result, options, startTime);
}
Key Implementation Details:
- Authentication: Uses Google Cloud service account credentials
- Location Handling: Automatically selects `global` for `gemini-3-pro-image-preview`
- Response Modalities: Sets `["TEXT", "IMAGE"]` to enable image generation
- Base64 Extraction: Handles both `inlineData` and `inline_data` formats
- Result Enhancement: Preserves `imageOutput` through the analytics pipeline
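For reference, the part of the `generateContent` response that the extraction logic above depends on can be described with an approximate TypeScript type (inferred from that code, not an official Vertex AI type definition; fields may arrive in either casing):

```typescript
// Approximate shape of the response fields used above (inferred from the
// extraction logic; not an official Vertex AI type).
type GenerateContentImageResponse = {
  candidates?: Array<{
    content?: {
      parts?: Array<{
        text?: string;
        inlineData?: { mimeType?: string; data?: string }; // camelCase variant
        inline_data?: { mime_type?: string; data?: string }; // snake_case variant
      }>;
    };
  }>;
};
```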
Type Definitions
// src/lib/types/streamTypes.ts
export type StreamResult = {
stream: AsyncIterable<
| { content: string }
| { type: "audio"; audio: AudioChunk }
| { type: "image"; imageOutput: { base64: string } }
>;
// Provider information
provider?: string;
model?: string;
// Usage information
usage?: TokenUsage;
finishReason?: string;
// Tool integration
toolCalls?: ToolCall[];
toolResults?: ToolResult[];
toolsUsed?: string[];
// Stream metadata
metadata?: {
streamId?: string;
startTime?: number;
totalChunks?: number;
responseTime?: number;
};
// Analytics and evaluation (available after stream completion)
analytics?: AnalyticsData | Promise<AnalyticsData>;
evaluation?: EvaluationData | Promise<EvaluationData>;
};
// src/lib/types/generateTypes.ts
export type GenerateResult = {
content: string;
outputs?: { text: string }; // Future extensible for multi-modal
audio?: TTSResult;
imageOutput?: { base64: string } | null;
// Provider information
provider?: string;
model?: string;
// Usage and performance
usage?: TokenUsage;
responseTime?: number;
// Tool integration
toolCalls?: Array<{ toolCallId: string; toolName: string; args: object }>;
toolResults?: unknown[];
toolsUsed?: string[];
enhancedWithTools?: boolean;
// Analytics and evaluation
analytics?: AnalyticsData;
evaluation?: EvaluationData;
};
// Note: CLI adds savedPath to imageOutput when saving images locally
// CLI-specific type (not part of core SDK):
// imageOutput?: { base64: string; savedPath?: string } | null;
// EnhancedGenerateResult extends GenerateResult with optional analytics/evaluation
export type EnhancedGenerateResult = GenerateResult & {
analytics?: AnalyticsData;
evaluation?: EvaluationData;
};
// CLI-specific types
export type GenerateCommandArgs = {
input: string;
provider?: string;
model?: string;
imageOutput?: string; // Custom path for generated images
// ... other options
};
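A reusable type guard can make the chunk narrowing explicit. This is a hypothetical helper (not a NeuroLink export), built on the stream union above:

```typescript
// Hypothetical type guard for stream chunks, based on the StreamResult union.
type StreamChunk =
  | { content: string }
  | { type: "audio"; audio: unknown } // AudioChunk elided here
  | { type: "image"; imageOutput: { base64: string } };

function isImageChunk(
  chunk: StreamChunk,
): chunk is Extract<StreamChunk, { type: "image" }> {
  return "type" in chunk && chunk.type === "image";
}

// Usage: if (isImageChunk(chunk)) { /* chunk.imageOutput.base64 is available */ }
```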
Analytics Integration
The `enhanceResult()` method in `BaseProvider` preserves the `imageOutput` field while adding analytics:
// src/lib/core/baseProvider.ts
protected async enhanceResult(
result: EnhancedGenerateResult,
options: TextGenerationOptions,
startTime: number,
): Promise<EnhancedGenerateResult> {
const responseTime = Date.now() - startTime;
// CRITICAL: Store imageOutput separately to ensure preservation
const imageOutput = result.imageOutput;
let enhancedResult = { ...result };
// Add analytics if enabled
if (options.enableAnalytics) {
try {
const analytics = await this.createAnalytics(result, responseTime, options);
// Preserve ALL fields including imageOutput when adding analytics
enhancedResult = { ...enhancedResult, analytics, imageOutput };
} catch (error) {
logger.warn(`Analytics creation failed: ${error.message}`);
}
}
// Add evaluation if enabled
if (options.enableEvaluation) {
try {
const evaluation = await this.createEvaluation(result, options);
// Preserve ALL fields including imageOutput when adding evaluation
enhancedResult = { ...enhancedResult, evaluation, imageOutput };
} catch (error) {
logger.warn(`Evaluation creation failed: ${error.message}`);
}
}
// CRITICAL FIX: Always restore imageOutput if it existed
if (imageOutput) {
enhancedResult.imageOutput = imageOutput;
}
return enhancedResult;
}
Key Points:
- `imageOutput` is explicitly preserved through the analytics/evaluation pipeline
- The spread operator ensures all existing fields are maintained
- Double-check restoration at the end prevents accidental loss (illustrated below)
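To see why the explicit restore matters, consider a contrived example (not NeuroLink code) where an enrichment step rebuilds the result object and silently drops a field:

```typescript
// Contrived illustration: a rebuild that forgets a field.
const original = { content: "Generated image...", imageOutput: { base64: "iVBORw0..." } };

// An enrichment step that constructs a fresh object can lose imageOutput:
const enriched = { content: original.content, analytics: { responseTime: 1200 } };

// Explicitly restoring the saved field guards against that:
const safe = { ...enriched, imageOutput: original.imageOutput };
console.log("imageOutput preserved:", !!safe.imageOutput); // true
```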
Troubleshooting
Common Issues
1. No Image Chunk Received
Symptom: Stream completes but no image chunk is yielded.
Possible Causes:
- Model is not an image generation model
- Wrong provider (only Vertex AI and Google AI Studio support image generation)
- API credentials are invalid or missing
- Model not available in selected region
Solution:
// Verify you're using Vertex AI provider
const result = await neurolink.generate({
input: { text: "Generate an image of a sunset" },
provider: "vertex", // ✅ Required
model: "gemini-3-pro-image-preview", // ✅ Valid image model
});
// NOT these:
// provider: "openai" // ❌ Doesn't support image generation
// provider: "anthropic" // ❌ Doesn't support image generation
// Note: "google-ai" also supports image generation with gemini-2.5-flash-image
// Verify credentials
console.log(
"GOOGLE_APPLICATION_CREDENTIALS:",
process.env.GOOGLE_APPLICATION_CREDENTIALS,
);
console.log("GOOGLE_VERTEX_PROJECT:", process.env.GOOGLE_VERTEX_PROJECT);
2. Empty Base64 String
Symptom: Image chunk received but base64 field is empty.
Possible Causes:
- API returned error but didn't throw
- Response format changed
- Network issue during transmission
Solution:
for await (const chunk of result.stream) {
if ("type" in chunk && chunk.type === "image") {
if (!chunk.imageOutput.base64) {
console.error("Empty image data received");
console.error("Full chunk:", JSON.stringify(chunk, null, 2));
} else {
console.log(`Image data length: ${chunk.imageOutput.base64.length}`);
}
}
}
3. Model Not Found Error
Symptom: `Error: models/gemini-3-pro-image-preview is not found for API version v1`
Cause: `gemini-3-pro-image-preview` requires `location: "global"`, but a regional endpoint is being used.
Solution:
// The provider automatically handles location selection:
// - gemini-3-pro-image-preview → uses "global"
// - Other models → uses configured region (e.g., "us-east5")
// Set region in environment variable
process.env.GOOGLE_VERTEX_LOCATION = "us-east5"; // For non-preview models
// Or pass in options
const result = await neurolink.generate({
input: { text: "Generate image" },
provider: "vertex",
model: "gemini-2.5-flash-image", // Uses regional endpoint
region: "us-east5",
});
4. Large Image Timeout
Symptom: Generation times out for large/complex images.
Solution:
const result = await neurolink.stream({
input: { text: "A detailed cityscape with many buildings" },
provider: "vertex",
model: "gemini-2.5-flash-image",
timeout: 60000, // Increase timeout to 60 seconds
});
5. CLI Image Not Saved
Symptom: CLI shows success but no file created.
Possible Causes:
- `imageOutput` option not passed to `processOptions()`
- Directory permissions issue
- Disk space full
Solution:
# Check default location
ls -lh generated-images/
# Use custom path with explicit directory
npx neurolink generate "test" \
--provider vertex \
--model gemini-2.5-flash-image \
--imageOutput ./my-images/test.png
# Check file was created
ls -lh ./my-images/test.png
# Verify directory permissions
ls -ld generated-images/
Debug Mode
Enable debug logging to troubleshoot issues:
# Set environment variable
export DEBUG=neurolink:*
# Or use CLI flag
npx neurolink generate "test image" \
--provider vertex \
--model gemini-2.5-flash-image \
--debug
# Debug output will show:
# - Provider selection
# - Model configuration
# - API request details
# - Response parsing
# - Image data extraction
Testing Image Generation
Quick test to verify image generation works:
# Test with default path
npx neurolink generate "A simple red circle" \
--provider vertex \
--model gemini-2.5-flash-image
# Expected output:
# 📸 Generated image saved to: generated-images/image-2025-12-16T11-50-42-209Z.png
# Image size: 234.56 KB
# Generated image using gemini-2.5-flash-image (image/png)
# Verify file exists
ls -lh generated-images/image-*.png | tail -1
# Test with custom path
npx neurolink generate "A simple blue square" \
--provider vertex \
--model gemini-2.5-flash-image \
--imageOutput ./test-output/square.png
# Expected output:
# 📸 Generated image saved to: ./test-output/square.png
# Image size: 198.34 KB
# Generated image using gemini-2.5-flash-image (image/png)
# Verify file
file ./test-output/square.png
# Output: ./test-output/square.png: PNG image data, 1024 x 1024, 8-bit/color RGB
Best Practices
1. Always Check for Image Chunks
let hasImage = false;
for await (const chunk of result.stream) {
if ("type" in chunk && chunk.type === "image") {
hasImage = true;
// Process image
}
}
if (!hasImage) {
console.warn("No image was generated");
}
2. Validate Base64 Data
if ("type" in chunk && chunk.type === "image") {
const base64 = chunk.imageOutput.base64;
// Validate it's valid base64 (padding '=' only at end, max 2 chars)
if (!/^[A-Za-z0-9+/]*={0,2}$/.test(base64)) {
throw new Error("Invalid base64 data");
}
// Validate minimum size (e.g., 1KB)
if (base64.length < 1000) {
throw new Error("Image data too small");
}
}
3. Handle Both Text and Image
let description = "";
let imageData: string | null = null;
for await (const chunk of result.stream) {
if ("content" in chunk) {
description += chunk.content;
} else if (chunk.type === "image") {
imageData = chunk.imageOutput.base64;
}
}
// Use both description and image
console.log("Description:", description);
if (imageData) {
saveImage(imageData);
}
4. Use Analytics for Monitoring
const result = await neurolink.stream({
input: { text: "Generate image" },
enableAnalytics: true,
});
// Monitor generation performance
if (result.analytics) {
console.log(`Generation time: ${result.analytics.responseTime}ms`);
console.log(`Cost: $${result.analytics.cost}`);
// Alert if too slow
if (result.analytics.responseTime > 30000) {
console.warn("Image generation took longer than 30 seconds");
}
}
Conclusion
NeuroLink's image generation streaming provides a unified interface for both text and image generation. The fake streaming approach ensures consistency while maintaining the benefits of streaming APIs. By following the patterns and examples in this guide, you can effectively integrate image generation into your applications.
For more information: