
"Pika 2.2 API Integration Guide: Build Video Generation Pipelines in 2026"
Pika 2.2 API Integration Guide: Build Video Generation Pipelines in 2026#
Pika 2.2 brought a wave of new features — scene effects, improved motion control, and better API stability. If you're building video generation into your product, here's how to integrate Pika's API properly, handle edge cases, and set up fallbacks for production reliability.
What's New in Pika 2.2#
Pika 2.2 introduced several features that matter for API users:
- Scene Effects — Apply cinematic effects (explosion, melt, inflate, crush) to objects in video
- Improved Motion Control — Better adherence to motion prompts, less random drift
- Faster Generation — 30-40% speed improvement over Pika 2.1
- Sound Effects — Auto-generated SFX matching video content
- Better Consistency — Reduced flickering and temporal artifacts
- Extended Duration — Up to 10 seconds per generation (up from 4)
Pika 2.2 API Basics#
Authentication#
```python
import requests

PIKA_API_KEY = "your-pika-api-key"
BASE_URL = "https://api.pika.art/v1"

headers = {
    "Authorization": f"Bearer {PIKA_API_KEY}",
    "Content-Type": "application/json",
}
```
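Hardcoding the key works for a quick test, but for anything shared or deployed, load it from the environment instead. A minimal sketch, assuming the variable is named `PIKA_API_KEY`:

```python
import os

def build_headers() -> dict:
    """Build request headers from the PIKA_API_KEY environment variable."""
    api_key = os.environ.get("PIKA_API_KEY")
    if not api_key:
        raise RuntimeError("PIKA_API_KEY is not set")
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
```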
Text-to-Video#
```python
def generate_video(prompt: str, duration: int = 4, resolution: str = "720p"):
    """Generate a video from a text prompt."""
    response = requests.post(
        f"{BASE_URL}/generate",
        headers=headers,
        json={
            "prompt": prompt,
            "duration": duration,
            "resolution": resolution,
            "aspect_ratio": "16:9",
            "motion_strength": 3,   # 1-5, higher = more motion
            "guidance_scale": 12,   # How closely to follow the prompt
            "negative_prompt": "blurry, low quality, distorted faces",
        },
    )
    return response.json()

# Example
result = generate_video(
    prompt="A futuristic city at night with flying cars and neon signs, "
           "rain falling, cyberpunk atmosphere, cinematic",
    duration=8,
    resolution="1080p",
)
task_id = result["data"]["task_id"]
```
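Chaining `result["data"]["task_id"]` raises an opaque `KeyError` when the request fails. A small defensive sketch — the `{"data": {"task_id": ...}}` shape follows the example above; real error payloads may differ:

```python
def extract_task_id(result: dict) -> str:
    """Pull the task id out of a generation response, or fail loudly."""
    data = result.get("data") or {}
    task_id = data.get("task_id")
    if not task_id:
        raise ValueError(f"No task_id in response: {result}")
    return task_id
```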
Image-to-Video#
```python
import base64

def image_to_video(image_path: str, prompt: str, duration: int = 4):
    """Animate a static image."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()
    response = requests.post(
        f"{BASE_URL}/generate/image-to-video",
        headers=headers,
        json={
            "image": f"data:image/jpeg;base64,{image_b64}",
            "prompt": prompt,
            "duration": duration,
            "motion_strength": 2,
            "camera_motion": "slow_zoom_in",
        },
    )
    return response.json()

# Animate a product photo
result = image_to_video(
    "product_hero.jpg",
    "Subtle rotation with floating particles, premium feel",
    duration=6,
)
```
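The snippet above hardcodes `image/jpeg` in the data URL, which is wrong for PNGs. A small helper sketch that picks the MIME type from the file extension, falling back to JPEG for unknown extensions (an assumption, not documented API behavior):

```python
import base64
import mimetypes

def image_to_data_url(image_path: str) -> str:
    """Encode an image file as a data URL with the right MIME type."""
    mime, _ = mimetypes.guess_type(image_path)
    if mime is None or not mime.startswith("image/"):
        mime = "image/jpeg"  # Fallback assumption for unknown extensions
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    return f"data:{mime};base64,{b64}"
```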
Scene Effects (Pika 2.2 Exclusive)#
The standout feature — apply physics-based effects to specific objects:
```python
def apply_scene_effect(image_path: str, effect: str, target: str | None = None):
    """Apply Pika 2.2 scene effects.

    Effects: explode, melt, inflate, crush, dissolve,
             freeze, burn, grow, shrink, levitate
    """
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()
    response = requests.post(
        f"{BASE_URL}/generate/effects",
        headers=headers,
        json={
            "image": f"data:image/jpeg;base64,{image_b64}",
            "effect": effect,
            "target_description": target,  # e.g., "the red car"
            "intensity": 0.7,              # 0.0-1.0
            "duration": 4,
        },
    )
    return response.json()

# Make a building explode
result = apply_scene_effect(
    "city_scene.jpg",
    effect="explode",
    target="the glass skyscraper in the center",
)

# Melt a chocolate bar
result = apply_scene_effect(
    "chocolate.jpg",
    effect="melt",
    target="the chocolate bar",
)
```
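A typo in the effect name still burns credits on a failed generation. A tiny guard sketch that validates against the set listed in the docstring above, before the request goes out:

```python
# The ten effect names from the apply_scene_effect docstring above
VALID_EFFECTS = {
    "explode", "melt", "inflate", "crush", "dissolve",
    "freeze", "burn", "grow", "shrink", "levitate",
}

def check_effect(effect: str) -> str:
    """Raise early on unknown effect names instead of wasting an API call."""
    if effect not in VALID_EFFECTS:
        raise ValueError(
            f"Unknown effect '{effect}'. Choose one of: {sorted(VALID_EFFECTS)}"
        )
    return effect
```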
Polling for Results#
```python
import time

def wait_for_video(task_id: str, timeout: int = 300):
    """Poll until the video is ready."""
    start = time.time()
    while time.time() - start < timeout:
        response = requests.get(
            f"{BASE_URL}/tasks/{task_id}",
            headers=headers,
        )
        data = response.json()["data"]
        if data["status"] == "completed":
            return {
                "video_url": data["video_url"],
                "thumbnail": data["thumbnail_url"],
                "duration": data["duration"],
                "cost_credits": data["credits_used"],
            }
        elif data["status"] == "failed":
            raise Exception(f"Generation failed: {data.get('error', 'Unknown')}")
        # Adaptive polling: start fast, slow down
        elapsed = time.time() - start
        sleep_time = 2 if elapsed < 30 else 5 if elapsed < 120 else 10
        time.sleep(sleep_time)
    raise TimeoutError(f"Video generation timed out after {timeout}s")

# Usage
video = wait_for_video(task_id)
print(f"Video ready: {video['video_url']}")
```
Pika 2.2 Pricing#
Credit-Based Pricing#
| Plan | Price | Credits/Month | ~Videos (720p, 4s) |
|---|---|---|---|
| Free | $0 | 150 | ~15 |
| Standard | $8/mo | 700 | ~70 |
| Pro | $28/mo | 2,000 | ~200 |
| Unlimited | $58/mo | Unlimited | Unlimited |
Per-Feature Credit Costs#
| Feature | Credits | Approx. Cost (Pro) |
|---|---|---|
| Text-to-Video (4s, 720p) | 10 | $0.14 |
| Text-to-Video (8s, 720p) | 18 | $0.25 |
| Text-to-Video (4s, 1080p) | 15 | $0.21 |
| Text-to-Video (8s, 1080p) | 25 | $0.35 |
| Image-to-Video (4s) | 8 | $0.11 |
| Scene Effects (4s) | 12 | $0.17 |
| Sound Effects | 5 | $0.07 |
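For budgeting, the table above can be folded into a small estimator. A sketch assuming the Pro rate of $28 per 2,000 credits ($0.014/credit); the image-to-video row lists no resolution, so 720p is assumed here:

```python
# Credit costs from the per-feature table above
CREDITS = {
    ("text", 4, "720p"): 10,
    ("text", 8, "720p"): 18,
    ("text", 4, "1080p"): 15,
    ("text", 8, "1080p"): 25,
    ("image", 4, "720p"): 8,   # resolution assumed
}

PRO_DOLLARS_PER_CREDIT = 28 / 2000  # $0.014 on the Pro plan

def estimate_cost(kind: str, duration: int, resolution: str = "720p") -> float:
    """Estimate the dollar cost of one generation on the Pro plan."""
    credits = CREDITS[(kind, duration, resolution)]
    return round(credits * PRO_DOLLARS_PER_CREDIT, 2)
```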
Via Crazyrouter#
| Feature | Direct Cost | Crazyrouter Cost | Savings |
|---|---|---|---|
| 720p, 8s video | $0.25 | $0.13-0.18 | ~28-48% |
| 1080p, 8s video | $0.35 | $0.18-0.25 | ~29-49% |
| Scene effect (4s) | $0.17 | $0.09-0.12 | ~29-47% |
Production Pipeline with Multi-Model Fallback#
Don't rely on a single video model. Here's a production-ready pipeline:
```python
import asyncio
from dataclasses import dataclass

import openai  # Crazyrouter exposes an OpenAI-compatible API

@dataclass
class VideoResult:
    url: str
    model: str
    cost: float
    duration: float

class VideoGenerationPipeline:
    """Multi-model video generation with automatic fallback."""

    def __init__(self, crazyrouter_key: str):
        self.api_key = crazyrouter_key
        self.base_url = "https://crazyrouter.com/v1"
        # Model priority: cheapest first, fall back to premium
        self.models = [
            {"name": "pika-2.2", "max_retries": 2},
            {"name": "seedance-2.0", "max_retries": 2},
            {"name": "kling-2.1", "max_retries": 1},
            {"name": "veo-3", "max_retries": 1},  # Premium fallback
        ]

    async def generate(self, prompt: str, duration: int = 8,
                       resolution: str = "720p") -> VideoResult:
        """Try each model in order until one succeeds."""
        last_error = None
        for model_config in self.models:
            model = model_config["name"]
            retries = model_config["max_retries"]
            for attempt in range(retries):
                try:
                    return await self._call_model(
                        model, prompt, duration, resolution
                    )
                except openai.RateLimitError:
                    break  # Move to the next model immediately
                except Exception as e:
                    last_error = e
                    if attempt < retries - 1:
                        await asyncio.sleep(2 ** attempt)
        raise Exception(f"All models failed. Last error: {last_error}")

    async def _call_model(self, model: str, prompt: str,
                          duration: int, resolution: str) -> VideoResult:
        """Call a specific model via Crazyrouter."""
        client = openai.AsyncOpenAI(
            api_key=self.api_key,
            base_url=self.base_url,
        )
        response = await client.chat.completions.create(
            model=model,
            messages=[{
                "role": "user",
                "content": f"Generate video: {prompt}",
            }],
        )
        return VideoResult(
            url=response.choices[0].message.content,
            model=model,
            cost=0.0,  # Tracked in the Crazyrouter dashboard
            duration=duration,
        )

# Usage
pipeline = VideoGenerationPipeline("sk-cr-your-key")

# Generates with Pika 2.2 first, falls back automatically
video = asyncio.run(pipeline.generate(
    "Product unboxing animation with confetti and sparkles",
    duration=8,
    resolution="720p",
))
print(f"Generated with {video.model}: {video.url}")
```
Pika 2.2 vs Competitors for Specific Use Cases#
| Use Case | Best Model | Why |
|---|---|---|
| Scene effects (explode, melt) | Pika 2.2 | Only model with built-in effects |
| Product showcases | Seedance 2.0 | Best image-to-video consistency |
| Cinematic quality | Veo 3 | Highest overall quality |
| Budget bulk generation | Kling 2.1 | Cheapest per video |
| Social media clips | Pika 2.2 | Fast, fun effects, good enough quality |
| Long-form (10s+) | Runway Gen-4 | Best temporal consistency |
Common Integration Pitfalls#
- Not handling async properly — Pika's API is async. Don't block your main thread waiting for results.
- Ignoring negative prompts — Always include "blurry, distorted, low quality" in negative prompts.
- Over-specifying motion — Let the model decide motion for natural results. Only specify camera motion.
- No fallback strategy — Pika has occasional capacity issues. Always have a backup model.
- Forgetting rate limits — Respect the 429 responses. Implement exponential backoff.
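The last pitfall in one sketch: retry with exponential backoff and jitter when a call signals a rate limit. `call` is any zero-argument function, and `RateLimited` is a hypothetical exception your HTTP wrapper raises on a 429 response:

```python
import random
import time

class RateLimited(Exception):
    """Raised by the caller's HTTP wrapper on a 429 response."""

def with_backoff(call, max_retries: int = 5, base_delay: float = 1.0):
    """Retry call() with exponential backoff plus jitter on rate limits."""
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimited:
            if attempt == max_retries - 1:
                raise
            # 1s, 2s, 4s, ... plus random jitter to avoid thundering herds
            time.sleep(base_delay * (2 ** attempt)
                       + random.uniform(0, base_delay))
```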
FAQ#
What's new in Pika 2.2 compared to 2.1?#
Scene effects (explode, melt, inflate, etc.), 30-40% faster generation, auto sound effects, extended 10-second duration, and improved motion consistency. The scene effects are the headline feature.
How much does Pika 2.2 API cost per video?#
A typical 8-second 720p video costs about $0.13-0.18 through Crazyrouter, versus $0.25 direct. The Pro plan at $28/month covers roughly 200 short (4-second, 720p) videos.
Can I use Pika 2.2 for commercial projects?#
Yes. All paid plans include commercial usage rights. The free tier is for personal/evaluation use only.
Is Pika 2.2 good enough for professional video production?#
For social media, marketing, and web content — absolutely. For broadcast or cinema, you'd want Veo 3 or Sora 2. Pika 2.2's sweet spot is fast, creative content with fun effects.
How do I access Pika 2.2 through Crazyrouter?#
Sign up at crazyrouter.com, get an API key, and use model: "pika-2.2" in your requests. Same OpenAI-compatible format, roughly 30-50% cheaper than direct pricing.
Summary#
Pika 2.2's scene effects make it uniquely useful for creative and marketing content. For production pipelines, pair it with Crazyrouter for cheaper pricing and automatic fallback to Seedance, Kling, or Veo 3 when Pika hits capacity limits. The multi-model approach gives you reliability without overpaying.


