Crazyrouter Team
April 13, 2026

Luma Dream Machine API Guide: Build AI Video Apps with Ray 2 in 2026#

Luma's Dream Machine, powered by the Ray 2 model, has carved out a niche in AI video generation: excellent camera control, strong 3D understanding, and competitive pricing. If you're building video features into your app, here's everything you need to integrate it.

What Is Luma Dream Machine?#

Dream Machine is Luma AI's video generation platform. The underlying model, Ray 2, excels at:

  • Camera motion control — Pan, tilt, zoom, orbit, dolly with precision
  • 3D scene understanding — Maintains spatial consistency better than most competitors
  • Image-to-video — Animates photos with natural motion
  • Text-to-video — Generates clips from text descriptions
  • Fast generation — Typically 30-60 seconds for a 5-second clip
  • Flexible output — up to 1080p resolution across multiple aspect ratios

Getting Started with the API#

Authentication#

python
import requests

LUMA_API_KEY = "your-luma-api-key"
BASE_URL = "https://api.lumalabs.ai/dream-machine/v1"

headers = {
    "Authorization": f"Bearer {LUMA_API_KEY}",
    "Content-Type": "application/json"
}

Text-to-Video Generation#

python
def create_generation(prompt: str, aspect_ratio: str = "16:9",
                      loop: bool = False):
    """Create a new video generation"""
    response = requests.post(
        f"{BASE_URL}/generations",
        headers=headers,
        json={
            "prompt": prompt,
            "aspect_ratio": aspect_ratio,
            "loop": loop
        }
    )
    response.raise_for_status()
    return response.json()

# Generate a video
gen = create_generation(
    prompt="Aerial drone shot flying over a misty mountain range at sunrise, "
           "golden light breaking through clouds, cinematic",
    aspect_ratio="16:9"
)
generation_id = gen["id"]
print(f"Generation started: {generation_id}")

Image-to-Video#

python
def image_to_video(image_url: str, prompt: str, 
                   aspect_ratio: str = "16:9"):
    """Animate a static image"""
    response = requests.post(
        f"{BASE_URL}/generations",
        headers=headers,
        json={
            "prompt": prompt,
            "keyframes": {
                "frame0": {
                    "type": "image",
                    "url": image_url
                }
            },
            "aspect_ratio": aspect_ratio
        }
    )
    response.raise_for_status()
    return response.json()

# Animate a landscape photo
gen = image_to_video(
    image_url="https://example.com/landscape.jpg",
    prompt="Gentle wind moving through the grass, clouds drifting slowly, "
           "birds flying in the distance"
)

Camera Control — Ray 2's Killer Feature#

This is where Luma shines. Camera motion can be controlled precisely from the prompt alone:

python
# Orbit around a subject
gen = create_generation(
    prompt="A marble statue in a museum, camera slowly orbiting 360 degrees "
           "around the statue, dramatic lighting"
)

# Dolly zoom (Vertigo effect)
gen = create_generation(
    prompt="A person standing in a long hallway, dolly zoom effect, "
           "background stretching while subject stays same size"
)

# Crane shot
gen = create_generation(
    prompt="Starting from ground level looking at flowers, camera cranes up "
           "to reveal a vast mountain landscape behind"
)

# Tracking shot
gen = create_generation(
    prompt="Camera tracking alongside a runner on a beach, steady lateral "
           "movement, golden hour lighting"
)

Camera motion keywords that Ray 2 understands well:

  • orbit, rotate around, 360 degree
  • dolly in, dolly out, push in, pull back
  • pan left, pan right, tilt up, tilt down
  • crane up, crane down, aerial rising
  • tracking shot, follow, steadicam
  • zoom in, zoom out, rack focus
  • static camera, locked off, tripod
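
As a convenience, the keyword groups above can be folded into a small prompt-builder helper. This is an illustrative sketch, not part of any Luma SDK — the `CAMERA_MOVES` mapping and `build_prompt` function are our own names:

```python
# Hypothetical helper: combine a subject with one of the camera-motion
# phrases Ray 2 responds to well. Not part of any official Luma library.
CAMERA_MOVES = {
    "orbit": "camera slowly orbiting 360 degrees around the subject",
    "dolly_in": "camera dollying in toward the subject",
    "crane_up": "camera craning up to reveal the scene",
    "tracking": "tracking shot following the subject, steady lateral movement",
    "static": "static camera, locked off on a tripod",
}

def build_prompt(subject: str, move: str, style: str = "cinematic") -> str:
    """Compose subject + camera move + style into a single prompt string."""
    if move not in CAMERA_MOVES:
        raise ValueError(f"Unknown camera move: {move}")
    return f"{subject}, {CAMERA_MOVES[move]}, {style}"

print(build_prompt("A marble statue in a museum", "orbit"))
# A marble statue in a museum, camera slowly orbiting 360 degrees around the subject, cinematic
```

Centralizing the phrasing this way keeps camera language consistent across a batch of generations.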

Start + End Frame Control#

Define both the first and last frame for precise control:

python
def controlled_generation(start_image: str, end_image: str, 
                         prompt: str):
    """Generate video with defined start and end frames"""
    response = requests.post(
        f"{BASE_URL}/generations",
        headers=headers,
        json={
            "prompt": prompt,
            "keyframes": {
                "frame0": {
                    "type": "image",
                    "url": start_image
                },
                "frame1": {
                    "type": "image",
                    "url": end_image
                }
            }
        }
    )
    response.raise_for_status()
    return response.json()

# Morph between two product shots
gen = controlled_generation(
    start_image="https://example.com/product_angle1.jpg",
    end_image="https://example.com/product_angle2.jpg",
    prompt="Smooth camera rotation revealing the product from a new angle"
)

Polling and Downloading Results#

python
import time

def wait_and_download(generation_id: str, output_path: str,
                      timeout: int = 300):
    """Wait for generation and download the video"""
    start = time.time()
    
    while time.time() - start < timeout:
        response = requests.get(
            f"{BASE_URL}/generations/{generation_id}",
            headers=headers
        )
        data = response.json()
        
        state = data.get("state")
        
        if state == "completed":
            video_url = data["assets"]["video"]
            
            # Download video
            video_resp = requests.get(video_url)
            with open(output_path, "wb") as f:
                f.write(video_resp.content)
            
            print(f"Downloaded to {output_path}")
            return data
            
        elif state == "failed":
            failure = data.get("failure_reason", "Unknown error")
            raise Exception(f"Generation failed: {failure}")
        
        # Progress logging
        print(f"State: {state} ({int(time.time() - start)}s elapsed)")
        time.sleep(5)
    
    raise TimeoutError("Generation timed out")

# Usage
result = wait_and_download(generation_id, "output_video.mp4")
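
Because the heavy lifting happens server-side, several generations can run in parallel. Here's one sketch of a batch helper, parameterized over the create/poll functions defined above (`batch_generate` is our own illustrative name, not an API feature):

```python
from concurrent.futures import ThreadPoolExecutor

def batch_generate(prompts, create_fn, wait_fn, max_workers: int = 4):
    """Start one generation per prompt and wait for all of them.

    create_fn(prompt) -> dict with an "id" key (e.g. create_generation above)
    wait_fn(generation_id, output_path) -> result (e.g. wait_and_download above)
    """
    def run(indexed_prompt):
        i, prompt = indexed_prompt
        gen = create_fn(prompt)
        return wait_fn(gen["id"], f"clip_{i}.mp4")

    # Threads are fine here: each worker mostly sleeps on network I/O
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(run, enumerate(prompts)))
```

Usage with the functions above: `results = batch_generate(prompts, create_generation, wait_and_download)`. Keep `max_workers` modest to stay within your plan's concurrency limits.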

Node.js Integration#

javascript
const axios = require('axios');

const LUMA_API_KEY = 'your-luma-api-key';
const BASE_URL = 'https://api.lumalabs.ai/dream-machine/v1';

const headers = {
  'Authorization': `Bearer ${LUMA_API_KEY}`,
  'Content-Type': 'application/json'
};

async function generateVideo(prompt, options = {}) {
  // Create generation
  const { data: gen } = await axios.post(
    `${BASE_URL}/generations`,
    {
      prompt,
      aspect_ratio: options.aspectRatio || '16:9',
      loop: options.loop || false,
      ...(options.keyframes && { keyframes: options.keyframes })
    },
    { headers }
  );
  
  console.log(`Generation started: ${gen.id}`);
  
  // Poll for completion (5-minute timeout, matching the Python example)
  const deadline = Date.now() + 5 * 60 * 1000;
  while (Date.now() < deadline) {
    const { data: status } = await axios.get(
      `${BASE_URL}/generations/${gen.id}`,
      { headers }
    );
    
    if (status.state === 'completed') {
      return status.assets.video;
    } else if (status.state === 'failed') {
      throw new Error(`Failed: ${status.failure_reason}`);
    }
    
    await new Promise(r => setTimeout(r, 5000));
  }
  throw new Error('Generation timed out');
}

// Usage
(async () => {
  const videoUrl = await generateVideo(
    'A cozy cabin in the woods during snowfall, ' +
    'warm light from windows, camera slowly pushing in'
  );
  console.log(`Video URL: ${videoUrl}`);
})();

Via Crazyrouter (Unified API)#

Access Luma Ray 2 alongside every other video model through one API:

python
import openai

client = openai.OpenAI(
    api_key="sk-cr-your-key",
    base_url="https://crazyrouter.com/v1"
)

response = client.chat.completions.create(
    model="luma-ray-2",
    messages=[{
        "role": "user",
        "content": "Generate video: Camera orbiting around a crystal vase "
                   "on a pedestal, caustic light patterns, white background"
    }]
)

bash
# cURL via Crazyrouter
curl https://crazyrouter.com/v1/videos/generations \
  -H "Authorization: Bearer sk-cr-your-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "luma-ray-2",
    "prompt": "Timelapse of a flower blooming, macro lens, studio lighting",
    "aspect_ratio": "9:16"
  }'

Pricing Comparison#

Luma Direct Pricing#

| Plan | Price | Credits/Month | ~Videos/Month |
| --- | --- | --- | --- |
| Free | $0 | 30/day | ~30/day (low priority) |
| Standard | $24/mo | 1,200 | ~120 |
| Pro | $64/mo | 4,000 | ~400 |
| Premier | $154/mo | 12,000 | ~1,200 |
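
Dividing each plan's price by its approximate monthly video count gives the effective per-video cost (these figures are derived from the table above; Luma's actual credit cost per clip varies with resolution and settings):

```python
# Effective cost per video for each paid plan, derived from the
# plan table above (price / approximate videos per month).
PLANS = {
    "Standard": {"price": 24, "videos": 120},
    "Pro": {"price": 64, "videos": 400},
    "Premier": {"price": 154, "videos": 1200},
}

def cost_per_video(plan: str) -> float:
    """Dollars per video, rounded to a tenth of a cent."""
    p = PLANS[plan]
    return round(p["price"] / p["videos"], 3)

for name in PLANS:
    print(f"{name}: ${cost_per_video(name)}/video")
# Standard: $0.2/video
# Pro: $0.16/video
# Premier: $0.128/video
```

So heavier plans roughly match the low end of the per-video range quoted below, while lighter usage lands toward the high end.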

Per-Video Cost Comparison#

| Model | Cost per 5s Video | Camera Control | 3D Quality |
| --- | --- | --- | --- |
| Luma Ray 2 (direct) | $0.20-0.55 | ★★★★★ | ★★★★★ |
| Luma Ray 2 (Crazyrouter) | $0.10-0.28 | ★★★★★ | ★★★★★ |
| Google Veo 3 | $0.40-0.80 | ★★★★☆ | ★★★★☆ |
| Pika 2.2 | $0.14-0.35 | ★★★☆☆ | ★★★☆☆ |
| Seedance 2.0 | $0.15-0.40 | ★★★★☆ | ★★★★☆ |
| Runway Gen-4 | $0.30-0.70 | ★★★★☆ | ★★★★☆ |

Luma Ray 2 offers the best camera control on the market. If your use case involves product showcases, real estate tours, or any other content requiring precise camera movement, it's the clear winner.

Best Use Cases for Luma Ray 2#

1. Product Videos#

python
# E-commerce product showcase
gen = create_generation(
    prompt="Camera slowly orbiting around a luxury watch on a black velvet "
           "surface, dramatic side lighting, reflections on the crystal face, "
           "shallow depth of field"
)

2. Real Estate Virtual Tours#

python
# Interior walkthrough
gen = image_to_video(
    image_url="https://example.com/living_room.jpg",
    prompt="Camera slowly panning across the living room, revealing the "
           "kitchen area, natural light from large windows, smooth steadicam"
)

3. Social Media Content#

python
# Vertical format for TikTok/Reels
gen = create_generation(
    prompt="A coffee being poured in slow motion, steam rising, "
           "close-up macro shot, warm tones",
    aspect_ratio="9:16"
)

Production Tips#

  1. Be specific about camera motion — Ray 2 excels when you describe exact camera movements
  2. Use image-to-video for brand consistency — Start from your actual product photos
  3. Combine start + end frames for controlled transitions
  4. Keep prompts under 200 words — Longer prompts don't improve quality
  5. Use loop: true for seamless looping backgrounds and social content
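
Tips 4 and 5 are easy to enforce mechanically before a request ever reaches the API. This validator is an illustrative helper of our own, not part of Luma's SDK:

```python
def validate_prompt(prompt: str, max_words: int = 200) -> str:
    """Normalize whitespace and fail fast on over-long prompts (tip 4)."""
    cleaned = " ".join(prompt.split())
    word_count = len(cleaned.split())
    if word_count > max_words:
        raise ValueError(
            f"Prompt is {word_count} words; keep it under {max_words} — "
            "longer prompts don't improve quality"
        )
    return cleaned

# Build a request payload applying tips 4 and 5 together
payload = {
    "prompt": validate_prompt("A coffee being poured   in slow motion"),
    "loop": True,  # tip 5: seamless looping for backgrounds/social content
}
print(payload["prompt"])
# A coffee being poured in slow motion
```

Catching an over-long prompt locally is free; catching it after a generation has already spent credits is not.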

FAQ#

How does Luma Dream Machine compare to Runway Gen-4?#

Luma Ray 2 has better camera control and 3D understanding. Runway Gen-4 has better temporal consistency for longer clips and more natural human motion. For product videos and architectural content, Luma wins. For narrative content with people, Runway edges ahead.

Can I use Luma's API for commercial projects?#

Yes. All paid plans include commercial usage rights. Generated videos are yours to use in any commercial context.

What resolution does Luma Ray 2 support?#

Up to 1080p in 16:9, 9:16, 1:1, 4:3, and 3:4 aspect ratios. The sweet spot for quality-to-cost is 720p for iteration and 1080p for final renders.

How long does generation take?#

Typically 30-60 seconds for a 5-second clip on paid plans. Free tier has lower priority and may take 2-5 minutes. Batch processing during off-peak hours is fastest.

What's the cheapest way to use Luma Ray 2?#

Through Crazyrouter at roughly 50% off direct pricing. You also get automatic fallback to other video models if Luma hits capacity limits.
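
That fallback behavior can also be approximated client-side. Here's a sketch that walks a preference list and returns the first model that succeeds — `generate_with_fallback` and its arguments are our own illustrative names, standing in for whatever API call you use:

```python
def generate_with_fallback(prompt, models, generate_fn):
    """Try each model in order; return (model, result) from the first success.

    generate_fn(model, prompt) should raise on failure (e.g. capacity limits).
    """
    errors = {}
    for model in models:
        try:
            return model, generate_fn(model, prompt)
        except Exception as exc:  # in real code, catch the API's error types
            errors[model] = str(exc)
    raise RuntimeError(f"All models failed: {errors}")
```

Usage might look like `generate_with_fallback(prompt, ["luma-ray-2", "pika-2.2", "runway-gen-4"], call_api)`, ordering the list by your quality and cost preferences.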

Summary#

Luma Dream Machine with Ray 2 is the go-to choice for camera-controlled video generation. Its 3D understanding and precise motion control make it ideal for product videos, real estate, and any content where camera work matters. Access it through Crazyrouter for 50% savings and unified access to every other video AI model through one API key.
