SDKs & Code Examples
The Seedance 2.0 API uses a standard REST interface and can be called from any HTTP client — no SDK required. This page provides ready-to-copy code for all three generation modes.
Base URL
https://api.evolink.ai
All examples assume your API key is exported as an environment variable:
export EVOLINK_API_KEY="your-api-key-here"
Text-to-Video
Python
import os
import time
import requests
API_KEY = os.environ["EVOLINK_API_KEY"]
BASE_URL = "https://api.evolink.ai"
headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json"
}
# 1. Create task
response = requests.post(
    f"{BASE_URL}/v1/videos/generations",
    headers=headers,
    json={
        "model": "seedance-2.0-text-to-video",
        "prompt": "A cinematic sunset over the ocean, wide shot",
        "duration": 5,
        "quality": "720p",
        "aspect_ratio": "16:9"
    }
)
task_id = response.json()["id"]
print(f"Task created: {task_id}")
# 2. Poll
while True:
    result = requests.get(f"{BASE_URL}/v1/tasks/{task_id}", headers=headers).json()
    if result["status"] == "completed":
        print(f"Video URL: {result['results'][0]}")
        break
    if result["status"] == "failed":
        print("Generation failed")
        break
    print(f"Progress: {result['progress']}%")
    time.sleep(5)
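The fixed 5-second poll above can be hardened with a timeout and exponential backoff. A minimal sketch, not part of the API: `fetch_status` stands in for the GET request above, and the delay/timeout defaults are arbitrary choices.

```python
import time

def wait_for_task(fetch_status, timeout=600, initial_delay=2.0, max_delay=30.0):
    """Poll fetch_status() until the task completes, fails, or times out.

    fetch_status must return a dict shaped like the /v1/tasks/{id} response
    (at minimum a "status" key). The delay doubles after each poll, capped
    at max_delay, so long renders don't hammer the API.
    """
    deadline = time.monotonic() + timeout
    delay = initial_delay
    while time.monotonic() < deadline:
        result = fetch_status()
        if result["status"] in ("completed", "failed"):
            return result
        time.sleep(delay)
        delay = min(delay * 2, max_delay)
    raise TimeoutError(f"task not finished after {timeout}s")

# Usage with the requests session from the example above:
# result = wait_for_task(
#     lambda: requests.get(f"{BASE_URL}/v1/tasks/{task_id}", headers=headers).json()
# )
```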
Node.js
const API_KEY = process.env.EVOLINK_API_KEY;
const BASE_URL = "https://api.evolink.ai";
const headers = {
  "Authorization": `Bearer ${API_KEY}`,
  "Content-Type": "application/json"
};
// 1. Create task
const createRes = await fetch(`${BASE_URL}/v1/videos/generations`, {
  method: "POST",
  headers,
  body: JSON.stringify({
    model: "seedance-2.0-text-to-video",
    prompt: "A cinematic sunset over the ocean, wide shot",
    duration: 5,
    quality: "720p",
    aspect_ratio: "16:9"
  })
});
const { id: taskId } = await createRes.json();
console.log(`Task created: ${taskId}`);
// 2. Poll
while (true) {
  const res = await fetch(`${BASE_URL}/v1/tasks/${taskId}`, { headers });
  const result = await res.json();
  if (result.status === "completed") {
    console.log(`Video URL: ${result.results[0]}`);
    break;
  }
  if (result.status === "failed") {
    console.log("Generation failed");
    break;
  }
  console.log(`Progress: ${result.progress}%`);
  await new Promise(r => setTimeout(r, 5000));
}
cURL
# 1. Create task
curl -X POST https://api.evolink.ai/v1/videos/generations \
  -H "Authorization: Bearer $EVOLINK_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "seedance-2.0-text-to-video",
    "prompt": "A cinematic sunset over the ocean, wide shot",
    "duration": 5,
    "quality": "720p"
  }'
# Response: {"id": "task-unified-...", "status": "pending", ...}
# 2. Query status
curl https://api.evolink.ai/v1/tasks/TASK_ID \
  -H "Authorization: Bearer $EVOLINK_API_KEY"
Image-to-Video
First-frame mode (1 image)
response = requests.post(
    f"{BASE_URL}/v1/videos/generations",
    headers=headers,
    json={
        "model": "seedance-2.0-image-to-video",
        "prompt": "The model slowly turns, hair flowing gently in the wind",
        "image_urls": ["https://example.com/portrait.jpg"],
        "duration": 5,
        "quality": "720p",
        "aspect_ratio": "adaptive"
    }
)
First-last-frame transition (2 images)
response = requests.post(
    f"{BASE_URL}/v1/videos/generations",
    headers=headers,
    json={
        "model": "seedance-2.0-image-to-video",
        "prompt": "A smooth transition from sunrise to sunset over the same ocean",
        "image_urls": [
            "https://example.com/sunrise.jpg",
            "https://example.com/sunset.jpg"
        ],
        "duration": 6,
        "quality": "720p"
    }
)
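Once a task reports completed, the entries in results are downloadable URLs. A small sketch for saving one to disk; the streaming split into a separate save_chunks helper, the chunk size, and the timeout are choices of this example, not API requirements.

```python
import requests

def save_chunks(chunks, path):
    """Write an iterable of byte chunks to a file and return the path."""
    with open(path, "wb") as f:
        for chunk in chunks:
            f.write(chunk)
    return path

def download_video(url, path, chunk_size=1 << 20):
    """Stream a result URL to a local file without loading it into memory."""
    with requests.get(url, stream=True, timeout=60) as resp:
        resp.raise_for_status()
        return save_chunks(resp.iter_content(chunk_size=chunk_size), path)

# Usage after polling:
# download_video(result["results"][0], "output.mp4")
```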
Reference-to-Video
Use image, video, and audio reference assets in a single request:
response = requests.post(
    f"{BASE_URL}/v1/videos/generations",
    headers=headers,
    json={
        "model": "seedance-2.0-reference-to-video",
        "prompt": (
            "Replicate video 1's first-person perspective and pacing; "
            "use audio 1 as background music throughout. "
            "Scene: a young rider weaving through rain-soaked city streets "
            "at night, neon reflections on wet asphalt."
        ),
        "image_urls": ["https://example.com/rider-style.jpg"],
        "video_urls": ["https://example.com/pov-reference.mp4"],
        "audio_urls": ["https://example.com/synthwave-bgm.mp3"],
        "duration": 10,
        "quality": "720p",
        "aspect_ratio": "16:9"
    }
)
Note: reference-to-video has no @Image1 / @Video1-style tag syntax. Describe each asset's role in plain natural language, as in the prompt above.
Using Fast Models
Swap the model field from seedance-2.0-xxx to seedance-2.0-fast-xxx; all other parameters stay the same:
response = requests.post(
    f"{BASE_URL}/v1/videos/generations",
    headers=headers,
    json={
        "model": "seedance-2.0-fast-text-to-video",  # ← only change
        "prompt": "A cinematic sunset over the ocean, wide shot",
        "duration": 5,
        "quality": "720p"
    }
)
See Fast Models.
Webhooks Instead of Polling
response = requests.post(
    f"{BASE_URL}/v1/videos/generations",
    headers=headers,
    json={
        "model": "seedance-2.0-text-to-video",
        "prompt": "A cat playing piano",
        "duration": 5,
        "callback_url": "https://yourapp.com/api/video-callback"
    }
)
# Your webhook endpoint receives a POST with the same body shape as the task query endpoint
See Webhooks.
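On the receiving side, a framework-agnostic sketch: handle_callback takes the already-parsed JSON body, assumed to mirror the task query response as noted above, so it can sit behind Flask, FastAPI, or any other router. The disposition strings are illustrative, not prescribed by the API.

```python
import json

def handle_callback(payload: dict) -> str:
    """Process one webhook delivery; return a short disposition string.

    Assumes the body mirrors GET /v1/tasks/{id}: at least "status",
    with "results" present once status is "completed".
    """
    status = payload.get("status")
    if status == "completed":
        urls = payload.get("results", [])
        # e.g. enqueue a download job for each URL here
        return f"completed:{len(urls)}"
    if status == "failed":
        return "failed"
    return "ignored"  # any intermediate notifications

# Wire-up example with raw bytes, as a request body would arrive:
body = b'{"id": "task-unified-123", "status": "completed", "results": ["https://..."]}'
print(handle_callback(json.loads(body)))  # completed:1
```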
OpenAI-Style Conventions
The API follows OpenAI-style REST conventions (Bearer token, JSON body, unified response schema). Use any HTTP client library — no dedicated SDK required.