Quick Start
Go from zero to generating an AI video in about 5 minutes.
Critical upfront: Seedance 2.0 has 6 models, not 1. There is no "automatic mode detection" — you must pick the correct model value based on your input type. See the full matrix in Models Overview.
Prerequisites
- An EvoLink account (Sign up free)
- An API key from your API Key Management Page
- Any HTTP client (cURL, Python, Node.js, etc.)
Base URL
https://api.evolink.ai
Step 1: Save Your API Key
```bash
export EVOLINK_API_KEY="your-api-key-here"
```
Step 2: Pick the Right Model
Based on what you have:
| Your input | Model to use |
|---|---|
| Text prompt only | seedance-2.0-text-to-video |
| 1–2 reference images | seedance-2.0-image-to-video |
| Images + videos + audio (multimodal) | seedance-2.0-reference-to-video |
Need faster generation at lower cost? Insert fast- before the mode name, e.g. seedance-2.0-fast-text-to-video. See Fast Models.
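The selection rules above can be sketched as a small helper. This is a hypothetical convenience function (`pick_model` is not part of the EvoLink SDK); it just encodes the table and the fast- naming pattern:

```python
def pick_model(has_images: bool = False, has_video_or_audio: bool = False,
               fast: bool = False) -> str:
    """Map the inputs you have to a Seedance 2.0 model ID (hypothetical helper)."""
    if has_video_or_audio:
        base = "seedance-2.0-reference-to-video"   # multimodal references
    elif has_images:
        base = "seedance-2.0-image-to-video"        # 1-2 reference images
    else:
        base = "seedance-2.0-text-to-video"         # text prompt only
    # The fast- token sits between "seedance-2.0-" and the mode name.
    return base.replace("seedance-2.0-", "seedance-2.0-fast-") if fast else base

print(pick_model())                                # seedance-2.0-text-to-video
print(pick_model(has_images=True, fast=True))      # seedance-2.0-fast-image-to-video
```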
Step 3: Make Your First Request
Here's a text-to-video example:
```python
import os
import requests

response = requests.post(
    "https://api.evolink.ai/v1/videos/generations",
    headers={
        "Authorization": f"Bearer {os.environ['EVOLINK_API_KEY']}",
        "Content-Type": "application/json",
    },
    json={
        "model": "seedance-2.0-text-to-video",
        "prompt": "A golden retriever running through a sunlit meadow, cinematic slow motion",
        "duration": 5,
        "quality": "720p",
        "aspect_ratio": "16:9",
    },
)

task = response.json()
print(f"Task ID: {task['id']}")
print(f"Status: {task['status']}")
```
Response (HTTP 200)
```json
{
  "id": "task-unified-1774857405-abc123",
  "object": "video.generation.task",
  "created": 1774857405,
  "model": "seedance-2.0-text-to-video",
  "status": "pending",
  "progress": 0,
  "type": "video",
  "task_info": {
    "can_cancel": true,
    "estimated_time": 165,
    "video_duration": 5
  },
  "usage": {
    "billing_rule": "per_second",
    "credits_reserved": 50,
    "user_group": "default"
  }
}
```
Note that billing_rule is always per_second — longer duration values cost more.
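For a rough cost estimate before submitting a job: the sample response above reserves 50 credits for a 5-second clip, which implies 10 credits per second — assuming the reserve scales linearly with duration, which is an assumption derived from that one sample, not a documented rate:

```python
# Assumption: credits scale linearly with duration (50 credits / 5 s in the
# sample response above). Check your plan's actual rates before relying on this.
CREDITS_PER_SECOND = 50 / 5

def estimate_credits(duration_seconds: int) -> float:
    """Back-of-the-envelope credit estimate for a per_second-billed video."""
    return CREDITS_PER_SECOND * duration_seconds

print(estimate_credits(5))   # 50.0
print(estimate_credits(10))  # 100.0
```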
Step 4: Poll for the Video
Every generation request is asynchronous. Use the task ID to poll for the result:
```python
import time

task_id = task["id"]
while True:
    status = requests.get(
        f"https://api.evolink.ai/v1/tasks/{task_id}",
        headers={"Authorization": f"Bearer {os.environ['EVOLINK_API_KEY']}"},
    )
    result = status.json()
    if result["status"] == "completed":
        print(f"Video URL: {result['results'][0]}")
        break
    if result["status"] == "failed":
        print("Generation failed")
        break
    print(f"Progress: {result['progress']}%")
    time.sleep(5)
```
Important: Generated video URLs are valid for 24 hours. Download and save them to your own storage promptly.
For production use, prefer Webhooks (callback_url) over polling.
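A webhook-driven request is the same POST with a callback_url field added. A minimal sketch of building that request body, assuming callback_url is accepted alongside the normal generation fields (the receiver URL here is a placeholder — see Webhooks for the exact delivery payload):

```python
def build_generation_payload(prompt: str, callback_url: str,
                             model: str = "seedance-2.0-text-to-video",
                             duration: int = 5, quality: str = "720p") -> dict:
    """Build the JSON body for a webhook-driven generation request.

    Assumption: callback_url is passed alongside the normal fields and
    EvoLink POSTs task updates to it instead of requiring polling.
    """
    return {
        "model": model,
        "prompt": prompt,
        "duration": duration,
        "quality": quality,
        "callback_url": callback_url,
    }

payload = build_generation_payload(
    "A golden retriever running through a sunlit meadow",
    "https://example.com/evolink/webhook",  # hypothetical receiver URL
)
print(payload["callback_url"])
```

Send it with the same `requests.post` call as in Step 3, substituting this dict for the `json=` argument.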
Common Pitfalls
| Symptom | Cause | Fix |
|---|---|---|
| 400 invalid_request on model | You wrote "model": "seedance-2.0" | Use the full model ID, e.g. seedance-2.0-text-to-video |
| Error when passing image_urls | text-to-video doesn't accept media inputs | Use seedance-2.0-image-to-video |
| quality: "1080p" rejected | 1080p is not supported | Use 480p or 720p |
| Image > 30 MB or > 6000 px rejected | Exceeds image input limits | Compress to the allowed range (see image-to-video) |
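You can catch the image-limit pitfall from the last row client-side before uploading. A minimal check against the documented limits (30 MB, 6000 px) — reading the actual width/height of a file would need an image library such as Pillow, so this sketch just takes the numbers as arguments:

```python
MAX_BYTES = 30 * 1024 * 1024   # 30 MB image input limit
MAX_DIMENSION = 6000           # max width/height in pixels

def image_within_limits(size_bytes: int, width: int, height: int) -> bool:
    """Check an image against the documented input limits before uploading."""
    return (size_bytes <= MAX_BYTES
            and width <= MAX_DIMENSION
            and height <= MAX_DIMENSION)

print(image_within_limits(5 * 1024 * 1024, 1920, 1080))   # True
print(image_within_limits(40 * 1024 * 1024, 1920, 1080))  # False: over 30 MB
```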
Next Steps
- Models Overview — Full 6-model matrix and decision tree
- Authentication — Bearer token details
- Text-to-Video / Image-to-Video / Reference-to-Video
- Async Tasks — Polling endpoint details
- Webhooks — callback_url real-time notifications