Seedance 2.0
Seedance 2.0 creates and edits cinematic video from natural-language prompts, guided by text, image, video, and audio references. Reference media can preserve subject identity, composition, and motion while the model rewrites lighting, style, environments, weather, camera feel, or specific scene elements, keeping motion stable and visual quality polished.
Built on ByteDance Seed's unified multimodal architecture and delivered through a ready-to-use REST API, it is designed for fast production workflows: reliable performance, no cold starts, and affordable scaling.
Seedance 2.0
Seedance 2.0 multimodal video generation. Supports text, image, video, and audio references with configurable duration (4-15s), resolution (480p/720p/1080p), and aspect ratio.
Model capabilities
Reference-driven video generation for production workflows.
Seedance 2.0 is built for short-form generation where prompts and uploaded references work together: first frames, end frames, style images, motion clips, and rhythm references can all guide the final output.
Reference-guided creation
Use product shots, character images, first and last frames, or style references to control visual direction.
Motion and camera transfer
Attach short reference clips so the model can follow movement, composition, rhythm, and transition intent.
Audio-aware generation
Provide music or sound references, or enable synchronized audio generation for richer short-form output.
Multi-shot storytelling
Create short cinematic sequences with stronger scene flow, subject consistency, and production-ready pacing.
API workflow
One endpoint, async delivery, clean task status.
Submit a generation task
Send prompt, references, duration, aspect ratio, resolution, and callback URL to the video generation endpoint.
Track async progress
Use the returned task ID to poll status, or let the callback deliver completion and failure events.
Receive hosted output
Completed tasks return mirrored video URLs and metadata through the same response shape used by other models.
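The polling half of this workflow can be sketched as a small loop. This is an illustrative sketch, not the official client: the status values (`processing`, `succeeded`, `failed`) and the `output.video_url` field are assumptions about the response shape, and the HTTP call itself is injected so the loop works with any client library. Check the API reference for the real endpoint path and fields.

```python
# Hedged sketch of the "track async progress" step. Status names and the
# response shape are assumptions, not documented values.

def is_terminal(task: dict) -> bool:
    """A task stops being polled once it has succeeded or failed (assumed states)."""
    return task.get("status") in ("succeeded", "failed")

def poll(fetch_status, task_id: str, max_attempts: int = 60) -> dict:
    """Call fetch_status(task_id) until a terminal state or attempts run out.

    fetch_status is injected so the loop can be driven by any HTTP client
    (requests, httpx, ...) or, as below, by canned responses. A real loop
    would also sleep between attempts.
    """
    for _ in range(max_attempts):
        task = fetch_status(task_id)
        if is_terminal(task):
            return task
    raise TimeoutError(f"task {task_id} did not finish in {max_attempts} polls")

# Simulated run with canned responses instead of real HTTP calls:
responses = iter([
    {"status": "processing"},
    {"status": "processing"},
    {"status": "succeeded", "output": {"video_url": "https://cdn.example.com/clip.mp4"}},
])
result = poll(lambda task_id: next(responses), "task_123")
print(result["status"])  # succeeded
```

In production you would typically prefer the callback over polling, and fall back to polling only when the webhook is delayed or unavailable.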
curl -X POST https://api.aivideoapi.ai/v1/videos/generations \
-H "Authorization: Bearer sk-your-api-key" \
-H "Content-Type: application/json" \
-d '{
"model": "doubao-seedance-2.0",
"callback_url": "https://your-app.com/webhooks/video",
"input": {
"prompt": "A cinematic product reveal with smooth camera movement",
"image_urls": ["https://example.com/product.png"],
"duration": 5,
"aspect_ratio": "16:9",
"resolution": "720p",
"generate_audio": true
}
}'
Model options
Standard quality or faster iteration.
Use `doubao-seedance-2.0` for the full resolution range. Use `doubao-seedance-2.0-fast` when you want lower-cost drafts and rapid batch testing.
Seedance 2.0
Best for production output, 1080p clips, and final creative runs.
Seedance 2.0 Fast
Optimized for faster, lower-cost exploration at 480p and 720p.
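The choice between the two variants can be encoded as a small helper. This is a sketch under the constraints stated above (the fast variant supports 480p and 720p, the standard model the full range); the helper and its `draft` flag are illustrative, not part of the API.

```python
# Hedged sketch: pick a model ID from the target resolution and whether this
# is a draft run. Resolution sets mirror the description above; nothing here
# is an official SDK helper.
FAST_RESOLUTIONS = {"480p", "720p"}
FULL_RESOLUTIONS = {"480p", "720p", "1080p"}

def choose_model(resolution: str, draft: bool = False) -> str:
    if resolution not in FULL_RESOLUTIONS:
        raise ValueError(f"unsupported resolution: {resolution}")
    if draft and resolution in FAST_RESOLUTIONS:
        return "doubao-seedance-2.0-fast"
    return "doubao-seedance-2.0"

print(choose_model("720p", draft=True))  # doubao-seedance-2.0-fast
print(choose_model("1080p"))             # doubao-seedance-2.0
```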
What teams build
From reference assets to finished clips.
The same API supports lightweight text-to-video prompts and reference-heavy creative pipelines for apps, ad tools, and automated content systems.
Start with Seedance 2.0 in the same API you use for other video models.
Create an API key, submit a task, and route generated outputs into your product without building provider-specific plumbing.
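For reference, the request body from the curl example can be built in application code like this. The sketch uses only the fields shown in that example; any other parameters would need the API reference, and the default callback URL below is a placeholder.

```python
# Hedged sketch: assemble the same JSON body as the curl example. Field names
# come from that example; defaults are illustrative.
import json

def build_task(prompt, image_urls=None, duration=5, aspect_ratio="16:9",
               resolution="720p", generate_audio=True,
               callback_url="https://your-app.com/webhooks/video"):
    return {
        "model": "doubao-seedance-2.0",
        "callback_url": callback_url,
        "input": {
            "prompt": prompt,
            "image_urls": image_urls or [],
            "duration": duration,
            "aspect_ratio": aspect_ratio,
            "resolution": resolution,
            "generate_audio": generate_audio,
        },
    }

body = build_task("A cinematic product reveal with smooth camera movement",
                  image_urls=["https://example.com/product.png"])
print(json.dumps(body, indent=2))
```

POST the serialized body to the generation endpoint with your `Authorization: Bearer` header, exactly as the curl example shows.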