Seedance 1.5 Pro

ByteDance Seedance 1.5 Pro, 4-12s, 480p-1080p, native audio

Examples
Background

A subway train rumbles past, pages and the girl's hair flying in the wind. The camera begins a 360-degree orbit around her as the background gradually transforms from a subway station into a medieval cathedral, with Western fantasy-style music fading in. Letters tucked in her book flutter and swirl around her, and by the time the wind-blown pages settle, the entire environment has become a medieval cathedral.

Reference anything. Edit anything. Create anything.

Seedance 2.0 Multi-Modal AI Video Generator

Experience true multi-modal AI video creation. Combine images, videos, audio, and text to generate cinematic content with precise references, seamless video extension, and natural language control.

Multi-Modal Input · Reference Control · Video Extension · Built-in Audio · Watermark-Free Output

Key Features of Seedance 2.0

A controllable multi-modal model built for production-ready video workflows.

Multi-Modal Input

Upload up to 9 images, 3 videos (15s total), and 3 audio files. Combine multiple modalities with text prompts in one workflow.

Reference Anything

Reference motion, effects, camera movement, character appearance, scene composition, and sound from uploaded assets using natural language.

Extension & Segment Editing

Extend clips smoothly, merge scenes, or edit targeted segments while preserving continuity, style, and timing.

Consistency + Audio

Keep faces, clothing, and style stable across frames while generating contextual sound effects and background music.

Showcase

Explore stunning videos created with Seedance 2.0.

How to Create with Seedance 2.0

From idea to output in three practical steps.

1

Upload Your Assets

Upload images, videos, or audio files as references. You can combine up to 12 assets to define your creative intent.

2

Describe References in Natural Language

Write what to generate and what to reference, for example: "Use @image1 style with @video1 camera movement and sync to @audio1 beats."

3

Generate, Extend, and Iterate

Generate short clips, then extend or refine specific parts through iterative edits until the output meets your target.
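The reference syntax from step 2 can be expanded into a fuller prompt. The sketch below is a hypothetical illustration; only the `@image1`/`@video1`/`@audio1` token style is confirmed by the example above, and the specific assets and directives are invented for demonstration:

```text
Generate a 10-second product teaser.
Style: match the color grading and lighting of @image1.
Motion: follow the slow dolly-in camera movement from @video1.
Audio: cut scene transitions on the beats of @audio1.
Keep the character's face and outfit from @image2 consistent throughout.
```

Breaking the prompt into one directive per modality makes it easier to iterate: you can swap a single reference (say, replace @video1 with a different camera move) without rewriting the rest.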

Why Teams Choose Seedance 2.0

Designed for controllability, speed, and output consistency.

Reference-Driven Control

Unlike prompt-only workflows, Seedance 2.0 uses asset-level references for more deterministic outputs.

Consistent Visual Quality

Maintain stronger identity and style consistency across shots for storytelling, brand, and campaign work.

Flexible Credit Plans

Choose subscriptions or one-time credit packs, with transparent tiers for individuals and production teams.

Built for Iteration

Fast feedback loops help you test concepts, tune prompts, and ship polished videos faster.

What Creators Say About Seedance 2.0

Feedback from teams using Seedance 2.0 in real production workflows.

Reference control is the biggest upgrade for us. We matched lens rhythm from our film clip and got surprisingly consistent output on the first pass.

Marcus Rodriguez

Film Producer

I can map complex movement from a reference and apply it to a stylized character. The motion precision is far beyond what we used last year.

Jessica Liu

Animation Director

For teaching production concepts, this is incredibly practical. Students can test professional techniques and see usable results immediately.

Dr. Linda Park

Film Professor

Character consistency finally works across multiple shots. Face details, wardrobe, and overall styling stay stable through the whole sequence.

Emily Watson

Creative Director

Natural language reference instructions are very intuitive. We describe what to borrow and how to apply it, and the model understands quickly.

Mohammed Hassan

Digital Artist

We create travel content at higher volume now. Clip extension and camera continuity make series production much easier to maintain.

Sophie Laurent

Travel Content Creator

Frequently Asked Questions About Seedance 2.0

Core questions about capabilities, references, and workflow.

What is Seedance 2.0?

Seedance 2.0 is a multi-modal AI video model that supports image, video, audio, and text inputs for highly controllable video generation.

Start Creating with Seedance 2.0

Build controllable multi-modal AI videos for marketing, storytelling, and professional workflows.